| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
chloecchng/biomedical_cpgQA | 2023-10-09T10:43:39.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"biology",
"medical",
"region:us"
] | chloecchng | null | null | null | 0 | 21 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- biology
- medical
size_categories:
- 1K<n<10K
---
# Dataset Card for the Biomedical Domain
### Dataset Summary
This dataset was obtained from GitHub (https://github.com/mmahbub/cpgQA/blob/main/dataset/cpgQA-v1.0.csv?plain=1) and uploaded to Hugging Face for easier access while fine-tuning.
### Languages
English (en)
## Dataset Structure
The dataset is in CSV format, with each row representing a single question-answer pair. The following columns are included:
* **Title:** Categorises the question-answer pair.
* **Context:** The passage that provides the context for the question.
* **Question:** The question asked.
* **Answer:** The expected and appropriate answer to the question asked. |
keonroohparvar/music_vid_256 | 2023-10-10T04:40:44.000Z | [
"region:us"
] | keonroohparvar | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 93921778.139
num_examples: 9179
download_size: 80840102
dataset_size: 93921778.139
---
# Dataset Card for "music_vid_256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
simple_questions_v2 | 2022-11-18T21:46:14.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"region:us"
] | null | SimpleQuestions is a dataset for simple QA, which consists
of a total of 108,442 questions written in natural language by human
English-speaking annotators, each paired with a corresponding fact,
formatted as (subject, relationship, object), that provides the answer
but also a complete explanation. Facts have been extracted from the
knowledge base Freebase (freebase.com). We randomly shuffle these
questions and use 70% of them (75910) as the training set, 10%
(10845) as the validation set, and the remaining 20% (21687) as the test set. | @misc{bordes2015largescale,
title={Large-scale Simple Question Answering with Memory Networks},
author={Antoine Bordes and Nicolas Usunier and Sumit Chopra and Jason Weston},
year={2015},
eprint={1506.02075},
archivePrefix={arXiv},
primaryClass={cs.LG}
} | null | 1 | 20 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: simplequestions
pretty_name: SimpleQuestions
dataset_info:
- config_name: annotated
features:
- name: id
dtype: string
- name: subject_entity
dtype: string
- name: relationship
dtype: string
- name: object_entity
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 12376039
num_examples: 75910
- name: validation
num_bytes: 12376039
num_examples: 75910
- name: test
num_bytes: 12376039
num_examples: 75910
download_size: 423435590
dataset_size: 37128117
- config_name: freebase2m
features:
- name: id
dtype: string
- name: subject_entity
dtype: string
- name: relationship
dtype: string
- name: object_entities
sequence: string
splits:
- name: train
num_bytes: 1964037256
num_examples: 10843106
download_size: 423435590
dataset_size: 1964037256
- config_name: freebase5m
features:
- name: id
dtype: string
- name: subject_entity
dtype: string
- name: relationship
dtype: string
- name: object_entities
sequence: string
splits:
- name: train
num_bytes: 2481753516
num_examples: 12010500
download_size: 423435590
dataset_size: 2481753516
---
# Dataset Card for SimpleQuestions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://research.fb.com/downloads/babi/
- **Repository:** https://github.com/fbougares/TSAC
- **Paper:** https://research.fb.com/publications/large-scale-simple-question-answering-with-memory-networks/
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [Antoine Bordes](mailto:abordes@fb.com), [Nicolas Usunier](mailto:usunier@fb.com), [Sumit Chopra](mailto:spchopra@fb.com), [Jason Weston](mailto:jase@fb.com)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Here are some examples of questions and facts:
* What American cartoonist is the creator of Andy Lippincott?
Fact: (andy_lippincott, character_created_by, garry_trudeau)
* Which forest is Fires Creek in?
Fact: (fires_creek, containedby, nantahala_national_forest)
* What does Jimmy Neutron do?
Fact: (jimmy_neutron, fictional_character_occupation, inventor)
* What dietary restriction is incompatible with kimchi?
Fact: (kimchi, incompatible_with_dietary_restrictions, veganism)
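A minimal sketch of how such question–fact pairs can be inspected with 🤗 Datasets, assuming the `annotated` configuration and the field names declared in the metadata header above:
```python
from datasets import load_dataset

# Load the question/fact pairs (the `annotated` config from the metadata above).
ds = load_dataset("simple_questions_v2", "annotated", split="train")

# Each record pairs a natural-language question with its supporting Freebase fact.
example = ds[0]
print(example["question"])
print((example["subject_entity"], example["relationship"], example["object_entity"]))
```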
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
tep_en_fa_para | 2022-11-03T16:08:03.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:fa",
"license:unknown",
"region:us"
] | null | TEP: Tehran English-Persian parallel corpus. The first free Eng-Per corpus, provided by the Natural Language and Text Processing Laboratory, University of Tehran. | @inproceedings{pilevar2011tep,
    title = {TEP: Tehran English-Persian Parallel Corpus},
    author = {Pilevar, M. T. and Faili, H. and Pilevar, A. H.},
    booktitle = {Proceedings of the 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2011)},
    year = {2011}
} | null | 1 | 20 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- fa
license:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: TepEnFaPara
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- fa
config_name: en-fa
splits:
- name: train
num_bytes: 58735557
num_examples: 612087
download_size: 16353318
dataset_size: 58735557
---
# Dataset Card for tep_en_fa_para
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TEP: Tehran English-Persian parallel corpus](http://opus.nlpl.eu/TEP.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
TEP: Tehran English-Persian parallel corpus. The first free Eng-Per corpus, provided by the Natural Language and Text Processing Laboratory, University of Tehran.
### Supported Tasks and Leaderboards
The underlying task is machine translation for the English–Persian language pair.
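As a minimal sketch (based on the `en-fa` configuration and the `translation` feature declared in the metadata above), a sentence pair can be read as follows:
```python
from datasets import load_dataset

# The metadata above lists a single "en-fa" configuration with one train split.
ds = load_dataset("tep_en_fa_para", "en-fa", split="train")

# Each example holds a {"en": ..., "fa": ...} translation pair.
pair = ds[0]["translation"]
print(pair["en"])
print(pair["fa"])
```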
### Languages
English, Persian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
M. T. Pilevar, H. Faili, and A. H. Pilevar, “TEP: Tehran English-Persian Parallel Corpus”, in proceedings of 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2011).
### Contributions
Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset. |
GroNLP/ik-nlp-22_slp | 2023-02-01T18:25:21.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unk... | GroNLP | Paragraphs from the Speech and Language Processing book (3ed) by Jurafsky and Martin extracted semi-automatically
from Chapters 2 to 11 of the original book draft. | @book{slp3ed-iknlp2022,
author = {Jurafsky, Daniel and Martin, James},
year = {2021},
month = {12},
pages = {1--235, 1--19},
title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition},
volume = {3}
} | null | 0 | 20 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
- summarization
- text-retrieval
pretty_name: slp3ed-iknlp2022
tags:
- question-generation
---
# Dataset Card for IK-NLP-22 Speech and Language Processing
## Table of Contents
- [Dataset Card for IK-NLP-22 Speech and Language Processing](#dataset-card-for-ik-nlp-22-speech-and-language-processing)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Projects](#projects)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Paragraphs Configuration](#paragraphs-configuration)
- [Questions Configuration](#questions-configuration)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Source:** [Stanford](https://web.stanford.edu/~jurafsky/slp3/)
- **Point of Contact:** [Gabriele Sarti](mailto:ik-nlp-course@rug.nl)
### Dataset Summary
This dataset contains chapters extracted from the Speech and Language Processing book (3ed draft of January 2022) by Jurafsky and Martin via a semi-automatic procedure (see below for additional details). Moreover, a small set of conceptual questions associated with each chapter is provided alongside possible answers.
Only the content of chapters 2 to 11 of the book draft is provided, since these are the ones relevant to the contents of the 2022 edition of the Natural Language Processing course at the Information Science Master's Degree (IK) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) with the assistance of [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti).
*The Speech and Language Processing book was made freely available by the authors [Dan Jurafsky](http://web.stanford.edu/people/jurafsky/) and [James H. Martin](http://www.cs.colorado.edu/~martin/) on the [Stanford University website](https://web.stanford.edu/~jurafsky/slp3/). The present dataset was created for educational purposes, and is based on the draft of the 3rd edition of the book accessed on December 29th, 2021. All rights of the present contents are attributed to the original authors.*
### Projects
See the course page for a description of possible research directions.
### Languages
The language data of Speech and Language Processing is in English (BCP-47 `en`)
## Dataset Structure
### Data Instances
The dataset contains two configurations: `paragraphs` (default), containing the full set of parsed paragraphs associated to the respective chapter and sections, and `questions`, containing a small subset of example questions matched with the relevant paragraph, and with the answer span extracted.
#### Paragraphs Configuration
The `paragraphs` configuration contains all the paragraphs of the selected book chapters, each associated with the respective chapter, section and subsection. An example from the `train` split of the `paragraphs` config is provided below. The example belongs to section 2.3 but not to a subsection, so the `n_subsection` and `subsection` fields are empty strings.
```json
{
"n_chapter": "2",
"chapter": "Regular Expressions",
"n_section": "2.3",
"section": "Corpora",
"n_subsection": "",
"subsection": "",
"text": "It's also quite common for speakers or writers to use multiple languages in a single communicative act, a phenomenon called code switching. Code switching (2.2) Por primera vez veo a @username actually being hateful! it was beautiful:)"
}
```
The text is provided as-is, without further preprocessing or tokenization.
#### Questions Configuration
The `questions` configuration contains a small subset of questions, the top retrieved paragraph relevant to the question and the answer spans. An example from the `test` split of the `questions` config is provided below.
```json
{
"chapter": "Regular Expressions",
"section": "Regular Expressions",
"subsection": "Basic Regular Expressions",
"question": "What is the meaning of the Kleene star in Regex?",
"paragraph": "This language consists of strings with a b, followed by at least two a's, followed by an exclamation point. The set of operators that allows us to say things like \"some number of as\" are based on the asterisk or *, commonly called the Kleene * (gen-Kleene * erally pronounced \"cleany star\"). The Kleene star means \"zero or more occurrences of the immediately previous character or regular expression\". So /a*/ means \"any string of zero or more as\". This will match a or aaaaaa, but it will also match Off Minor since the string Off Minor has zero a's. So the regular expression for matching one or more a is /aa*/, meaning one a followed by zero or more as. More complex patterns can also be repeated. So /[ab]*/ means \"zero or more a's or b's\" (not \"zero or more right square braces\"). This will match strings like aaaa or ababab or bbbb.",
"answer": "The Kleene star means \"zero or more occurrences of the immediately previous character or regular expression\""
}
```
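A short sketch of loading the two configurations described above (split names follow the table in the next section):
```python
from datasets import load_dataset

# Book paragraphs with chapter/section metadata (train split only).
paragraphs = load_dataset("GroNLP/ik-nlp-22_slp", "paragraphs", split="train")

# Example questions with the retrieved paragraph and answer span (test split only).
questions = load_dataset("GroNLP/ik-nlp-22_slp", "questions", split="test")

print(paragraphs[0]["section"], "->", paragraphs[0]["text"][:80])
print(questions[0]["question"], "->", questions[0]["answer"])
```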
### Data Splits
| config| train| test|
|------------:|-----:|----:|
|`paragraphs` | 1697 | - |
|`questions` | - | 59 |
### Dataset Creation
The contents of the Speech and Language Processing book PDF were extracted using the [PDF to S2ORC JSON Converter](https://github.com/allenai/s2orc-doc2json) by AllenAI. The texts extracted by the converter were then manually cleaned to remove end-of-chapter exercises and other irrelevant content (e.g. tables, TikZ figures, etc.). Some issues in the parsed content were preserved in the final version to maintain a naturalistic setting for the associated projects, promoting the use of data filtering heuristics for students.
The question-answer pairs were created manually by Gabriele Sarti.
## Additional Information
### Dataset Curators
For problems on this 🤗 Datasets version, please contact us at [ik-nlp-course@rug.nl](mailto:ik-nlp-course@rug.nl).
### Licensing Information
Please refer to the authors' websites for licensing information.
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@book{slp3ed-iknlp2022,
author = {Jurafsky, Daniel and Martin, James},
year = {2021},
month = {12},
pages = {1--235, 1--19},
title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition},
volume = {3}
}
``` |
sia-precision-education/pile_python | 2022-01-25T01:24:47.000Z | [
"region:us"
] | sia-precision-education | null | null | null | 2 | 20 | Entry not found |
andreamorgar/spanish_poetry | 2022-03-30T12:39:22.000Z | [
"license:gpl-3.0",
"region:us"
] | andreamorgar | null | null | null | 2 | 20 | ---
license: gpl-3.0
---
# Spanish Poetry Dataset
There are not many poetry datasets, and for the Spanish language the situation is even worse! With this dataset, we want to give access to quality Spanish data for NLP tasks.
It is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, carry out a wider analysis, and so on!
# Authors
Andrea Morales (@andreamorgar) and Miguel López (@wizmik12)
### Motivation
This dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: https://github.com/andreamorgar/poesIA
### Content
Data was acquired in July 2020 from the poetry webpage www.poemas-del-alma.com, which provides a large amount of poetry in Spanish. Data was scraped using the Python library BeautifulSoup. For each poem in www.poemas-del-alma.com, we collected the name of the poet, the poem, and the poem title. The scraping script is available at https://github.com/andreamorgar/poesIA/blob/master/poetry-scrapper.py.
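To illustrate the approach, a minimal sketch of such a scraping loop is shown below; the tag and class names are hypothetical placeholders, and the actual selectors for www.poemas-del-alma.com are in the linked script.
```python
import requests
from bs4 import BeautifulSoup

def scrape_poem(url):
    # Illustrative only: the HTML structure (h1 title, poet link, poem div) is assumed,
    # not taken from the real site; see poetry-scrapper.py for the actual logic.
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    title = soup.find("h1").get_text(strip=True)
    author = soup.find("a", class_="poet-name").get_text(strip=True)
    poem = soup.find("div", class_="poem-text").get_text("\n", strip=True)
    return {"author": author, "title": title, "content": poem}
```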
### Languages
Spanish
### Acknowledgements
We wouldn't be here without www.poemas-del-alma.com, which provides the poetry collection in this dataset. |
taln-ls2n/taln-archives | 2022-09-23T07:58:07.000Z | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"language:fr",
"language:en",
"license:cc-by-4.0",
"region:us"
] | taln-ls2n | TALN Archives benchmark dataset for keyphrase extraction and generation. | @inproceedings{boudin-2013-taln,
title = "{TALN} Archives : a digital archive of {F}rench research articles in Natural Language Processing ({TALN} Archives : une archive num{\'e}rique francophone des articles de recherche en Traitement Automatique de la Langue) [in {F}rench]",
author = "Boudin, Florian",
booktitle = "Proceedings of TALN 2013 (Volume 2: Short Papers)",
month = jun,
year = "2013",
address = "Les Sables d{'}Olonne, France",
publisher = "ATALA",
url = "https://aclanthology.org/F13-2001",
pages = "507--514",
} | null | 3 | 20 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 1K<n<10K
pretty_name: TALN-Archives
---
# TALN-Archives Benchmark Dataset for Keyphrase Generation
## About
TALN-Archives is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 1207 abstracts of scientific papers in French collected from the [TALN Archives](http://talnarchives.atala.org/).
Keyphrases were annotated by authors in an uncontrolled setting (that is, not limited to thesaurus entries).
English translations of title/abstract/keyphrases are also available for a subset of the documents (456 fully- and 719 partially-translated documents), allowing to experiment with cross-lingual / multilingual keyphrase generation.
Details about the dataset can be found in the original paper [(Boudin, 2013)][boudin-2013].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. <u>P</u>resent reference keyphrases are also ordered by their order of apparition in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (the Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
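As a rough illustration of this matching step (a simplified sketch, not the exact `prmu.py` logic, and omitting the special hyphen rule), a keyphrase can be tested for presence after stemming as follows:
```python
import spacy
from nltk.stem.snowball import SnowballStemmer

nlp = spacy.load("fr_core_news_sm")      # French spacy model used for tokenization
stemmer = SnowballStemmer("french")      # Snowball stemmer from nltk

def stem_tokens(text):
    return [stemmer.stem(token.text.lower()) for token in nlp(text)]

def is_present(keyphrase, title, abstract):
    # A keyphrase is "present" if its stemmed tokens occur contiguously
    # in the stemmed concatenation of title and abstract.
    doc, kp = stem_tokens(title + " " + abstract), stem_tokens(keyphrase)
    return any(doc[i:i + len(kp)] == kp for i in range(len(doc) - len(kp) + 1))
```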
## Content and statistics
The dataset contains the following test split:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Test | 1207 | 138.3 | 4.12 | 53.83 | 12.32 | 21.69 | 12.16 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **translation**: translations of title, abstract and keyphrases in English if available.
## References
- (Boudin, 2013) Florian Boudin. 2013.
[TALN Archives : a digital archive of French research articles in Natural Language Processing (TALN Archives : une archive numérique francophone des articles de recherche en Traitement Automatique de la Langue) [in French]][boudin-2013].
In Proceedings of TALN 2013 (Volume 2: Short Papers), pages 507–514, Les Sables d’Olonne, France. ATALA.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[boudin-2013]: https://aclanthology.org/F13-2001/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ |
DFKI-SLT/wikitext_linked | 2022-07-04T06:09:56.000Z | [
"task_categories:fill-mask",
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:masked-language-modeling",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:lemmatization",
"task_ids:parsing",
"task_ids:entity-linking-classification",
"... | DFKI-SLT | The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. Dependency Relations, POS, NER tags are marked with trankit and
entities are linked with entity-fishing.
The dataset is available under the Creative Commons Attribution-ShareAlike License. | @misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{nguyen2021trankit,
title={Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing},
author={Nguyen, Minh Van and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Nguyen, Thien Huu},
booktitle="Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
year={2021}
}
@misc{entity-fishing,
title = {entity-fishing},
howpublished = {\\url{https://github.com/kermitt2/entity-fishing}},
publisher = {GitHub},
year = {2016--2022},
archivePrefix = {swh},
eprint = {1:dir:cb0ba3379413db12b0018b7c3af8d0d2d864139c}
} | null | 4 | 20 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: wikitext_linked
size_categories:
- 1M<n<10M
source_datasets:
- extended|wikitext
task_categories:
- fill-mask
- token-classification
- text-classification
task_ids:
- masked-language-modeling
- named-entity-recognition
- part-of-speech
- lemmatization
- parsing
- entity-linking-classification
---
# Dataset Card for wikitext_linked
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** -
- **Repository:** [https://github.com/GabrielKP/svo/](https://github.com/GabrielKP/svo/)
- **Paper:** -
- **Leaderboard:** -
- **Point of Contact:** [gabriel.kressin@dfki.de](mailto:gabriel.kressin@dfki.de)
### Dataset Summary
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from
the set of verified Good and Featured articles on Wikipedia. Dependency Relations, POS, NER tags
are marked with [trankit](https://github.com/nlp-uoregon/trankit), entities are linked with
[entity-fishing](https://nerd.readthedocs.io/en/latest/index.html), which also tags another field
of NER tags. The dataset is available under the Creative Commons Attribution-ShareAlike License.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and
WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary
and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is
composed of full articles, the dataset is well suited for models that can take advantage of long
term dependencies.
### Supported Tasks and Leaderboards
- masked-language-modeling
- named-entity-recognition
- part-of-speech
- lemmatization
- parsing
- entity-linking-classification
### Languages
English.
## Dataset Structure
### Data Instances
#### wikitext2
- **Size of downloaded dataset files:** 27.3 MB
- **Size of the generated dataset:** 197.2 MB
- **Total amount of disk used:** 197.2 MB
An example of 'validation' looks as follows.
```json
{
'text': 'It is closely related to the American lobster , H. americanus .',
'original_id': 3,
'tok_span': [[0, 0], [0, 2], [3, 5], [6, 13], [14, 21], [22, 24], [25, 28], [29, 37], [38, 45], [46, 47], [48, 50], [51, 61], [62, 63]],
'tok_upos': ['root', 'PRON', 'AUX', 'ADV', 'ADJ', 'ADP', 'DET', 'ADJ', 'NOUN', 'PUNCT', 'PROPN', 'PROPN', 'PUNCT'],
'tok_xpos': ['root', 'PRP', 'VBZ', 'RB', 'JJ', 'IN', 'DT', 'JJ', 'NN', ',', 'NNP', 'NNP', '.'],
'tok_dephead': [0, 4, 4, 4, 0, 8, 8, 8, 4, 8, 8, 10, 4],
'tok_deprel': ['root', 'nsubj', 'cop', 'advmod', 'root', 'case', 'det', 'amod', 'obl', 'punct', 'appos', 'flat', 'punct'],
'tok_lemma': [None, 'it', 'be', 'closely', 'related', 'to', 'the', 'american', 'lobster', ',', 'H.', 'americanus', '.'],
'tok_ner': [None, 'O', 'O', 'O', 'O', 'O', 'O', 'S-MISC', 'O', 'O', 'O', 'O', 'O'],
'ent_span': [[29, 45]],
'ent_wikipedia_external_ref': ['377397'],
'ent_ner': [None],
'ent_domains': [['Enterprise']],
}
```
#### wikitext103
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 7.82 GB
- **Total amount of disk used:** 7.82 GB
An example of 'train' looks as follows.
```json
{
'text': 'Vision for the PlayStation Portable .',
'original_id': 3,
'tok_span': [[0, 0], [0, 6], [7, 10], [11, 14], [15, 26], [27, 35], [36, 37]],
'tok_upos': ['root', 'NOUN', 'ADP', 'DET', 'PROPN', 'PROPN', 'PUNCT'],
'tok_xpos': ['root', 'NN', 'IN', 'DT', 'NNP', 'NNP', '.'],
'tok_dephead': [0, 0, 5, 5, 5, 1, 1],
'tok_deprel': ['root', 'root', 'case', 'det', 'compound', 'nmod', 'punct'],
'tok_lemma': [None, 'vision', 'for', 'the', 'PlayStation', 'Portable', '.'],
'tok_ner': [None, 'O', 'O', 'O', 'B-MISC', 'E-MISC', 'O'],
'ent_span': [[15, 35]],
'ent_wikipedia_external_ref': ['619009'],
'ent_ner': [None],
'ent_domains': [['Electronics', 'Computer_Science']]
}
```
Use following code to print the examples nicely:
```py
def print_tokens_entities(example):
text = example['text']
print(
"Text:\n"
f" {text}"
"\nOrig-Id: "
f"{example['original_id']}"
"\nTokens:"
)
iterator = enumerate(zip(
example["tok_span"],
example["tok_upos"],
example["tok_xpos"],
example["tok_ner"],
example["tok_dephead"],
example["tok_deprel"],
example["tok_lemma"],
))
print(f" Id | {'token':12} | {'upos':8} | {'xpos':8} | {'ner':8} | {'deph':4} | {'deprel':9} | {'lemma':12} | Id")
print("---------------------------------------------------------------------------------------------------")
for idx, (tok_span, upos, xpos, ner, dephead, deprel, lemma) in iterator:
print(f" {idx:3} | {text[tok_span[0]:tok_span[1]]:12} | {upos:8} | {xpos:8} | {str(ner):8} | {str(dephead):4} | {deprel:9} | {str(lemma):12} | {idx}")
iterator = list(enumerate(zip(
example.get("ent_span", []),
example.get("ent_wikipedia_external_ref", []),
example.get("ent_ner", []),
example.get("ent_domains", []),
)))
if len(iterator) > 0:
print("Entities")
print(f" Id | {'entity':21} | {'wiki_ref':7} | {'ner':7} | domains")
print("--------------------------------------------------------------------")
for idx, ((start, end), wiki_ref, ent_ner, ent_domains) in iterator:
print(f" {idx:3} | {text[start:end]:21} | {str(wiki_ref):7} | {str(ent_ner):7} | {ent_domains}")
```
### Data Fields
The data fields are the same among all splits.
* text: string feature.
* original_id: int feature. Mapping to index within original wikitext dataset.
* tok_span: sequence of (int, int) tuples. Denotes token spans (start inclusive, end exclusive)
within each sentence.
**Note that each sentence includes an artificial root node to align dependency relations.**
* tok_upos: string feature. [Universal Dependency POS tag](https://universaldependencies.org/)
tags. Aligned with tok_span. Root node has tag "root".
* tok_xpos: string feature. [XPOS POS tag](https://trankit.readthedocs.io/en/latest/overview.html#token-list).
Aligned with tok_span. Root node has tag "root".
* tok_dephead: int feature.
[Universal Dependency Head Node](https://universaldependencies.org/introduction.html). Int refers
to tokens in tok_span. Root node has head `0` (itself).
* tok_deprel: [Universal Dependency Relation Description](https://universaldependencies.org/introduction.html).
Refers to the relation between this token and head token. Aligned with tok_span. Root node has
dependency relation "root" to itself.
* tok_lemma: string feature. Lemma of token. Aligned with tok_span.
* tok_ner: string feature. NER tag of token. Marked in BIOES scheme (e.g. S-MISC, B-LOC, ...)
Aligned with tok_span. Root node has NER tag `None`.
* ent_span: sequence of (int, int) tuples. Denotes entities found by entity-fishing
(start inclusive, end exclusive).
* ent_wikipedia_external_ref: string feature. External reference to the Wikipedia page. You can
access the Wikipedia page via the url `https://en.wikipedia.org/wiki?curid=<ent_wikipedia_external_ref>`.
All entities either have this field, or the `ent_ner` field, but not both. An empty field is
denoted by the string `None`. Aligned with ent_span.
* ent_ner: string feature. Denotes NER tags. An empty field is denoted by the string `None`.
Aligned with ent_span.
* ent_domains: sequence of string. Denotes domains of the entity. Can be an empty sequence. Aligned with
ent_span.
### Data Splits
| name | train |validation| test|
|-------------------|------:|---------:|----:|
|wikitext103 |4076530| 8607|10062|
|wikitext2 | 82649| 8606|10062|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[https://huggingface.co/datasets/wikitext](https://huggingface.co/datasets/wikitext)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
1. Started with `wikitext2-raw-v1` and `wikitext103-raw-v1` from [wikitext](https://huggingface.co/datasets/wikitext)
2. Ran datasets through Trankit. Marked all fields starting with `tok`.
In this step, the texts have been split into sentences. To retain the original text sections
you can accumulate over `original_id` (examples are in order), as sketched after this list.
3. Ran datasets through entity-fishing. Marked all fields starting with `ent`.
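A short sketch of the accumulation mentioned in step 2 (assuming the `wikitext2` configuration named above):
```python
from itertools import groupby
from datasets import load_dataset

ds = load_dataset("DFKI-SLT/wikitext_linked", "wikitext2", split="validation")

# Sentences sharing the same `original_id` come from the same original
# wikitext example, so concatenating them restores the original sections.
sections = {
    original_id: " ".join(example["text"] for example in group)
    for original_id, group in groupby(ds, key=lambda example: example["original_id"])
}
print(next(iter(sections.items())))
```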
#### Who are the annotators?
Machines powered by [DFKI](https://www.dfki.de/web).
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
### Citation Information
Please cite the original creators of wikitext, and the great people
developing trankit and entity-fishing.
```
@misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{nguyen2021trankit,
title={Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing},
author={Nguyen, Minh Van and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Nguyen, Thien Huu},
booktitle="Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
year={2021}
}
@misc{entity-fishing,
title = {entity-fishing},
howpublished = {\\url{https://github.com/kermitt2/entity-fishing}},
publisher = {GitHub},
year = {2016--2022},
archivePrefix = {swh},
eprint = {1:dir:cb0ba3379413db12b0018b7c3af8d0d2d864139c}
}
```
### Contributions
Thanks to [@GabrielKP](https://github.com/GabrielKP) for adding this dataset.
|
sil-ai/bloom-speech | 2023-02-15T13:28:59.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ajz",
"language:bam",
"language:bi",
... | sil-ai | Bloom-speech is a dataset of text aligned speech from bloomlibrary.org. This dataset contains over 50 languages including many low-resource languages. This dataset should be useful for training and/or testing speech-to-text or text-to-speech/ASR models. | @InProceedings{huggingface:bloom-speech,
title = {bloom-speech},
author={Joshua Nemecek, Colin Leong, and Daniel Whitenack
},
year={2022}
} | null | 14 | 20 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ajz
- bam
- bi
- bis
- bjn
- bm
- boz
- bze
- bzi
- cak
- ceb
- chd
- chp
- clo
- csw
- en
- eng
- es
- fli
- fr
- fra
- gu
- guj
- hbb
- hi
- hin
- id
- ind
- jmx
- jra
- kan
- kbq
- kek
- kjb
- kmu
- kn
- kqr
- kwu
- loh
- mai
- mal
- mam
- mar
- ml
- mle
- mr
- my
- mya
- myk
- nas
- nsk
- nsn
- oj
- oji
- omw
- por
- pt
- quc
- sdk
- snk
- spa
- stk
- ta
- taj
- tam
- tbj
- tdc
- tgl
- tl
- tpi
- tuz
- tzj
license:
- cc-by-nc-4.0
- cc-by-sa-4.0
- cc-by-nc-nd-4.0
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-to-speech
paperswithcode_id: null
pretty_name: BloomSpeech
extra_gated_prompt: |-
One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled with a `cc-by-sa` license). A "license" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample.
These [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that:
1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license). If you would like to ask about commercial uses of this dataset, please [email us](mailto:sj@derivation.co).
2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material.
In addition to the above implied by Creative Commons and when clicking "Access Repository" below, you agree:
1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.
2. That your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
<!-- - [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions) -->
## Dataset Description
- **Homepage:** [SIL AI](https://ai.sil.org/)
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [Bloom Library](https://bloomlibrary.org/)
 
## Dataset Summary
**Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data is developed specifically for the automatic speech recognition and speech-to-text tasks. It includes data from 56 languages across 18 language families. There is a mean of 458 and median of 138 audio records per language.
**Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know!
**Note**: Although data from [bloom-lm](https://huggingface.co/datasets/sil-ai/bloom-lm) was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), the dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉
## Languages
Of the 500+ languages listed at BloomLibrary.org, there are 56 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
ajz, bam, bis, bjn, boz, bze, bzi, cak, ceb, chd, chp, clo, csw, eng, fli, fra, guj, hbb, hin, ind, jmx, jra, kan, kbq, kek, kjb, kmu, kqr, kwu, loh, mai, mal, mam, mar, mle, mya, myk, nas, nsk, nsn, oji, omw, por, quc, sdk, snk, spa, stk, taj, tam, tbj, tdc, tgl, tpi, tuz, tzj
## Dataset Statistics
Some of the languages included in the dataset have only a few audio cuts; these are not split between training, validation, and test. For those with higher numbers of available cuts, we include the following numbers of cuts in each split:
| ISO 639-3 | Name | Train Cuts | Validation Cuts | Test Cuts |
|:------------|:------------------------------|----------------:|---------------------:|---------------:|
| ajz | Amri Karbi | 135 | 34 | 50 |
| bam | Bamanankan | 203 | 50 | 50 |
| bis | Bislama | 0 | 0 | 46 |
| bjn | Banjar | 80 | 20 | 50 |
| boz | Bozo, Tieyaxo | 427 | 50 | 52 |
| bze | Bozo, Jenaama | 101 | 26 | 50 |
| bzi | Bisu | 1363 | 50 | 157 |
| cak | Kaqchikel | 989 | 50 | 115 |
| ceb | Cebuano | 553 | 50 | 67 |
| chd | Chontal, Highland Oaxaca | 205 | 50 | 50 |
| chp | Dene | 0 | 0 | 14 |
| clo | Chontal, Lowland Oaxaca | 120 | 30 | 50 |
| csw | Cree, Swampy | 0 | 0 | 45 |
| eng | English | 4143 | 48 | 455 |
| fli | Fali Muchella | 59 | 15 | 50 |
| fra | French | 261 | 49 | 50 |
| guj | Gujarati | 27 | 0 | 48 |
| hbb | Nya Huba | 558 | 50 | 67 |
| hin | Hindi | 62 | 15 | 49 |
| ind | Indonesian | 0 | 0 | 14 |
| jmx | Mixtec, Western Juxtlahuaca | 39 | 0 | 50 |
| jra | Jarai | 203 | 50 | 50 |
| kan | Kannada | 281 | 43 | 50 |
| kbq | Kamano | 0 | 0 | 27 |
| kek | Q’eqchi’ | 1676 | 49 | 190 |
| kjb | Q’anjob’al | 770 | 50 | 91 |
| kmu | Kanite | 0 | 0 | 28 |
| kqr | Kimaragang | 0 | 0 | 18 |
| kwu | Kwakum | 58 | 15 | 50 |
| loh | Narim | 0 | 0 | 15 |
| mai | Maithili | 0 | 0 | 11 |
| mal | Malayalam | 125 | 31 | 44 |
| mam | Mam | 1313 | 50 | 151 |
| mar | Marathi | 25 | 0 | 49 |
| mle | Manambu | 0 | 0 | 8 |
| mya | Burmese | 321 | 50 | 50 |
| myk | Sénoufo, Mamara | 669 | 50 | 80 |
| nas | Naasioi | 13 | 0 | 50 |
| nsk | Naskapi | 0 | 0 | 15 |
| nsn | Nehan | 0 | 0 | 31 |
| oji | Ojibwa | 0 | 0 | 25 |
| omw | Tairora, South | 0 | 0 | 34 |
| por | Portuguese | 0 | 0 | 34 |
| quc | K’iche’ | 1460 | 50 | 167 |
| sdk | Sos Kundi | 312 | 50 | 50 |
| snk | Soninke | 546 | 50 | 66 |
| spa | Spanish | 1816 | 50 | 207 |
| stk | Aramba | 180 | 45 | 50 |
| taj | Tamang, Eastern | 0 | 0 | 24 |
| tam | Tamil | 159 | 39 | 46 |
| tbj | Tiang | 0 | 0 | 24 |
| tdc | Ẽpẽra Pedea | 0 | 0 | 19 |
| tgl | Tagalog | 352 | 48 | 50 |
| tpi | Tok Pisin | 1061 | 50 | 123 |
| tuz | Turka | 48 | 13 | 50 |
| tzj | Tz’utujil | 0 | 0 | 41 |
## Dataset Structure
### Data Instances
The examples look like this for Hindi:
```
from datasets import load_dataset
# Specify the language code.
dataset = load_dataset('sil-ai/bloom-speech', 'hin', use_auth_token=True) #note you must login to HuggingFace via the huggingface hub or huggingface cli
# A data point consists of transcribed audio in the specified language code.
# To see a transcription:
print(dataset['train']['text'][0])
```
This would produce an output:
```
चित्र: बो और शैम्पू की बोतल
```
Whereas if you wish to gather all the text for a language you may use this:
```
dataset['train']['text']
```
### Data Fields
The metadata fields are below. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).
- **file**: the local path to the audio file
- **audio**: a dictionary with a path, array, and sampling_rate, as is standard for Hugging Face audio (see the example after this list)
- **text**: the transcribed text
- **book**: title of the book, e.g. "बो मेस्सी और शैम्पू".
- **instance**: unique ID for each book/translation assigned by Bloom Library. For example the Hindi version of 'बो मेस्सी और शैम्पू' is 'eba60f56-eade-4d78-a66f-f52870f6bfdd'
- **license**: specific license used, e.g. "cc-by-sa" for "Creative Commons, by attribution, share-alike".
- **credits**: attribution of contributors as described in the book metadata, including authors, editors, etc. if available
- **original_lang_tag**: the language tag originally assigned in Bloom Library. This may include information on script type, etc.
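As a small sketch of working with the `audio` field described above (standard 🤗 Datasets audio decoding is assumed):
```python
from datasets import load_dataset

dataset = load_dataset("sil-ai/bloom-speech", "hin", use_auth_token=True)

# Accessing the `audio` field decodes the file into a waveform and its sampling rate.
sample = dataset["train"][0]
waveform = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]
print(sample["text"], f"({len(waveform) / sampling_rate:.1f} s)")
```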
### Data Splits
All languages include a train, validation, and test split. However, for languages with a small number of stories, some of these splits may be empty. In such cases, we recommend using the data for testing only or for zero-shot experiments.
## Changelog
- **26 September 2022** Page initiated |
Nexdata/Scene_Noise_Data_by_Voice_Recorder | 2023-08-30T10:36:39.000Z | [
"region:us"
] | Nexdata | null | null | null | 0 | 20 | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Scene_Noise_Data_by_Voice_Recorder
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/25?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The data is multi-scene noise data, covering subway, supermarket, restaurant, road, airport, exhibition hall, high-speed rail, highway, city road, cinema and other daily life scenes. The data is recorded with the professional recorder Sony ICD-UX560F, collected at a high sampling rate in a two-channel format, and the recordings are clear and natural. The valid data amounts to 101 hours.
For more details, please refer to the link: https://www.nexdata.ai/datasets/25?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, noisy-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
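For example, scene noise of this kind is commonly mixed into clean speech to create training data for noisy-speech recognition. A generic augmentation sketch (not part of the dataset; it assumes 1-D float waveforms at the same sampling rate) could look like this:
```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Additively mix background noise into clean speech at a target SNR in dB."""
    noise = np.resize(noise, speech.shape)   # loop or trim the noise to the speech length
    speech_power = np.mean(speech ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```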
### Languages
Noise Data
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
Bingsu/Gameplay_Images | 2022-08-26T05:31:58.000Z | [
"task_categories:image-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | Bingsu | null | null | null | 1 | 20 | ---
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Gameplay Images
size_categories:
- 1K<n<10K
task_categories:
- image-classification
---
# Gameplay Images
## Dataset Description
- **Homepage:** [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images)
- **Download Size** 2.50 GiB
- **Generated Size** 1.68 GiB
- **Total Size** 4.19 GiB
A dataset from [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images).
This is a dataset of gameplay images from 10 of the world's most famous video games.
These include
- Among Us
- Apex Legends
- Fortnite
- Forza Horizon
- Free Fire
- Genshin Impact
- God of War
- Minecraft
- Roblox
- Terraria
There are 1000 images per class and all are sized `640 x 360`. They are in the `.png` format.
This dataset was made by saving frames every few seconds from famous gameplay videos on YouTube (a sketch of this approach is shown below).
※ This dataset was uploaded in January 2022. Game content updated after that will not be included.
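A rough sketch of this kind of frame extraction (an illustration with OpenCV, not the authors' actual script):
```python
import cv2

def extract_frames(video_path, out_prefix, every_seconds=5):
    # Save one frame every `every_seconds` seconds, resized to the dataset's 640 x 360.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    step = max(int(fps * every_seconds), 1)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:05d}.png", cv2.resize(frame, (640, 360)))
            saved += 1
        index += 1
    cap.release()
    return saved
```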
### License
CC-BY-4.0
## Dataset Structure
### Data Instance
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/Gameplay_Images")
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 10000
})
})
```
```python
>>> dataset["train"].features
{'image': Image(decode=True, id=None),
'label': ClassLabel(num_classes=10, names=['Among Us', 'Apex Legends', 'Fortnite', 'Forza Horizon', 'Free Fire', 'Genshin Impact', 'God of War', 'Minecraft', 'Roblox', 'Terraria'], id=None)}
```
### Data Size
download: 2.50 GiB<br>
generated: 1.68 GiB<br>
total: 4.19 GiB
### Data Fields
- image: `Image`
- A `PIL.Image.Image object` containing the image. size=640x360
- Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: an int classification label.
Class Label Mappings:
```json
{
"Among Us": 0,
"Apex Legends": 1,
"Fortnite": 2,
"Forza Horizon": 3,
"Free Fire": 4,
"Genshin Impact": 5,
"God of War": 6,
"Minecraft": 7,
"Roblox": 8,
"Terraria": 9
}
```
```python
>>> dataset["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=640x360>,
'label': 0}
```
### Data Splits
| | train |
| ---------- | -------- |
| # of data | 10000 |
### Note
#### train_test_split
```python
>>> ds_new = dataset["train"].train_test_split(0.2, seed=42, stratify_by_column="label")
>>> ds_new
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 8000
})
test: Dataset({
features: ['image', 'label'],
num_rows: 2000
})
})
```
|
sbx/superlim-2 | 2023-04-02T19:30:34.000Z | [
"task_categories:multiple-choice",
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:token-classification",
"task_ids:sentiment-analysis",
"task_ids:acceptability-classification",
"task_ids:closed-domain-qa",
"task_id... | sbx | \ | \ | null | 9 | 20 | ---
annotations_creators:
- other
language:
- sv
language_creators:
- other
license: []
multilinguality:
- monolingual
pretty_name: 'A standardized suite for evaluation and analysis of Swedish natural
language understanding systems.'
size_categories:
- unknown
source_datasets: []
tags: []
task_categories:
- multiple-choice
- text-classification
- question-answering
- sentence-similarity
- token-classification
task_ids:
- sentiment-analysis
- acceptability-classification
- closed-domain-qa
- word-sense-disambiguation
- coreference-resolution
---
# Dataset Card for SuperLim 2.0
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The official homepage of Språkbanken](https://spraakbanken.gu.se/resurser/superlim/)
- **Repository:**
- **Paper:**[SwedishGLUE – Towards a Swedish Test Set for Evaluating Natural Language Understanding Models](https://gup.ub.gu.se/publication/299130?lang=sv)
- **Leaderboard:** [To be implemented]
- **Point of Contact:**[sb-info@svenska.gu.se](sb-info@svenska.gu.se)
### Dataset Summary
SuperLim 2.0 is a continuation of SuperLim 1.0, which aims for a standardized suite for evaluation and analysis of Swedish natural language understanding systems. The project is inspired by the GLUE/SuperGLUE projects, from which the name is derived: "lim" is the Swedish translation of "glue".
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Swedish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
Most datasets have a train, dev and test split. However, a few (`supersim`, `sweanalogy` and `swesat-synonyms`) only have a train and test split. The diagnostic tasks `swediagnostics` and `swewinogender` only have a test split, but they can be evaluated with models trained on `swenli`, since they are also NLI-based (see the loading sketch below).
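A hedged loading sketch for working with individual tasks. It assumes the task names listed in this card (e.g. `swenli`, `swediagnostics`) double as dataset configuration names; the actual names should be verified against the repository first.
```python
from datasets import load_dataset, get_dataset_config_names

# Config names below are assumptions based on the task names in this card.
print(get_dataset_config_names("sbx/superlim-2"))  # check the real config names

swenli = load_dataset("sbx/superlim-2", "swenli")                   # train/dev/test
swediagnostics = load_dataset("sbx/superlim-2", "swediagnostics")   # test split only
print(swenli)
print(swediagnostics)
```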
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
To cite the suite as a whole, use the standard reference. If you use or reference individual resources, cite the references specific to those resources:
Standard reference:
Yvonne Adesam, Aleksandrs Berdicevskis, Felix Morger (2020): [SwedishGLUE – Towards a Swedish Test Set for Evaluating Natural Language Understanding Models](https://gup.ub.gu.se/publication/299130?lang=sv)
Dataset references:
[More information needed]
Thanks to [Felix Morger](https://github.com/felixhultin) for adding this dataset. |
TalTechNLP/ERRnews | 2023-04-10T13:17:48.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:et",
"license:cc-by-4.0",
"region:us"
] | TalTechNLP | null | null | null | 0 | 20 | ---
pretty_name: ERRnews
annotations_creators:
- expert-generated
language:
- et
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: err-news
---
# Dataset Card for "ERRnews"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/10_3_23_Harm.pdf
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
ERRnews is an Estonian-language summarization dataset of ERR news broadcasts scraped from the ERR Archive (https://arhiiv.err.ee/err-audioarhiiv). The dataset consists of news story transcripts generated by an ASR pipeline, paired with the human-written summaries from the archive. To allow leveraging larger English models, the dataset also includes machine-translated (https://neurotolge.ee/) transcript and summary pairs.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Estonian
## Dataset Structure
### Data Instances
```
{'name': 'Kütuseaktsiis Balti riikides on erinev.', 'summary': 'Eestis praeguse plaani järgi järgmise aasta maini kehtiv madalam diislikütuse aktsiis ei ajenda enam tankima Lätis, kuid bensiin on seal endiselt odavam. Peaminister Kaja Kallas ja kütusemüüjad on eri meelt selles, kui suurel määral mõjutab aktsiis lõpphinda tanklais.', 'transcript': 'Eesti-Läti piiri alal on kütusehinna erinevus eriti märgatav ja ka tuntav. Õigema pildi saamiseks tuleks võrrelda ühe keti keskmist hinda, kuna tanklati võib see erineda Circle K. [...] Olulisel määral mõjutab hinda kütuste sisseost, räägib kartvski. On selge, et maailmaturuhinna põhjal tehtud ost Tallinnas erineb kütusehinnast Riias või Vilniuses või Varssavis. Kolmas mõjur ja oluline mõjur on biolisandite kasutamise erinevad nõuded riikide vahel.', 'url': 'https://arhiiv.err.ee//vaata/uudised-kutuseaktsiis-balti-riikides-on-erinev', 'meta': '\n\n\nSarja pealkiri:\nuudised\n\n\nFonoteegi number:\nRMARH-182882\n\n\nFonogrammi tootja:\n2021 ERR\n\n\nEetris:\n16.09.2021\n\n\nSalvestuskoht:\nRaadiouudised\n\n\nKestus:\n00:02:34\n\n\nEsinejad:\nKond Ragnar, Vahtrik Raimo, Kallas Kaja, Karcevskis Ojars\n\n\nKategooria:\nUudised → uudised, muu\n\n\nPüsiviide:\n\nvajuta siia\n\n\n\n', 'audio': {'path': 'recordings/12049.ogv', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 2.44576868e-06, 6.38223427e-06, 0.00000000e+00]), 'sampling_rate': 16000}, 'recording_id': 12049}
```
### Data Fields
```
name: News story headline
summary: Hand written summary.
transcript: Automatically generated transcript from the audio file with an ASR system.
url: ERR archive URL.
meta: ERR archive metadata.
en_summary: Machine translated English summary.
en_transcript: Machine translated English transcript.
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
recording_id: Audio file id.
```
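A minimal sketch of loading the dataset and reading one decoded recording. The repository id is taken from this card; loading may require additional arguments depending on how the loading script is set up.
```python
from datasets import load_dataset

errnews = load_dataset("TalTechNLP/ERRnews", split="train")

# Query the row first, then the "audio" column, as recommended above.
sample = errnews[0]
print(sample["name"])
print(sample["summary"][:100])
print(sample["audio"]["sampling_rate"])   # 16000 in the example above
print(sample["audio"]["array"].shape)
```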
### Data Splits
|train|validation|test|
|:----|:---------|:---|
|10420|523|523|
### BibTeX entry and citation info
```bibtex
@article{henryabstractive,
title={Abstractive Summarization of Broadcast News Stories for {Estonian}},
author={Henry, H{\"a}rm and Tanel, Alum{\"a}e},
journal={Baltic J. Modern Computing},
volume={10},
number={3},
pages={511-524},
year={2022}
}
```
|
zhengxuanzenwu/wikitext-2-split-128 | 2022-10-13T00:11:29.000Z | [
"region:us"
] | zhengxuanzenwu | null | null | null | 0 | 20 | This is a dataset created from the WikiText-2 dataset by splitting longer sequences into sequences with maximum of 128 tokens after using a wordpiece tokenizer. |
laion/laion2b-en-vit-h-14-embeddings | 2022-10-25T04:19:39.000Z | [
"region:us"
] | laion | null | null | null | 3 | 20 | Entry not found |
rufimelo/PortugueseLegalSentences-v3 | 2022-11-01T13:15:47.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | rufimelo | null | null | null | 2 | 20 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- pt
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
---
# Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset was created to be used for MLM and TSDAE training.
Extended version of rufimelo/PortugueseLegalSentences-v1
400000/50000/50000
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
|
bigbio/codiesp | 2022-12-22T15:44:28.000Z | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | bigbio | Synthetic corpus of 1,000 manually selected clinical case studies in Spanish
that was designed for the Clinical Case Coding in Spanish Shared Task, as part
of the CLEF 2020 conference.
The goal of the task was to automatically assign ICD10 codes (CIE-10, in
Spanish) to clinical case documents, being evaluated against manually generated
ICD10 codifications. The CodiEsp corpus was selected manually by practicing
physicians and clinical documentalists and annotated by clinical coding
professionals meeting strict quality criteria. They reached an inter-annotator
agreement of 88.6% for diagnosis coding, 88.9% for procedure coding and 80.5%
for the textual reference annotation.
The final collection of 1,000 clinical cases that make up the corpus had a total
of 16,504 sentences and 396,988 words. All documents are in Spanish language and
CIE10 is the coding terminology (the Spanish version of ICD10-CM and ICD10-PCS).
The CodiEsp corpus has been randomly sampled into three subsets. The train set
contains 500 clinical cases, while the development and test sets have 250
clinical cases each. In addition to these, a collection of 176,294 abstracts
from Lilacs and Ibecs with the corresponding ICD10 codes (ICD10-CM and
ICD10-PCS) was provided by the task organizers. Every abstract has at least one
associated code, with an average of 2.5 ICD10 codes per abstract.
The CodiEsp track was divided into three sub-tracks (2 main and 1 exploratory):
- CodiEsp-D: The Diagnosis Coding sub-task, which requires automatic ICD10-CM
[CIE10-Diagnóstico] code assignment.
- CodiEsp-P: The Procedure Coding sub-task, which requires automatic ICD10-PCS
[CIE10-Procedimiento] code assignment.
- CodiEsp-X: The Explainable AI exploratory sub-task, which requires to submit
the reference to the predicted codes (both ICD10-CM and ICD10-PCS). The goal
of this novel task was not only to predict the correct codes but also to
present the reference in the text that supports the code predictions.
For further information, please visit https://temu.bsc.es/codiesp or send an
email to encargo-pln-life@bsc.es | @article{miranda2020overview,
title={Overview of Automatic Clinical Coding: Annotations, Guidelines, and Solutions for non-English Clinical Cases at CodiEsp Track of CLEF eHealth 2020.},
author={Miranda-Escalada, Antonio and Gonzalez-Agirre, Aitor and Armengol-Estap{\'e}, Jordi and Krallinger, Martin},
journal={CLEF (Working Notes)},
volume={2020},
year={2020}
} | null | 0 | 20 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: CodiEsp
homepage: https://temu.bsc.es/codiesp/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for CodiEsp
## Dataset Description
- **Homepage:** https://temu.bsc.es/codiesp/
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS,NER,NED
Synthetic corpus of 1,000 manually selected clinical case studies in Spanish
that was designed for the Clinical Case Coding in Spanish Shared Task, as part
of the CLEF 2020 conference.
The goal of the task was to automatically assign ICD10 codes (CIE-10, in
Spanish) to clinical case documents, being evaluated against manually generated
ICD10 codifications. The CodiEsp corpus was selected manually by practicing
physicians and clinical documentalists and annotated by clinical coding
professionals meeting strict quality criteria. They reached an inter-annotator
agreement of 88.6% for diagnosis coding, 88.9% for procedure coding and 80.5%
for the textual reference annotation.
The final collection of 1,000 clinical cases that make up the corpus had a total
of 16,504 sentences and 396,988 words. All documents are in Spanish language and
CIE10 is the coding terminology (the Spanish version of ICD10-CM and ICD10-PCS).
The CodiEsp corpus has been randomly sampled into three subsets. The train set
contains 500 clinical cases, while the development and test sets have 250
clinical cases each. In addition to these, a collection of 176,294 abstracts
from Lilacs and Ibecs with the corresponding ICD10 codes (ICD10-CM and
ICD10-PCS) was provided by the task organizers. Every abstract has at least one
associated code, with an average of 2.5 ICD10 codes per abstract.
The CodiEsp track was divided into three sub-tracks (2 main and 1 exploratory):
- CodiEsp-D: The Diagnosis Coding sub-task, which requires automatic ICD10-CM
[CIE10-Diagnóstico] code assignment.
- CodiEsp-P: The Procedure Coding sub-task, which requires automatic ICD10-PCS
[CIE10-Procedimiento] code assignment.
- CodiEsp-X: The Explainable AI exploratory sub-task, which requires to submit
the reference to the predicted codes (both ICD10-CM and ICD10-PCS). The goal
of this novel task was not only to predict the correct codes but also to
present the reference in the text that supports the code predictions.
For further information, please visit https://temu.bsc.es/codiesp or send an
email to encargo-pln-life@bsc.es
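A hedged loading sketch; BigBIO datasets are usually exposed through several configurations (per sub-track and per schema), so the config name below is a hypothetical example and should be checked against the repository before use.
```python
from datasets import load_dataset, get_dataset_config_names

# List the real configuration names first.
print(get_dataset_config_names("bigbio/codiesp"))

# "codiesp_D_source" is a hypothetical config name for the diagnosis sub-track.
codiesp_d = load_dataset("bigbio/codiesp", name="codiesp_D_source")
print(codiesp_d)
```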
## Citation Information
```
@article{miranda2020overview,
title={Overview of Automatic Clinical Coding: Annotations, Guidelines, and Solutions for non-English Clinical Cases at CodiEsp Track of CLEF eHealth 2020.},
  author={Miranda-Escalada, Antonio and Gonzalez-Agirre, Aitor and Armengol-Estap{\'e}, Jordi and Krallinger, Martin},
journal={CLEF (Working Notes)},
volume={2020},
year={2020}
}
```
|
ipipan/nkjp1m | 2022-12-07T16:47:51.000Z | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"task_ids:lemmatization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-4.0",
... | ipipan | This is the official dataset for NKJP1M – the 1-million token subcorpus of the
National Corpus of Polish (Narodowy Korpus Języka Polskiego)
Besides the text (divided into paragraphs/samples and sentences) the
set contains lemmas and morpho-syntactic tags for all tokens in the corpus.
This release corresponds to version 1.2 of the corpus with
later corrections and improvements. In particular, the
morpho-syntactic annotation has been aligned with the present version
of Morfeusz2 morphological analyser. | @Book{nkjp:12,
editor = "Adam Przepiórkowski and Mirosław Bańko and Rafał
L. Górski and Barbara Lewandowska-Tomaszczyk",
title = "Narodowy Korpus Języka Polskiego",
year = 2012,
address = "Warszawa",
pdf = "http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf",
publisher = "Wydawnictwo Naukowe PWN"} | null | 2 | 20 | ---
annotations_creators:
- expert-generated
language:
- pl
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: NKJP1M
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- National Corpus of Polish
- Narodowy Korpus Języka Polskiego
task_categories:
- token-classification
task_ids:
- part-of-speech
- lemmatization
dataset_info:
features:
- name: nkjp_text
dtype: string
- name: nkjp_par
dtype: string
- name: nkjp_sent
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: cposes
sequence:
class_label:
names:
0: A
1: Adv
2: Comp
3: Conj
4: Dig
5: Interj
6: N
7: Num
8: Part
9: Prep
10: Punct
11: V
12: X
- name: poses
sequence:
class_label:
names:
0: adj
1: adja
2: adjc
3: adjp
4: adv
5: aglt
6: bedzie
7: brev
8: comp
9: conj
10: depr
11: dig
12: fin
13: frag
14: ger
15: imps
16: impt
17: inf
18: interj
19: interp
20: num
21: numcomp
22: pact
23: pacta
24: pant
25: part
26: pcon
27: ppas
28: ppron12
29: ppron3
30: praet
31: pred
32: prep
33: romandig
34: siebie
35: subst
36: sym
37: winien
38: xxs
39: xxx
- name: tags
sequence:
class_label:
names:
0: adj:pl:acc:f:com
1: adj:pl:acc:f:pos
2: adj:pl:acc:f:sup
3: adj:pl:acc:m1:com
4: adj:pl:acc:m1:pos
5: adj:pl:acc:m1:sup
6: adj:pl:acc:m2:com
7: adj:pl:acc:m2:pos
8: adj:pl:acc:m2:sup
9: adj:pl:acc:m3:com
10: adj:pl:acc:m3:pos
11: adj:pl:acc:m3:sup
12: adj:pl:acc:n:com
13: adj:pl:acc:n:pos
14: adj:pl:acc:n:sup
15: adj:pl:dat:f:com
16: adj:pl:dat:f:pos
17: adj:pl:dat:f:sup
18: adj:pl:dat:m1:com
19: adj:pl:dat:m1:pos
20: adj:pl:dat:m1:sup
21: adj:pl:dat:m2:pos
22: adj:pl:dat:m3:com
23: adj:pl:dat:m3:pos
24: adj:pl:dat:n:pos
25: adj:pl:dat:n:sup
26: adj:pl:gen:f:com
27: adj:pl:gen:f:pos
28: adj:pl:gen:f:sup
29: adj:pl:gen:m1:com
30: adj:pl:gen:m1:pos
31: adj:pl:gen:m1:sup
32: adj:pl:gen:m2:com
33: adj:pl:gen:m2:pos
34: adj:pl:gen:m2:sup
35: adj:pl:gen:m3:com
36: adj:pl:gen:m3:pos
37: adj:pl:gen:m3:sup
38: adj:pl:gen:n:com
39: adj:pl:gen:n:pos
40: adj:pl:gen:n:sup
41: adj:pl:inst:f:com
42: adj:pl:inst:f:pos
43: adj:pl:inst:f:sup
44: adj:pl:inst:m1:com
45: adj:pl:inst:m1:pos
46: adj:pl:inst:m1:sup
47: adj:pl:inst:m2:pos
48: adj:pl:inst:m3:com
49: adj:pl:inst:m3:pos
50: adj:pl:inst:m3:sup
51: adj:pl:inst:n:com
52: adj:pl:inst:n:pos
53: adj:pl:inst:n:sup
54: adj:pl:loc:f:com
55: adj:pl:loc:f:pos
56: adj:pl:loc:f:sup
57: adj:pl:loc:m1:com
58: adj:pl:loc:m1:pos
59: adj:pl:loc:m1:sup
60: adj:pl:loc:m2:pos
61: adj:pl:loc:m3:com
62: adj:pl:loc:m3:pos
63: adj:pl:loc:m3:sup
64: adj:pl:loc:n:com
65: adj:pl:loc:n:pos
66: adj:pl:loc:n:sup
67: adj:pl:nom:f:com
68: adj:pl:nom:f:pos
69: adj:pl:nom:f:sup
70: adj:pl:nom:m1:com
71: adj:pl:nom:m1:pos
72: adj:pl:nom:m1:sup
73: adj:pl:nom:m2:com
74: adj:pl:nom:m2:pos
75: adj:pl:nom:m2:sup
76: adj:pl:nom:m3:com
77: adj:pl:nom:m3:pos
78: adj:pl:nom:m3:sup
79: adj:pl:nom:n:com
80: adj:pl:nom:n:pos
81: adj:pl:nom:n:sup
82: adj:pl:voc:f:pos
83: adj:pl:voc:m1:pos
84: adj:pl:voc:m2:pos
85: adj:pl:voc:n:pos
86: adj:sg:acc:f:com
87: adj:sg:acc:f:pos
88: adj:sg:acc:f:sup
89: adj:sg:acc:m1:com
90: adj:sg:acc:m1:pos
91: adj:sg:acc:m1:sup
92: adj:sg:acc:m2:com
93: adj:sg:acc:m2:pos
94: adj:sg:acc:m2:sup
95: adj:sg:acc:m3:com
96: adj:sg:acc:m3:pos
97: adj:sg:acc:m3:sup
98: adj:sg:acc:n:com
99: adj:sg:acc:n:pos
100: adj:sg:acc:n:sup
101: adj:sg:dat:f:com
102: adj:sg:dat:f:pos
103: adj:sg:dat:f:sup
104: adj:sg:dat:m1:com
105: adj:sg:dat:m1:pos
106: adj:sg:dat:m1:sup
107: adj:sg:dat:m2:pos
108: adj:sg:dat:m3:com
109: adj:sg:dat:m3:pos
110: adj:sg:dat:m3:sup
111: adj:sg:dat:n:com
112: adj:sg:dat:n:pos
113: adj:sg:dat:n:sup
114: adj:sg:gen:f:com
115: adj:sg:gen:f:pos
116: adj:sg:gen:f:sup
117: adj:sg:gen:m1:com
118: adj:sg:gen:m1:pos
119: adj:sg:gen:m1:sup
120: adj:sg:gen:m2:pos
121: adj:sg:gen:m2:sup
122: adj:sg:gen:m3:com
123: adj:sg:gen:m3:pos
124: adj:sg:gen:m3:sup
125: adj:sg:gen:n:com
126: adj:sg:gen:n:pos
127: adj:sg:gen:n:sup
128: adj:sg:inst:f:com
129: adj:sg:inst:f:pos
130: adj:sg:inst:f:sup
131: adj:sg:inst:m1:com
132: adj:sg:inst:m1:pos
133: adj:sg:inst:m1:sup
134: adj:sg:inst:m2:com
135: adj:sg:inst:m2:pos
136: adj:sg:inst:m2:sup
137: adj:sg:inst:m3:com
138: adj:sg:inst:m3:pos
139: adj:sg:inst:m3:sup
140: adj:sg:inst:n:com
141: adj:sg:inst:n:pos
142: adj:sg:inst:n:sup
143: adj:sg:loc:f:com
144: adj:sg:loc:f:pos
145: adj:sg:loc:f:sup
146: adj:sg:loc:m1:com
147: adj:sg:loc:m1:pos
148: adj:sg:loc:m1:sup
149: adj:sg:loc:m2:com
150: adj:sg:loc:m2:pos
151: adj:sg:loc:m3:com
152: adj:sg:loc:m3:pos
153: adj:sg:loc:m3:sup
154: adj:sg:loc:n:com
155: adj:sg:loc:n:pos
156: adj:sg:loc:n:sup
157: adj:sg:nom:f:com
158: adj:sg:nom:f:pos
159: adj:sg:nom:f:sup
160: adj:sg:nom:m1:com
161: adj:sg:nom:m1:pos
162: adj:sg:nom:m1:sup
163: adj:sg:nom:m2:com
164: adj:sg:nom:m2:pos
165: adj:sg:nom:m2:sup
166: adj:sg:nom:m3:com
167: adj:sg:nom:m3:pos
168: adj:sg:nom:m3:sup
169: adj:sg:nom:n:com
170: adj:sg:nom:n:pos
171: adj:sg:nom:n:sup
172: adj:sg:voc:f:pos
173: adj:sg:voc:f:sup
174: adj:sg:voc:m1:pos
175: adj:sg:voc:m1:sup
176: adj:sg:voc:m2:pos
177: adj:sg:voc:m3:pos
178: adj:sg:voc:n:pos
179: adja
180: adjc
181: adjp:dat
182: adjp:gen
183: adv
184: adv:com
185: adv:pos
186: adv:sup
187: aglt:pl:pri:imperf:nwok
188: aglt:pl:sec:imperf:nwok
189: aglt:sg:pri:imperf:nwok
190: aglt:sg:pri:imperf:wok
191: aglt:sg:sec:imperf:nwok
192: aglt:sg:sec:imperf:wok
193: bedzie:pl:pri:imperf
194: bedzie:pl:sec:imperf
195: bedzie:pl:ter:imperf
196: bedzie:sg:pri:imperf
197: bedzie:sg:sec:imperf
198: bedzie:sg:ter:imperf
199: brev:npun
200: brev:pun
201: comp
202: conj
203: depr:pl:acc:m2
204: depr:pl:nom:m2
205: depr:pl:voc:m2
206: dig
207: fin:pl:pri:imperf
208: fin:pl:pri:perf
209: fin:pl:sec:imperf
210: fin:pl:sec:perf
211: fin:pl:ter:imperf
212: fin:pl:ter:perf
213: fin:sg:pri:imperf
214: fin:sg:pri:perf
215: fin:sg:sec:imperf
216: fin:sg:sec:perf
217: fin:sg:ter:imperf
218: fin:sg:ter:perf
219: frag
220: ger:pl:acc:n:imperf:aff
221: ger:pl:acc:n:perf:aff
222: ger:pl:dat:n:perf:aff
223: ger:pl:gen:n:imperf:aff
224: ger:pl:gen:n:perf:aff
225: ger:pl:inst:n:imperf:aff
226: ger:pl:inst:n:perf:aff
227: ger:pl:loc:n:imperf:aff
228: ger:pl:loc:n:perf:aff
229: ger:pl:nom:n:imperf:aff
230: ger:pl:nom:n:perf:aff
231: ger:sg:acc:n:imperf:aff
232: ger:sg:acc:n:imperf:neg
233: ger:sg:acc:n:perf:aff
234: ger:sg:acc:n:perf:neg
235: ger:sg:dat:n:imperf:aff
236: ger:sg:dat:n:perf:aff
237: ger:sg:dat:n:perf:neg
238: ger:sg:gen:n:imperf:aff
239: ger:sg:gen:n:imperf:neg
240: ger:sg:gen:n:perf:aff
241: ger:sg:gen:n:perf:neg
242: ger:sg:inst:n:imperf:aff
243: ger:sg:inst:n:imperf:neg
244: ger:sg:inst:n:perf:aff
245: ger:sg:inst:n:perf:neg
246: ger:sg:loc:n:imperf:aff
247: ger:sg:loc:n:imperf:neg
248: ger:sg:loc:n:perf:aff
249: ger:sg:loc:n:perf:neg
250: ger:sg:nom:n:imperf:aff
251: ger:sg:nom:n:imperf:neg
252: ger:sg:nom:n:perf:aff
253: ger:sg:nom:n:perf:neg
254: imps:imperf
255: imps:perf
256: impt:pl:pri:imperf
257: impt:pl:pri:perf
258: impt:pl:sec:imperf
259: impt:pl:sec:perf
260: impt:sg:pri:imperf
261: impt:sg:sec:imperf
262: impt:sg:sec:perf
263: inf:imperf
264: inf:perf
265: interj
266: interp
267: num:pl:acc:f:congr:ncol
268: num:pl:acc:f:rec
269: num:pl:acc:f:rec:ncol
270: num:pl:acc:m1:rec
271: num:pl:acc:m1:rec:col
272: num:pl:acc:m1:rec:ncol
273: num:pl:acc:m2:congr:ncol
274: num:pl:acc:m2:rec
275: num:pl:acc:m2:rec:ncol
276: num:pl:acc:m3:congr
277: num:pl:acc:m3:congr:ncol
278: num:pl:acc:m3:rec
279: num:pl:acc:m3:rec:ncol
280: num:pl:acc:n:congr:ncol
281: num:pl:acc:n:rec
282: num:pl:acc:n:rec:col
283: num:pl:acc:n:rec:ncol
284: num:pl:dat:f:congr
285: num:pl:dat:f:congr:ncol
286: num:pl:dat:m1:congr
287: num:pl:dat:m1:congr:col
288: num:pl:dat:m1:congr:ncol
289: num:pl:dat:m2:congr
290: num:pl:dat:m3:congr:ncol
291: num:pl:dat:n:congr
292: num:pl:dat:n:congr:ncol
293: num:pl:gen:f:congr
294: num:pl:gen:f:congr:ncol
295: num:pl:gen:f:rec
296: num:pl:gen:f:rec:ncol
297: num:pl:gen:m1:congr
298: num:pl:gen:m1:congr:ncol
299: num:pl:gen:m1:rec
300: num:pl:gen:m1:rec:col
301: num:pl:gen:m2:congr
302: num:pl:gen:m2:congr:ncol
303: num:pl:gen:m2:rec
304: num:pl:gen:m3:congr
305: num:pl:gen:m3:congr:ncol
306: num:pl:gen:m3:rec
307: num:pl:gen:m3:rec:ncol
308: num:pl:gen:n:congr
309: num:pl:gen:n:congr:ncol
310: num:pl:gen:n:rec
311: num:pl:gen:n:rec:col
312: num:pl:inst:f:congr
313: num:pl:inst:f:congr:ncol
314: num:pl:inst:m1:congr
315: num:pl:inst:m1:congr:ncol
316: num:pl:inst:m1:rec:col
317: num:pl:inst:m2:congr
318: num:pl:inst:m2:congr:ncol
319: num:pl:inst:m3:congr
320: num:pl:inst:m3:congr:ncol
321: num:pl:inst:n:congr
322: num:pl:inst:n:congr:ncol
323: num:pl:inst:n:rec:col
324: num:pl:loc:f:congr
325: num:pl:loc:f:congr:ncol
326: num:pl:loc:m1:congr
327: num:pl:loc:m1:congr:ncol
328: num:pl:loc:m2:congr
329: num:pl:loc:m2:congr:ncol
330: num:pl:loc:m3:congr
331: num:pl:loc:m3:congr:ncol
332: num:pl:loc:n:congr
333: num:pl:loc:n:congr:ncol
334: num:pl:nom:f:congr:ncol
335: num:pl:nom:f:rec
336: num:pl:nom:f:rec:ncol
337: num:pl:nom:m1:congr:ncol
338: num:pl:nom:m1:rec
339: num:pl:nom:m1:rec:col
340: num:pl:nom:m1:rec:ncol
341: num:pl:nom:m2:congr:ncol
342: num:pl:nom:m2:rec
343: num:pl:nom:m2:rec:ncol
344: num:pl:nom:m3:congr:ncol
345: num:pl:nom:m3:rec
346: num:pl:nom:m3:rec:ncol
347: num:pl:nom:n:congr
348: num:pl:nom:n:congr:ncol
349: num:pl:nom:n:rec
350: num:pl:nom:n:rec:col
351: num:pl:nom:n:rec:ncol
352: num:sg:acc:f:rec
353: num:sg:acc:f:rec:ncol
354: num:sg:acc:m1:rec:ncol
355: num:sg:acc:m2:rec
356: num:sg:acc:m3:rec
357: num:sg:acc:m3:rec:ncol
358: num:sg:acc:n:rec
359: num:sg:gen:f:rec
360: num:sg:gen:m3:rec
361: num:sg:gen:n:rec
362: num:sg:inst:m3:rec
363: num:sg:loc:f:rec
364: num:sg:loc:m3:congr
365: num:sg:loc:m3:rec
366: num:sg:nom:f:rec
367: num:sg:nom:m2:rec
368: num:sg:nom:m3:rec
369: num:sg:nom:m3:rec:ncol
370: num:sg:nom:n:rec
371: numcomp
372: pact:pl:acc:f:imperf:aff
373: pact:pl:acc:f:imperf:neg
374: pact:pl:acc:m1:imperf:aff
375: pact:pl:acc:m2:imperf:aff
376: pact:pl:acc:m3:imperf:aff
377: pact:pl:acc:m3:imperf:neg
378: pact:pl:acc:n:imperf:aff
379: pact:pl:acc:n:imperf:neg
380: pact:pl:dat:f:imperf:aff
381: pact:pl:dat:m1:imperf:aff
382: pact:pl:dat:m2:imperf:aff
383: pact:pl:dat:m3:imperf:aff
384: pact:pl:dat:n:imperf:aff
385: pact:pl:gen:f:imperf:aff
386: pact:pl:gen:f:imperf:neg
387: pact:pl:gen:m1:imperf:aff
388: pact:pl:gen:m1:imperf:neg
389: pact:pl:gen:m2:imperf:aff
390: pact:pl:gen:m3:imperf:aff
391: pact:pl:gen:m3:imperf:neg
392: pact:pl:gen:n:imperf:aff
393: pact:pl:inst:f:imperf:aff
394: pact:pl:inst:m1:imperf:aff
395: pact:pl:inst:m2:imperf:aff
396: pact:pl:inst:m3:imperf:aff
397: pact:pl:inst:m3:imperf:neg
398: pact:pl:inst:n:imperf:aff
399: pact:pl:inst:n:imperf:neg
400: pact:pl:loc:f:imperf:aff
401: pact:pl:loc:m1:imperf:aff
402: pact:pl:loc:m3:imperf:aff
403: pact:pl:loc:m3:imperf:neg
404: pact:pl:loc:n:imperf:aff
405: pact:pl:loc:n:imperf:neg
406: pact:pl:nom:f:imperf:aff
407: pact:pl:nom:f:imperf:neg
408: pact:pl:nom:m1:imperf:aff
409: pact:pl:nom:m2:imperf:aff
410: pact:pl:nom:m3:imperf:aff
411: pact:pl:nom:n:imperf:aff
412: pact:pl:nom:n:imperf:neg
413: pact:pl:voc:f:imperf:aff
414: pact:sg:acc:f:imperf:aff
415: pact:sg:acc:f:imperf:neg
416: pact:sg:acc:m1:imperf:aff
417: pact:sg:acc:m2:imperf:aff
418: pact:sg:acc:m3:imperf:aff
419: pact:sg:acc:n:imperf:aff
420: pact:sg:acc:n:imperf:neg
421: pact:sg:dat:f:imperf:aff
422: pact:sg:dat:m1:imperf:aff
423: pact:sg:dat:m2:imperf:aff
424: pact:sg:dat:m3:imperf:aff
425: pact:sg:dat:n:imperf:aff
426: pact:sg:gen:f:imperf:aff
427: pact:sg:gen:f:imperf:neg
428: pact:sg:gen:m1:imperf:aff
429: pact:sg:gen:m1:imperf:neg
430: pact:sg:gen:m2:imperf:aff
431: pact:sg:gen:m3:imperf:aff
432: pact:sg:gen:m3:imperf:neg
433: pact:sg:gen:n:imperf:aff
434: pact:sg:gen:n:imperf:neg
435: pact:sg:inst:f:imperf:aff
436: pact:sg:inst:f:imperf:neg
437: pact:sg:inst:m1:imperf:aff
438: pact:sg:inst:m1:imperf:neg
439: pact:sg:inst:m2:imperf:aff
440: pact:sg:inst:m2:imperf:neg
441: pact:sg:inst:m3:imperf:aff
442: pact:sg:inst:m3:imperf:neg
443: pact:sg:inst:n:imperf:aff
444: pact:sg:loc:f:imperf:aff
445: pact:sg:loc:f:imperf:neg
446: pact:sg:loc:m1:imperf:aff
447: pact:sg:loc:m2:imperf:aff
448: pact:sg:loc:m3:imperf:aff
449: pact:sg:loc:m3:imperf:neg
450: pact:sg:loc:n:imperf:aff
451: pact:sg:loc:n:imperf:neg
452: pact:sg:nom:f:imperf:aff
453: pact:sg:nom:f:imperf:neg
454: pact:sg:nom:m1:imperf:aff
455: pact:sg:nom:m1:imperf:neg
456: pact:sg:nom:m2:imperf:aff
457: pact:sg:nom:m3:imperf:aff
458: pact:sg:nom:m3:imperf:neg
459: pact:sg:nom:n:imperf:aff
460: pact:sg:nom:n:imperf:neg
461: pact:sg:voc:m1:imperf:aff
462: pacta
463: pant:perf
464: part
465: part:nwok
466: part:wok
467: pcon:imperf
468: ppas:pl:acc:f:imperf:aff
469: ppas:pl:acc:f:perf:aff
470: ppas:pl:acc:f:perf:neg
471: ppas:pl:acc:m1:imperf:aff
472: ppas:pl:acc:m1:imperf:neg
473: ppas:pl:acc:m1:perf:aff
474: ppas:pl:acc:m1:perf:neg
475: ppas:pl:acc:m2:imperf:aff
476: ppas:pl:acc:m2:perf:aff
477: ppas:pl:acc:m3:imperf:aff
478: ppas:pl:acc:m3:perf:aff
479: ppas:pl:acc:m3:perf:neg
480: ppas:pl:acc:n:imperf:aff
481: ppas:pl:acc:n:imperf:neg
482: ppas:pl:acc:n:perf:aff
483: ppas:pl:acc:n:perf:neg
484: ppas:pl:dat:f:imperf:aff
485: ppas:pl:dat:f:perf:aff
486: ppas:pl:dat:f:perf:neg
487: ppas:pl:dat:m1:imperf:aff
488: ppas:pl:dat:m1:perf:aff
489: ppas:pl:dat:m1:perf:neg
490: ppas:pl:dat:m2:imperf:aff
491: ppas:pl:dat:m3:imperf:aff
492: ppas:pl:dat:m3:perf:aff
493: ppas:pl:dat:n:imperf:aff
494: ppas:pl:dat:n:perf:aff
495: ppas:pl:gen:f:imperf:aff
496: ppas:pl:gen:f:imperf:neg
497: ppas:pl:gen:f:perf:aff
498: ppas:pl:gen:f:perf:neg
499: ppas:pl:gen:m1:imperf:aff
500: ppas:pl:gen:m1:imperf:neg
501: ppas:pl:gen:m1:perf:aff
502: ppas:pl:gen:m1:perf:neg
503: ppas:pl:gen:m2:imperf:aff
504: ppas:pl:gen:m2:perf:aff
505: ppas:pl:gen:m3:imperf:aff
506: ppas:pl:gen:m3:imperf:neg
507: ppas:pl:gen:m3:perf:aff
508: ppas:pl:gen:m3:perf:neg
509: ppas:pl:gen:n:imperf:aff
510: ppas:pl:gen:n:perf:aff
511: ppas:pl:gen:n:perf:neg
512: ppas:pl:inst:f:imperf:aff
513: ppas:pl:inst:f:perf:aff
514: ppas:pl:inst:m1:imperf:aff
515: ppas:pl:inst:m1:perf:aff
516: ppas:pl:inst:m2:perf:aff
517: ppas:pl:inst:m3:imperf:aff
518: ppas:pl:inst:m3:perf:aff
519: ppas:pl:inst:n:imperf:aff
520: ppas:pl:inst:n:perf:aff
521: ppas:pl:loc:f:imperf:aff
522: ppas:pl:loc:f:imperf:neg
523: ppas:pl:loc:f:perf:aff
524: ppas:pl:loc:f:perf:neg
525: ppas:pl:loc:m1:imperf:aff
526: ppas:pl:loc:m1:perf:aff
527: ppas:pl:loc:m2:imperf:aff
528: ppas:pl:loc:m3:imperf:aff
529: ppas:pl:loc:m3:perf:aff
530: ppas:pl:loc:m3:perf:neg
531: ppas:pl:loc:n:imperf:aff
532: ppas:pl:loc:n:perf:aff
533: ppas:pl:loc:n:perf:neg
534: ppas:pl:nom:f:imperf:aff
535: ppas:pl:nom:f:imperf:neg
536: ppas:pl:nom:f:perf:aff
537: ppas:pl:nom:f:perf:neg
538: ppas:pl:nom:m1:imperf:aff
539: ppas:pl:nom:m1:imperf:neg
540: ppas:pl:nom:m1:perf:aff
541: ppas:pl:nom:m1:perf:neg
542: ppas:pl:nom:m2:imperf:aff
543: ppas:pl:nom:m2:perf:aff
544: ppas:pl:nom:m3:imperf:aff
545: ppas:pl:nom:m3:imperf:neg
546: ppas:pl:nom:m3:perf:aff
547: ppas:pl:nom:m3:perf:neg
548: ppas:pl:nom:n:imperf:aff
549: ppas:pl:nom:n:perf:aff
550: ppas:pl:nom:n:perf:neg
551: ppas:pl:voc:f:imperf:aff
552: ppas:sg:acc:f:imperf:aff
553: ppas:sg:acc:f:imperf:neg
554: ppas:sg:acc:f:perf:aff
555: ppas:sg:acc:f:perf:neg
556: ppas:sg:acc:m1:imperf:aff
557: ppas:sg:acc:m1:perf:aff
558: ppas:sg:acc:m2:imperf:aff
559: ppas:sg:acc:m2:perf:aff
560: ppas:sg:acc:m3:imperf:aff
561: ppas:sg:acc:m3:imperf:neg
562: ppas:sg:acc:m3:perf:aff
563: ppas:sg:acc:m3:perf:neg
564: ppas:sg:acc:n:imperf:aff
565: ppas:sg:acc:n:perf:aff
566: ppas:sg:acc:n:perf:neg
567: ppas:sg:dat:f:imperf:aff
568: ppas:sg:dat:f:imperf:neg
569: ppas:sg:dat:f:perf:aff
570: ppas:sg:dat:f:perf:neg
571: ppas:sg:dat:m1:imperf:aff
572: ppas:sg:dat:m1:perf:aff
573: ppas:sg:dat:m2:perf:aff
574: ppas:sg:dat:m3:imperf:aff
575: ppas:sg:dat:m3:perf:aff
576: ppas:sg:dat:n:perf:aff
577: ppas:sg:gen:f:imperf:aff
578: ppas:sg:gen:f:imperf:neg
579: ppas:sg:gen:f:perf:aff
580: ppas:sg:gen:f:perf:neg
581: ppas:sg:gen:m1:imperf:aff
582: ppas:sg:gen:m1:perf:aff
583: ppas:sg:gen:m1:perf:neg
584: ppas:sg:gen:m2:imperf:aff
585: ppas:sg:gen:m2:perf:aff
586: ppas:sg:gen:m3:imperf:aff
587: ppas:sg:gen:m3:imperf:neg
588: ppas:sg:gen:m3:perf:aff
589: ppas:sg:gen:m3:perf:neg
590: ppas:sg:gen:n:imperf:aff
591: ppas:sg:gen:n:imperf:neg
592: ppas:sg:gen:n:perf:aff
593: ppas:sg:gen:n:perf:neg
594: ppas:sg:inst:f:imperf:aff
595: ppas:sg:inst:f:imperf:neg
596: ppas:sg:inst:f:perf:aff
597: ppas:sg:inst:f:perf:neg
598: ppas:sg:inst:m1:imperf:aff
599: ppas:sg:inst:m1:imperf:neg
600: ppas:sg:inst:m1:perf:aff
601: ppas:sg:inst:m1:perf:neg
602: ppas:sg:inst:m2:imperf:aff
603: ppas:sg:inst:m2:perf:aff
604: ppas:sg:inst:m3:imperf:aff
605: ppas:sg:inst:m3:imperf:neg
606: ppas:sg:inst:m3:perf:aff
607: ppas:sg:inst:m3:perf:neg
608: ppas:sg:inst:n:imperf:aff
609: ppas:sg:inst:n:imperf:neg
610: ppas:sg:inst:n:perf:aff
611: ppas:sg:inst:n:perf:neg
612: ppas:sg:loc:f:imperf:aff
613: ppas:sg:loc:f:perf:aff
614: ppas:sg:loc:f:perf:neg
615: ppas:sg:loc:m1:imperf:aff
616: ppas:sg:loc:m1:perf:aff
617: ppas:sg:loc:m2:imperf:aff
618: ppas:sg:loc:m3:imperf:aff
619: ppas:sg:loc:m3:imperf:neg
620: ppas:sg:loc:m3:perf:aff
621: ppas:sg:loc:m3:perf:neg
622: ppas:sg:loc:n:imperf:aff
623: ppas:sg:loc:n:perf:aff
624: ppas:sg:loc:n:perf:neg
625: ppas:sg:nom:f:imperf:aff
626: ppas:sg:nom:f:imperf:neg
627: ppas:sg:nom:f:perf:aff
628: ppas:sg:nom:f:perf:neg
629: ppas:sg:nom:m1:imperf:aff
630: ppas:sg:nom:m1:imperf:neg
631: ppas:sg:nom:m1:perf:aff
632: ppas:sg:nom:m1:perf:neg
633: ppas:sg:nom:m2:imperf:aff
634: ppas:sg:nom:m2:perf:aff
635: ppas:sg:nom:m3:imperf:aff
636: ppas:sg:nom:m3:imperf:neg
637: ppas:sg:nom:m3:perf:aff
638: ppas:sg:nom:m3:perf:neg
639: ppas:sg:nom:n:imperf:aff
640: ppas:sg:nom:n:imperf:neg
641: ppas:sg:nom:n:perf:aff
642: ppas:sg:nom:n:perf:neg
643: ppas:sg:voc:m1:perf:aff
644: ppas:sg:voc:m2:imperf:aff
645: ppas:sg:voc:m3:perf:aff
646: ppron12:pl:acc:f:pri
647: ppron12:pl:acc:f:sec
648: ppron12:pl:acc:m1:pri
649: ppron12:pl:acc:m1:sec
650: ppron12:pl:acc:m2:sec
651: ppron12:pl:acc:n:sec
652: ppron12:pl:dat:f:pri
653: ppron12:pl:dat:f:sec
654: ppron12:pl:dat:m1:pri
655: ppron12:pl:dat:m1:sec
656: ppron12:pl:dat:m3:sec
657: ppron12:pl:gen:f:pri
658: ppron12:pl:gen:f:sec
659: ppron12:pl:gen:m1:pri
660: ppron12:pl:gen:m1:sec
661: ppron12:pl:gen:m2:pri
662: ppron12:pl:inst:f:pri
663: ppron12:pl:inst:m1:pri
664: ppron12:pl:inst:m1:sec
665: ppron12:pl:inst:n:pri
666: ppron12:pl:loc:f:sec
667: ppron12:pl:loc:m1:pri
668: ppron12:pl:loc:m1:sec
669: ppron12:pl:loc:m3:sec
670: ppron12:pl:nom:f:pri
671: ppron12:pl:nom:f:sec
672: ppron12:pl:nom:m1:pri
673: ppron12:pl:nom:m1:sec
674: ppron12:pl:nom:m2:pri
675: ppron12:pl:nom:n:sec
676: ppron12:pl:voc:m1:sec
677: ppron12:pl:voc:m2:sec
678: ppron12:sg:acc:f:pri:akc
679: ppron12:sg:acc:f:sec:akc
680: ppron12:sg:acc:f:sec:nakc
681: ppron12:sg:acc:m1:pri:akc
682: ppron12:sg:acc:m1:pri:nakc
683: ppron12:sg:acc:m1:sec:akc
684: ppron12:sg:acc:m1:sec:nakc
685: ppron12:sg:acc:m2:pri:akc
686: ppron12:sg:acc:m2:sec:nakc
687: ppron12:sg:acc:m3:pri:akc
688: ppron12:sg:acc:m3:sec:nakc
689: ppron12:sg:acc:n:pri:akc
690: ppron12:sg:acc:n:sec:nakc
691: ppron12:sg:dat:f:pri:akc
692: ppron12:sg:dat:f:pri:nakc
693: ppron12:sg:dat:f:sec:akc
694: ppron12:sg:dat:f:sec:nakc
695: ppron12:sg:dat:m1:pri:akc
696: ppron12:sg:dat:m1:pri:nakc
697: ppron12:sg:dat:m1:sec:akc
698: ppron12:sg:dat:m1:sec:nakc
699: ppron12:sg:dat:m2:pri:nakc
700: ppron12:sg:dat:m2:sec:akc
701: ppron12:sg:dat:m2:sec:nakc
702: ppron12:sg:gen:f:pri:akc
703: ppron12:sg:gen:f:sec:akc
704: ppron12:sg:gen:f:sec:nakc
705: ppron12:sg:gen:m1:pri:akc
706: ppron12:sg:gen:m1:sec:akc
707: ppron12:sg:gen:m1:sec:nakc
708: ppron12:sg:gen:m2:pri:akc
709: ppron12:sg:gen:m2:sec:akc
710: ppron12:sg:gen:m2:sec:nakc
711: ppron12:sg:gen:n:pri:akc
712: ppron12:sg:inst:f:pri
713: ppron12:sg:inst:f:sec
714: ppron12:sg:inst:m1:pri
715: ppron12:sg:inst:m1:pri:nakc
716: ppron12:sg:inst:m1:sec
717: ppron12:sg:inst:n:sec
718: ppron12:sg:loc:f:pri
719: ppron12:sg:loc:f:sec
720: ppron12:sg:loc:m1:pri
721: ppron12:sg:loc:m1:sec
722: ppron12:sg:loc:m3:pri
723: ppron12:sg:nom:f:pri
724: ppron12:sg:nom:f:sec
725: ppron12:sg:nom:m1:pri
726: ppron12:sg:nom:m1:pri:nakc
727: ppron12:sg:nom:m1:sec
728: ppron12:sg:nom:m2:pri
729: ppron12:sg:nom:m2:sec
730: ppron12:sg:nom:m3:pri
731: ppron12:sg:nom:m3:sec
732: ppron12:sg:nom:n:sec
733: ppron12:sg:voc:f:sec
734: ppron12:sg:voc:m1:sec
735: ppron12:sg:voc:m2:sec
736: ppron12:sg:voc:n:sec
737: ppron3:pl:acc:f:ter:akc:npraep
738: ppron3:pl:acc:f:ter:akc:praep
739: ppron3:pl:acc:m1:ter:akc:npraep
740: ppron3:pl:acc:m1:ter:akc:praep
741: ppron3:pl:acc:m2:ter:akc:npraep
742: ppron3:pl:acc:m2:ter:akc:praep
743: ppron3:pl:acc:m3:ter:akc:npraep
744: ppron3:pl:acc:m3:ter:akc:praep
745: ppron3:pl:acc:n:ter:akc:npraep
746: ppron3:pl:acc:n:ter:akc:praep
747: ppron3:pl:dat:f:ter:akc:npraep
748: ppron3:pl:dat:f:ter:akc:praep
749: ppron3:pl:dat:m1:ter:akc:npraep
750: ppron3:pl:dat:m1:ter:akc:praep
751: ppron3:pl:dat:m2:ter:akc:npraep
752: ppron3:pl:dat:m3:ter:akc:npraep
753: ppron3:pl:dat:m3:ter:akc:praep
754: ppron3:pl:dat:n:ter:akc:npraep
755: ppron3:pl:gen:f:ter:akc:npraep
756: ppron3:pl:gen:f:ter:akc:praep
757: ppron3:pl:gen:m1:ter:akc:npraep
758: ppron3:pl:gen:m1:ter:akc:praep
759: ppron3:pl:gen:m2:ter:akc:npraep
760: ppron3:pl:gen:m2:ter:akc:praep
761: ppron3:pl:gen:m3:ter:akc:npraep
762: ppron3:pl:gen:m3:ter:akc:praep
763: ppron3:pl:gen:n:ter:akc:npraep
764: ppron3:pl:gen:n:ter:akc:praep
765: ppron3:pl:inst:f:ter:akc:npraep
766: ppron3:pl:inst:f:ter:akc:praep
767: ppron3:pl:inst:m1:ter:akc:npraep
768: ppron3:pl:inst:m1:ter:akc:praep
769: ppron3:pl:inst:m2:ter:akc:npraep
770: ppron3:pl:inst:m2:ter:akc:praep
771: ppron3:pl:inst:m3:ter:akc:npraep
772: ppron3:pl:inst:m3:ter:akc:praep
773: ppron3:pl:inst:n:ter:akc:npraep
774: ppron3:pl:inst:n:ter:akc:praep
775: ppron3:pl:loc:f:ter:akc:praep
776: ppron3:pl:loc:m1:ter:akc:praep
777: ppron3:pl:loc:m2:ter:akc:praep
778: ppron3:pl:loc:m3:ter:akc:praep
779: ppron3:pl:loc:n:ter:akc:praep
780: ppron3:pl:nom:f:ter:akc:npraep
781: ppron3:pl:nom:f:ter:nakc:npraep
782: ppron3:pl:nom:m1:ter:akc:npraep
783: ppron3:pl:nom:m2:ter:akc:npraep
784: ppron3:pl:nom:m3:ter:akc:npraep
785: ppron3:pl:nom:n:ter:akc:npraep
786: ppron3:sg:acc:f:ter:akc:npraep
787: ppron3:sg:acc:f:ter:akc:praep
788: ppron3:sg:acc:m1:ter:akc:npraep
789: ppron3:sg:acc:m1:ter:akc:praep
790: ppron3:sg:acc:m1:ter:nakc:npraep
791: ppron3:sg:acc:m1:ter:nakc:praep
792: ppron3:sg:acc:m2:ter:akc:praep
793: ppron3:sg:acc:m2:ter:nakc:npraep
794: ppron3:sg:acc:m2:ter:nakc:praep
795: ppron3:sg:acc:m3:ter:akc:npraep
796: ppron3:sg:acc:m3:ter:akc:praep
797: ppron3:sg:acc:m3:ter:nakc:npraep
798: ppron3:sg:acc:m3:ter:nakc:praep
799: ppron3:sg:acc:n:ter:akc:npraep
800: ppron3:sg:acc:n:ter:akc:praep
801: ppron3:sg:dat:f:ter:akc:npraep
802: ppron3:sg:dat:f:ter:akc:praep
803: ppron3:sg:dat:m1:ter:akc:npraep
804: ppron3:sg:dat:m1:ter:akc:praep
805: ppron3:sg:dat:m1:ter:nakc:npraep
806: ppron3:sg:dat:m2:ter:akc:npraep
807: ppron3:sg:dat:m2:ter:nakc:npraep
808: ppron3:sg:dat:m3:ter:akc:npraep
809: ppron3:sg:dat:m3:ter:akc:praep
810: ppron3:sg:dat:m3:ter:nakc:npraep
811: ppron3:sg:dat:n:ter:akc:npraep
812: ppron3:sg:dat:n:ter:akc:praep
813: ppron3:sg:dat:n:ter:nakc:npraep
814: ppron3:sg:gen:f:ter:akc:npraep
815: ppron3:sg:gen:f:ter:akc:praep
816: ppron3:sg:gen:m1:ter:akc:npraep
817: ppron3:sg:gen:m1:ter:akc:praep
818: ppron3:sg:gen:m1:ter:nakc:npraep
819: ppron3:sg:gen:m1:ter:nakc:praep
820: ppron3:sg:gen:m2:ter:akc:npraep
821: ppron3:sg:gen:m2:ter:akc:praep
822: ppron3:sg:gen:m2:ter:nakc:npraep
823: ppron3:sg:gen:m3:ter:akc:npraep
824: ppron3:sg:gen:m3:ter:akc:praep
825: ppron3:sg:gen:m3:ter:nakc:npraep
826: ppron3:sg:gen:m3:ter:nakc:praep
827: ppron3:sg:gen:n:ter:akc:npraep
828: ppron3:sg:gen:n:ter:akc:praep
829: ppron3:sg:gen:n:ter:nakc:npraep
830: ppron3:sg:inst:f:ter:akc:praep
831: ppron3:sg:inst:m1:ter:akc:npraep
832: ppron3:sg:inst:m1:ter:akc:praep
833: ppron3:sg:inst:m2:ter:akc:npraep
834: ppron3:sg:inst:m2:ter:akc:praep
835: ppron3:sg:inst:m3:ter:akc:npraep
836: ppron3:sg:inst:m3:ter:akc:praep
837: ppron3:sg:inst:n:ter:akc:npraep
838: ppron3:sg:inst:n:ter:akc:praep
839: ppron3:sg:loc:f:ter:akc:praep
840: ppron3:sg:loc:m1:ter:akc:praep
841: ppron3:sg:loc:m2:ter:akc:praep
842: ppron3:sg:loc:m3:ter:akc:praep
843: ppron3:sg:loc:n:ter:akc:praep
844: ppron3:sg:nom:f:ter:akc:npraep
845: ppron3:sg:nom:f:ter:akc:praep
846: ppron3:sg:nom:m1:ter:akc:npraep
847: ppron3:sg:nom:m2:ter:akc:npraep
848: ppron3:sg:nom:m2:ter:akc:praep
849: ppron3:sg:nom:m3:ter:akc:npraep
850: ppron3:sg:nom:n:ter:akc:npraep
851: praet:pl:f:imperf
852: praet:pl:f:perf
853: praet:pl:m1:imperf
854: praet:pl:m1:imperf:agl
855: praet:pl:m1:perf
856: praet:pl:m2:imperf
857: praet:pl:m2:perf
858: praet:pl:m3:imperf
859: praet:pl:m3:perf
860: praet:pl:n:imperf
861: praet:pl:n:perf
862: praet:sg:f:imperf
863: praet:sg:f:imperf:agl
864: praet:sg:f:imperf:nagl
865: praet:sg:f:perf
866: praet:sg:m1:imperf
867: praet:sg:m1:imperf:agl
868: praet:sg:m1:imperf:nagl
869: praet:sg:m1:perf
870: praet:sg:m1:perf:agl
871: praet:sg:m1:perf:nagl
872: praet:sg:m2:imperf
873: praet:sg:m2:imperf:nagl
874: praet:sg:m2:perf
875: praet:sg:m2:perf:nagl
876: praet:sg:m3:imperf
877: praet:sg:m3:imperf:nagl
878: praet:sg:m3:perf
879: praet:sg:m3:perf:nagl
880: praet:sg:n:imperf
881: praet:sg:n:perf
882: pred
883: prep:acc
884: prep:acc:nwok
885: prep:acc:wok
886: prep:dat
887: prep:gen
888: prep:gen:nwok
889: prep:gen:wok
890: prep:inst
891: prep:inst:nwok
892: prep:inst:wok
893: prep:loc
894: prep:loc:nwok
895: prep:loc:wok
896: prep:nom
897: romandig
898: siebie:acc
899: siebie:dat
900: siebie:gen
901: siebie:inst
902: siebie:loc
903: subst:pl:acc:f
904: subst:pl:acc:m1
905: subst:pl:acc:m1:pt
906: subst:pl:acc:m2
907: subst:pl:acc:m3
908: subst:pl:acc:n:col
909: subst:pl:acc:n:ncol
910: subst:pl:acc:n:pt
911: subst:pl:dat:f
912: subst:pl:dat:m1
913: subst:pl:dat:m1:pt
914: subst:pl:dat:m2
915: subst:pl:dat:m3
916: subst:pl:dat:n:col
917: subst:pl:dat:n:ncol
918: subst:pl:dat:n:pt
919: subst:pl:gen:f
920: subst:pl:gen:m1
921: subst:pl:gen:m1:pt
922: subst:pl:gen:m2
923: subst:pl:gen:m3
924: subst:pl:gen:n:col
925: subst:pl:gen:n:ncol
926: subst:pl:gen:n:pt
927: subst:pl:inst:f
928: subst:pl:inst:m1
929: subst:pl:inst:m1:pt
930: subst:pl:inst:m2
931: subst:pl:inst:m3
932: subst:pl:inst:n:col
933: subst:pl:inst:n:ncol
934: subst:pl:inst:n:pt
935: subst:pl:loc:f
936: subst:pl:loc:m1
937: subst:pl:loc:m1:pt
938: subst:pl:loc:m2
939: subst:pl:loc:m3
940: subst:pl:loc:n:col
941: subst:pl:loc:n:ncol
942: subst:pl:loc:n:pt
943: subst:pl:nom:f
944: subst:pl:nom:m1
945: subst:pl:nom:m1:pt
946: subst:pl:nom:m2
947: subst:pl:nom:m3
948: subst:pl:nom:n:col
949: subst:pl:nom:n:ncol
950: subst:pl:nom:n:pt
951: subst:pl:voc:f
952: subst:pl:voc:m1
953: subst:pl:voc:m1:pt
954: subst:pl:voc:m3
955: subst:pl:voc:n:col
956: subst:pl:voc:n:ncol
957: subst:pl:voc:n:pt
958: subst:sg:acc:f
959: subst:sg:acc:m1
960: subst:sg:acc:m2
961: subst:sg:acc:m3
962: subst:sg:acc:n:col
963: subst:sg:acc:n:ncol
964: subst:sg:dat:f
965: subst:sg:dat:m1
966: subst:sg:dat:m2
967: subst:sg:dat:m3
968: subst:sg:dat:n:col
969: subst:sg:dat:n:ncol
970: subst:sg:gen:f
971: subst:sg:gen:m1
972: subst:sg:gen:m2
973: subst:sg:gen:m3
974: subst:sg:gen:n:col
975: subst:sg:gen:n:ncol
976: subst:sg:inst:f
977: subst:sg:inst:m1
978: subst:sg:inst:m2
979: subst:sg:inst:m3
980: subst:sg:inst:n:col
981: subst:sg:inst:n:ncol
982: subst:sg:loc:f
983: subst:sg:loc:m1
984: subst:sg:loc:m2
985: subst:sg:loc:m3
986: subst:sg:loc:n:col
987: subst:sg:loc:n:ncol
988: subst:sg:nom:f
989: subst:sg:nom:m1
990: subst:sg:nom:m2
991: subst:sg:nom:m3
992: subst:sg:nom:n:col
993: subst:sg:nom:n:ncol
994: subst:sg:voc:f
995: subst:sg:voc:m1
996: subst:sg:voc:m2
997: subst:sg:voc:m3
998: subst:sg:voc:n:col
999: subst:sg:voc:n:ncol
1000: sym
1001: winien:pl:f:imperf
1002: winien:pl:m1:imperf
1003: winien:pl:m2:imperf
1004: winien:pl:m3:imperf
1005: winien:pl:n:imperf
1006: winien:sg:f:imperf
1007: winien:sg:m1:imperf
1008: winien:sg:m2:imperf
1009: winien:sg:m3:imperf
1010: winien:sg:n:imperf
1011: xxs:acc
1012: xxs:dat
1013: xxs:gen
1014: xxs:inst
1015: xxs:loc
1016: xxs:nom
1017: xxs:voc
1018: xxx
- name: nps
sequence: bool
- name: nkjp_ids
sequence: string
config_name: nkjp1m
splits:
- name: test
num_bytes: 8324533
num_examples: 8964
- name: train
num_bytes: 65022406
num_examples: 68943
- name: validation
num_bytes: 7465442
num_examples: 7755
download_size: 16167009
dataset_size: 80812381
---
# Dataset Card for NKJP1M – The manually annotated subcorpus of the National Corpus of Polish
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NKJP1M](http://clip.ipipan.waw.pl/NationalCorpusOfPolish)
- **Repository:** [NKJP1M-SGJP](http://download.sgjp.pl/morfeusz/current/)
- **Paper:** [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf)
- **Point of Contact:** mailto:morfeusz@ipipan.waw.pl
### Dataset Summary
This is the official dataset for NKJP1M – the 1-million token balanced subcorpus of the National Corpus of Polish (Narodowy Korpus Języka Polskiego)
Besides the text (divided into paragraphs/samples and sentences) the set contains lemmas and morpho-syntactic tags for all tokens in the corpus.
This release, known as NKJP1M-SGJP, corresponds to version 1.2 of the corpus with later corrections and improvements. In particular, the morpho-syntactic annotation has been aligned with the present version of the Morfeusz2 SGJP morphological analyser (as of 2022.12.04).
### Supported Tasks and Leaderboards
The main use of this resource lies in training models for lemmatisation and part-of-speech tagging of Polish.
### Languages
Polish (monolingual)
## Dataset Structure
### Data Instances
```
{'nkjp_text': 'NKJP_1M_1102000002',
'nkjp_par': 'morph_1-p',
'nkjp_sent': 'morph_1.18-s',
'tokens': ['-', 'Nie', 'mam', 'pieniędzy', ',', 'da', 'mi', 'pani', 'wywiad', '?'],
'lemmas': ['-', 'nie', 'mieć', 'pieniądz', ',', 'dać', 'ja', 'pani', 'wywiad', '?'],
'cposes': [8, 11, 10, 9, 8, 10, 9, 9, 9, 8],
'poses': [19, 25, 12, 35, 19, 12, 28, 35, 35, 19],
'tags': [266, 464, 213, 923, 266, 218, 692, 988, 961, 266],
'nps': [False, False, False, False, True, False, False, False, False, True],
'nkjp_ids': ['morph_1.9-seg', 'morph_1.10-seg', 'morph_1.11-seg', 'morph_1.12-seg', 'morph_1.13-seg', 'morph_1.14-seg', 'morph_1.15-seg', 'morph_1.16-seg', 'morph_1.17-seg', 'morph_1.18-seg']}
```
### Data Fields
- `nkjp_text`, `nkjp_par`, `nkjp_sent` (strings): XML identifiers of the present text (document), paragraph and sentence in NKJP. (These allow mapping the data point back to the source corpus and identifying paragraphs/samples.)
- `tokens` (sequence of strings): tokens of the text defined as in NKJP.
- `lemmas` (sequence of strings): lemmas corresponding to the tokens.
- `tags` (sequence of labels): morpho-syntactic tags according to Morfeusz2 tagset (1019 distinct tags).
- `poses` (sequence of labels): flexemic class (detailed part of speech, 40 classes) – the first element of the corresponding tag.
- `cposes` (sequence of labels): coarse part of speech (13 classes): all verbal and deverbal flexemic classes get mapped to a `V`, nominal – `N`, adjectival – `A`, “strange” (abbreviations, alien elements, symbols, emojis…) – `X`, rest as in `poses`.
- `nps` (sequence of booleans): `True` means that the corresponding token is not preceded by a space in the source text.
- `nkjp_ids` (sequence of strings): XML identifiers of particular tokens in NKJP (probably an overkill).
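The label fields above (`tags`, `poses`, `cposes`) are stored as integer ids of `ClassLabel` sequences, so they can be mapped back to their string form through the feature metadata. A minimal sketch (loading may require additional arguments depending on how the repository is set up):
```python
from datasets import load_dataset

nkjp = load_dataset("ipipan/nkjp1m", split="train")

# Sequence(ClassLabel) features expose the label names via .feature
tag_labels = nkjp.features["tags"].feature
sent = nkjp[0]
for token, tag_id in zip(sent["tokens"], sent["tags"]):
    print(token, tag_labels.int2str(tag_id))
```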
### Data Splits
| | Train | Validation | Test |
| ----- | ------ | ----- | ---- |
| sentences | 68943 | 7755 | 8964 |
| tokens | 978368 | 112454 | 125059 |
## Dataset Creation
### Curation Rationale
The National Corpus of Polish (NKJP) was envisioned as the reference corpus of contemporary Polish.
The manually annotated subcorpus (NKJP1M) was thought of as the training data for various NLP tasks.
### Source Data
NKJP is balanced with respect to Polish readership. The detailed rationale is described in Chapter 3 of the [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf) (roughly: 50% press, 30% books, 10% speech, 10% other). The corpus contains texts from the years 1945–2010 (with 80% of the text in the range 1990–2010). Only original Polish texts were gathered (no translations from other languages). The composition of NKJP1M follows this schema (see Chapter 5).
### Annotations
The rules of morphosyntactic annotation used for NKJP are discussed in Chapter 6 of the [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf). Presently (2020), the corpus uses a common tagset with the morphological analyzer [Morfeusz 2](http://morfeusz.sgjp.pl/).
#### Annotation process
The texts were processed with Morfeusz and then the resulting annotations were manually disambiguated and validated/corrected. Each text sample was independently processed by two annotators. In case of annotation conflicts an adjudicator stepped in.
### Licensing Information
 This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
Info on the source corpus: [link](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf)
```
@Book{nkjp:12,
editor = "Adam Przepiórkowski and Mirosław Bańko and Rafał
L. Górski and Barbara Lewandowska-Tomaszczyk",
title = "Narodowy Korpus Języka Polskiego",
year = 2012,
address = "Warszawa",
pdf = "http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf",
publisher = "Wydawnictwo Naukowe PWN"}
```
Current annotation scheme: [link](https://jezyk-polski.pl/index.php/jp/article/view/72)
```
@article{
kie:etal:21,
author = "Kieraś, Witold and Woliński, Marcin and Nitoń, Bartłomiej",
doi = "https://doi.org/10.31286/JP.101.2.5",
title = "Nowe wielowarstwowe znakowanie lingwistyczne zrównoważonego {N}arodowego {K}orpusu {J}ęzyka {P}olskiego",
url = "https://jezyk-polski.pl/index.php/jp/article/view/72",
journal = "Język Polski",
number = "2",
volume = "CI",
year = "2021",
pages = "59--70"
}
```
<!--
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
--> |
tarteel-ai/everyayah | 2022-12-09T19:33:08.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:mit",
"region:us"
] | tarteel-ai | null | null | null | 6 | 20 | ---
pretty_name: Tarteel AI - EveryAyah Dataset
dataset_info:
features:
- name: audio
dtype: audio
- name: duration
dtype: float64
- name: text
dtype: string
- name: reciter
dtype: string
splits:
- name: train
num_bytes: 262627688145.3
num_examples: 187785
- name: test
num_bytes: 25156009734.72
num_examples: 23473
- name: validation
num_bytes: 23426886730.218
num_examples: 23474
download_size: 117190597305
dataset_size: 311210584610.23804
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ar
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: tarteel-everyayah
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
train-eval-index:
- config: clean
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
eval_split: test
validation_split: validation
col_mapping:
audio: audio
text: text
reciter: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
---
﷽
# Dataset Card for Tarteel AI's EveryAyah Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tarteel AI](https://www.tarteel.ai/)
- **Repository:** [Needs More Information]
- **Point of Contact:** [Mohamed Saad Ibn Seddik](mailto:ms.ibnseddik@tarteel.ai)
### Dataset Summary
This dataset is a collection of recitations of Quranic verses by different reciters, paired with their diacritized transcriptions.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the audio file `audio`, and its transcription called `text`.
The `duration` is in seconds, and the author is `reciter`.
An example from the dataset is:
```
{
'audio': {
'path': None,
'array': array([ 0. , 0. , 0. , ..., -0.00057983,
-0.00085449, -0.00061035]),
'sampling_rate': 16000
},
'duration': 6.478375,
'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ',
'reciter': 'abdulsamad'
}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: The transcription of the audio file.
- duration: The duration of the audio file.
- reciter: The reciter of the verses.
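Given the size of the corpus (over 300 GB of audio), streaming is a reasonable way to inspect the `audio` field described above without a full download. A minimal sketch, assuming the repository can be streamed as-is:
```python
from datasets import load_dataset

everyayah = load_dataset("tarteel-ai/everyayah", split="train", streaming=True)

sample = next(iter(everyayah))
print(sample["reciter"], round(sample["duration"], 2))
print(sample["text"])
print(sample["audio"]["sampling_rate"])
```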
### Data Splits
| | Train | Test | Validation |
| ----- | ----- | ---- | ---------- |
| dataset | 187785 | 23473 | 23474 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
```
### Contributions
This dataset was created by:
|
graphs-datasets/CSL | 2023-02-07T16:37:07.000Z | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
] | graphs-datasets | null | null | null | 1 | 20 | ---
license: mit
task_categories:
- graph-ml
---
# Dataset Card for CSL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:** (see citation)
### Dataset Summary
The CSL dataset is a synthetic dataset designed to test GNN expressivity.
### Supported Tasks and Leaderboards
`CSL` should be used for binary graph classification, i.e. whether graphs are isomorphic or not.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/CSL")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 150 |
| average #nodes | 41.0 |
| average #edges | 164.0 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label(s) to predict for the graph
- `num_nodes` (int): number of nodes of the graph
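As a minimal sketch (assuming PyTorch is installed), a single row can also be turned into tensors by hand:
```python
import torch
from datasets import load_dataset

row = load_dataset("graphs-datasets/CSL", split="train")[0]
node_feat = torch.tensor(row["node_feat"])                      # shape: [#nodes, #node-features]
edge_index = torch.tensor(row["edge_index"], dtype=torch.long)  # shape: [2, #edges]
y = torch.tensor(row["y"])                                      # label(s) to predict
print(row["num_nodes"], node_feat.shape, edge_index.shape, y)
```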
### Data Splits
This dataset comes pre-split; the splits are taken from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
NeelNanda/wiki-10k | 2022-12-27T00:22:23.000Z | [
"region:us"
] | NeelNanda | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 222757944
num_examples: 10000
download_size: 129077566
dataset_size: 222757944
---
# Dataset Card for "wiki-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nedjmaou/MLMA_hate_speech | 2022-12-28T11:24:32.000Z | [
"license:mit",
"arxiv:1908.11049",
"region:us"
] | nedjmaou | null | null | null | 0 | 20 | ---
license: mit
---
# Disclaimer
*This is a hate speech dataset (in Arabic, French, and English).*
*It contains offensive content that does not reflect the opinions of the authors.*
# Dataset of our EMNLP 2019 Paper (Multilingual and Multi-Aspect Hate Speech Analysis)
For more details about our dataset, please check our paper:
```
@inproceedings{ousidhoum-etal-multilingual-hate-speech-2019,
    title = "Multilingual and Multi-Aspect Hate Speech Analysis",
    author = "Ousidhoum, Nedjma
      and Lin, Zizheng
      and Zhang, Hongming
      and Song, Yangqiu
      and Yeung, Dit-Yan",
    booktitle = "Proceedings of EMNLP",
    year = "2019",
    publisher = "Association for Computational Linguistics",
}
```
(You can preview our paper on https://arxiv.org/pdf/1908.11049.pdf)
## Clarification
The multi-labelled tasks are *the hostility type of the tweet* and the *annotator's sentiment*. (We kept labels on which at least two annotators agreed.)
## Taxonomy
In further experiments that involved binary classification tasks of the hostility/hate/abuse type, we considered single-labelled *normal* instances to be *non-hate/non-toxic* and all the other instances to be *toxic*.
## Dataset
Our dataset is composed of three csv files sorted by language. They contain the tweets and the annotations described in our paper:
- the hostility type *(column: tweet sentiment)*
- hostility directness *(column: directness)*
- target attribute *(column: target)*
- target group *(column: group)*
- annotator's sentiment *(column: annotator sentiment)*.
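Since the files are plain CSV, they can be inspected with pandas; a minimal sketch (the file name below is hypothetical; use the per-language CSV you downloaded from this repository):
```python
import pandas as pd

df = pd.read_csv("en_dataset.csv")  # hypothetical name; pick the English, French, or Arabic CSV file
print(df.columns)  # expect the columns listed above (tweet sentiment, directness, target, group, annotator sentiment)
print(df.head())
```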
## Experiments
To replicate our experiments, please see https://github.com/HKUST-KnowComp/MLMA_hate_speech/blob/master/README.md
|
competitions/aiornot | 2023-03-30T12:32:32.000Z | [
"task_categories:image-classification",
"image-classification",
"autotrain",
"competitions",
"region:us"
] | competitions | null | null | null | 24 | 20 | ---
task_categories:
- image-classification
tags:
- image-classification
- autotrain
- competitions
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: label
dtype: int64
---
# Dataset Card for aiornot
Dataset for the [aiornot competition](https://hf.co/spaces/competitions/aiornot).
By accessing this dataset, you accept the rules of the AI or Not competition.
Please note that the dataset may contain images which are not considered safe for work.
## Usage
### With Hugging Face Datasets 🤗
You can download and use this dataset using the `datasets` library.
📝 **Note:** You must be logged in to your Hugging Face account for the snippet below to work. You can do this with `huggingface-cli login` or `huggingface_hub.notebook_login` if you have the `huggingface_hub` python library installed (`pip install huggingface_hub`).
```python
from datasets import load_dataset
ds = load_dataset('competitions/aiornot')
```
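Each example then exposes the features listed in the metadata above (`id`, `image`, `label`). A minimal sketch for inspecting one record (the split name is an assumption; check `ds.keys()` for the actual splits):
```python
example = ds["train"][0]   # "train" is assumed here; list the available splits with ds.keys()
print(example["id"], example["label"])
example["image"]           # decoded as a PIL image
```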
### From Original Files
The original files and sample submission can be found in the `.extras` folder (under the files and versions tab of this repo). Feel free to download them and use them directly if you don't wish to use the `datasets` library.
|
metaeval/scinli | 2023-01-26T09:34:08.000Z | [
"license:apache-2.0",
"region:us"
] | metaeval | null | null | null | 0 | 20 | ---
license: apache-2.0
---
# SciNLI: A Corpus for Natural Language Inference on Scientific Text
https://github.com/msadat3/SciNLI
```bib
@inproceedings{sadat-caragea-2022-scinli,
title = "{S}ci{NLI}: A Corpus for Natural Language Inference on Scientific Text",
author = "Sadat, Mobashir and
Caragea, Cornelia",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.511",
pages = "7399--7409",
}
``` |
wwydmanski/colorectal-carcinoma-microbiome-fengq | 2023-02-25T15:34:21.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"microbiome",
"tabular",
"gut-microbiota",
"region:us"
] | wwydmanski | The dataset contains 16S rRNA gene sequencing data from healthy controls and colorectal cancer patients. The dataset was used in the paper "Gut microbiome development along the colorectal adenoma-carcinoma sequence" by Feng et al. (2015). | Feng Q, Liang S, Jia H, et al. Gut microbiome development along the colorectal adenoma-carcinoma sequence. Nat Commun. 2015;6:6528. Published 2015 Mar 11. doi:10.1038/ncomms7528 | null | 1 | 20 | ---
task_categories:
- tabular-classification
tags:
- microbiome
- tabular
- gut-microbiota
pretty_name: Colorectal Carcinoma Feng Q 2015
size_categories:
- n<1K
---
## Publication Abstract
Colorectal cancer, a commonly diagnosed cancer in the elderly, often develops slowly from benign polyps called adenoma. The gut microbiota is believed to be directly involved in colorectal carcinogenesis. The identity and functional capacity of the adenoma- or carcinoma-related gut microbe(s), however, have not been surveyed in a comprehensive manner. Here we perform a metagenome-wide association study (MGWAS) on stools from advanced adenoma and carcinoma patients and from healthy subjects, revealing microbial genes, strains and functions enriched in each group. An analysis of potential risk factors indicates that high intake of red meat relative to fruits and vegetables appears to associate with outgrowth of bacteria that might contribute to a more hostile gut environment. These findings suggest that faecal microbiome-based strategies may be useful for early diagnosis and treatment of colorectal adenoma or carcinoma.
## Dataset
156 metagenomic shotgun-sequenced faecal samples from colorectal adenoma and carcinoma patients and healthy controls
### Configurations
- `presence-absence`
- `CLR`
## Usage
```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("wwydmanski/colorectal-carcinoma-microbiome-fengq", "presence-absence")
train_dataset, test_dataset = dataset['train'], dataset['test']
X_train = np.array(train_dataset['values'])
y_train = np.array(train_dataset['target'])
X_test = np.array(test_dataset['values'])
y_test = np.array(test_dataset['target'])
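
# Optional: a quick baseline classifier on these tabular features
# (a sketch; assumes scikit-learn is installed, any tabular classifier works here).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))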
``` |
adilbekovich/Sentiment140Twitter | 2023-03-05T20:16:56.000Z | [
"region:us"
] | adilbekovich | null | null | null | 0 | 20 | Entry not found |
sustcsenlp/bn_news_summarization | 2023-03-05T16:41:32.000Z | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:bn",
"license:cc0-1.0",
"region:us"
] | sustcsenlp | null | null | null | 0 | 20 | ---
license: cc0-1.0
task_categories:
- summarization
language:
- bn
size_categories:
- 1K<n<10K
---
# Bengali Abstractive News Summarization (BANS)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [BANS PAPER](https://doi.org/10.1007/978-981-33-4673-4_4)
- **Leaderboard:**
- **Point of Contact:** [Prithwiraj Bhattacharjee](mailto:prithwiraj_cse@lus.ac.bd)
### Dataset Summary
News and text summarization has become very popular in NLP, and both extractive and abstractive approaches have been implemented for many languages. A significant amount of data is a primary need for any summarization system, yet only a few datasets are available for Bengali. Our dataset was made for Bengali Abstractive News Summarization (BANS). Since abstractive summarization is neural-network based, it needs large amounts of data to perform well, so we built a standard Bengali abstractive summarization dataset by crawling the online Bengali news portal bangla.bdnews24.com. We crawled more than 19k articles with their summaries and standardized the data.
### Downloading the data
```
from datasets import load_dataset
train = load_dataset("sustcsenlp/bn_news_summarization",split="train")
```
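A quick way to inspect the loaded split (the column names are not listed on this card, so print them first):
```python
print(train.column_names)  # the article and summary fields
print(train[0])            # one article/summary pair
```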
### Dataset Description
| Description | Data Info. |
| ----------- | ----------- |
| Total no of articles | 19096 |
| Total no of summaries | 19096 |
| Maximum no of words in an article | 76 |
| Maximum no of words in a summary | 12 |
| Minimum no of words in an article | 5 |
| Minimum no of words in a summary | 3 |
### Languages
This dataset contains Bangla Text Data.
## Acknowledgement
We would like to thank Shahjalal University of Science and Technology (SUST) research center and SUST NLP research group for their support.
### Citation Information
```
@InProceedings{10.1007/978-981-33-4673-4_4,
author="Bhattacharjee, Prithwiraj
and Mallick, Avi
and Saiful Islam, Md.
and Marium-E-Jannat",
editor="Kaiser, M. Shamim
and Bandyopadhyay, Anirban
and Mahmud, Mufti
and Ray, Kanad",
title="Bengali Abstractive News Summarization (BANS): A Neural Attention Approach",
booktitle="Proceedings of International Conference on Trends in Computational and Cognitive Engineering",
year="2021",
publisher="Springer Singapore",
address="Singapore",
pages="41--51",
abstract="Bhattacharjee, PrithwirajMallick, AviSaiful Islam, Md.Marium-E-JannatAbstractive summarization is the process of generating novel sentences based on the information extracted from the original text document while retaining the context. Due to abstractive summarization's underlying complexities, most of the past research work has been done on the extractive summarization approach. Nevertheless, with the triumph of the sequence-to-sequence (seq2seq) model, abstractive summarization becomes more viable. Although a significant number of notable research has been done in the English language based on abstractive summarization, only a couple of works have been done on Bengali abstractive news summarization (BANS). In this article, we presented a seq2seq based Long Short-Term Memory (LSTM) network model with attention at encoder-decoder. Our proposed system deploys a local attention-based model that produces a long sequence of words with lucid and human-like generated sentences with noteworthy information of the original document. We also prepared a dataset of more than 19 k articles and corresponding human-written summaries collected from bangla.bdnews24.com (https://bangla.bdnews24.com/) which is till now the most extensive dataset for Bengali news document summarization and publicly published in Kaggle (https://www.kaggle.com/prithwirajsust/bengali-news-summarization-dataset) We evaluated our model qualitatively and quantitatively and compared it with other published results. It showed significant improvement in terms of human evaluation scores with state-of-the-art approaches for BANS.",
isbn="978-981-33-4673-4"
}
```
### Contributors
| Name | University |
| ----------- | ----------- |
| Prithwiraj Bhattacharjee | Shahjalal University of Science and Technology |
| Avi Mallick | Shahjalal University of Science and Technology |
| Md. Saiful Islam | Shahjalal University of Science and Technology |
| Marium-E-Jannat | Shahjalal University of Science and Technology | |
AnonymousSub/MedQuAD_Context_Question_Answer_Triples_TWO | 2023-03-14T18:17:38.000Z | [
"region:us"
] | AnonymousSub | null | null | null | 4 | 20 | ---
dataset_info:
features:
- name: Contexts
dtype: string
- name: Questions
dtype: string
- name: Answers
dtype: string
splits:
- name: train
num_bytes: 190839732
num_examples: 47441
download_size: 21760499
dataset_size: 190839732
---
# Dataset Card for "MedQuAD_Context_Question_Answer_Triples_TWO"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ArunaSaraswathy/pii_test | 2023-03-20T17:39:55.000Z | [
"region:us"
] | ArunaSaraswathy | null | null | null | 0 | 20 | Entry not found |
bigbio/bronco | 2023-04-01T16:47:31.000Z | [
"multilinguality:monolingual",
"language:de",
"region:us"
] | bigbio | BRONCO150 is a corpus containing selected sentences of 150 German discharge summaries of cancer patients (hepatocelluar
carcinoma or melanoma) treated at Charite Universitaetsmedizin Berlin or Universitaetsklinikum Tuebingen. All discharge
summaries were manually anonymized. The original documents were scrambled at the sentence level to make reconstruction
of individual reports impossible. | @article{10.1093/jamiaopen/ooab025,
author = {Kittner, Madeleine and Lamping, Mario and Rieke, Damian T and Götze, Julian and Bajwa, Bariya and
Jelas, Ivan and Rüter, Gina and Hautow, Hanjo and Sänger, Mario and Habibi, Maryam and Zettwitz, Marit and
Bortoli, Till de and Ostermann, Leonie and Ševa, Jurica and Starlinger, Johannes and Kohlbacher, Oliver and
Malek, Nisar P and Keilholz, Ulrich and Leser, Ulf},
title = "{Annotation and initial evaluation of a large annotated German oncological corpus}",
journal = {JAMIA Open},
volume = {4},
number = {2},
year = {2021},
month = {04},
issn = {2574-2531},
doi = {10.1093/jamiaopen/ooab025},
url = {https://doi.org/10.1093/jamiaopen/ooab025},
note = {ooab025},
eprint = {https://academic.oup.com/jamiaopen/article-pdf/4/2/ooab025/38830128/ooab025.pdf},
} | null | 2 | 20 | ---
language:
- de
bigbio_language:
- German
multilinguality: monolingual
pretty_name: BRONCO150
homepage: https://www2.informatik.hu-berlin.de/~leser/bronco/index.html
bigbio_pubmed: false
bigbio_public: false
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for BRONCO150
## Dataset Description
- **Homepage:** https://www2.informatik.hu-berlin.de/~leser/bronco/index.html
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER, NED
BRONCO150 is a corpus containing selected sentences of 150 German discharge summaries of cancer patients (hepatocellular carcinoma or melanoma) treated at Charite Universitaetsmedizin Berlin or Universitaetsklinikum Tuebingen. All discharge summaries were manually anonymized. The original documents were scrambled at the sentence level to make reconstruction of individual reports impossible.
## Citation Information
```
@article{10.1093/jamiaopen/ooab025,
author = {Kittner, Madeleine and Lamping, Mario and Rieke, Damian T and Götze, Julian and Bajwa, Bariya and Jelas, Ivan and Rüter, Gina and Hautow, Hanjo and Sänger, Mario and Habibi, Maryam and Zettwitz, Marit and Bortoli, Till de and Ostermann, Leonie and Ševa, Jurica and Starlinger, Johannes and Kohlbacher, Oliver and Malek, Nisar P and Keilholz, Ulrich and Leser, Ulf},
title = "{Annotation and initial evaluation of a large annotated German oncological corpus}",
journal = {JAMIA Open},
volume = {4},
number = {2},
year = {2021},
month = {04},
issn = {2574-2531},
doi = {10.1093/jamiaopen/ooab025},
url = {https://doi.org/10.1093/jamiaopen/ooab025},
note = {ooab025},
eprint = {https://academic.oup.com/jamiaopen/article-pdf/4/2/ooab025/38830128/ooab025.pdf},
}
```
|
texturedesign/td02_urban-surface-textures | 2023-09-02T18:55:43.000Z | [
"task_categories:unconditional-image-generation",
"annotations_creators:expert-generated",
"size_categories:n<1K",
"source_datasets:original",
"license:cc-by-nc-4.0",
"texture-synthesis",
"photography",
"non-infringing",
"region:us"
] | texturedesign | null | null | null | 4 | 20 | ---
annotations_creators:
- expert-generated
language: []
language_creators: []
license:
- cc-by-nc-4.0
multilinguality: []
pretty_name: 'TD02: Urban Surface Texture Photos'
size_categories:
- n<1K
source_datasets:
- original
tags:
- texture-synthesis
- photography
- non-infringing
task_categories:
- unconditional-image-generation
task_ids: []
viewer: false
---
_The Dataset Teaser is now enabled instead! Isn't this better?_

# TD 02: Urban Surface Textures
This dataset contains multi-photo texture captures in outdoor urban scenes — many focusing on the ground and the others are walls. Each set has different photos that showcase texture variety, making them ideal for training a domain-specific image generator!
Overall information about this dataset:
* **Format** — JPEG-XL, lossless RGB
* **Resolution** — 4032 × 2268
* **Device** — mobile camera
* **Technique** — hand-held
* **Orientation** — landscape
* **Author**: Alex J. Champandard
* **Configurations**: 4K, 2K (default), 1K
To load the medium- and high-resolution images of the dataset, you'll need to install `jxlpy` from [PyPI](https://pypi.org/project/jxlpy/) with `pip install jxlpy`:
```python
# Recommended use, JXL at high-quality.
from jxlpy import JXLImagePlugin
from datasets import load_dataset
d = load_dataset('texturedesign/td02_urban-surface-textures', 'JXL@4K', num_proc=4)
print(len(d['train']), len(d['test']))
```
The lowest-resolution images are available as PNG with a regular installation of `pillow`:
```python
# Alternative use, PNG at low-quality.
from datasets import load_dataset
d = load_dataset('texturedesign/td02_urban-surface-textures', 'PNG@1K', num_proc=4)
dataset = d['train']  # pick a split before filtering/selecting

# EXAMPLE: Discard all other sets except Set #1.
dataset = dataset.filter(lambda s: s['set'] == 1)
# EXAMPLE: Only keep images with index 0 and 2.
dataset = dataset.select([0, 2])
```
Use the built-in dataset `filter()` and `select()` to narrow down the loaded dataset for training, or to ease development.
## Set #1: Interlocked Paving with Mulch

* **Description**:
- grey H-shaped tiles interlocked together as the base
- brown mulch made of woodchips on top
- diffuse lighting, cloudy day
* **Number of Photos**:
- 18 train
- 5 test
* **Size**: 222 Mb
## Set #2: Outdoor Parking Area

* **Description**:
- diamond shape concrete floor tiles
- grass growing in the middle part
- sunny day, shadows and sunlight
* **Number of Photos**:
- 19 train
- 6 test
* **Size**: 270 Mb
## Set #3: Red Brick Wall Cliché

* **Description**:
- red brick wall with dark grey mortar
- signs of weathering and black splashes
- cloudy day, diffuse lighting
* **Number of Photos**:
- 10 train
- 4 test
* **Size**: 100 Mb
## Set #4: Not Asphalt, Sparkling Bitumen

* **Description**:
  - asphalt with sparkling gravel sticking out
- sunny day, strong light from right
* **Number of Photos**:
- 5 train
- 2 test
* **Size**: 65 Mb
## Set #5: Under The Dallage? Oui.

* **Description**:
- grey floor tiles of different square and rectangular shape
- cloudy day, diffuse lighting
* **Number of Photos**:
- 11 train
- 4 test
* **Size**: 103 Mb
## Set #6: Cracked Road and Scattered Gravel

* **Description**:
- asphalt road with cracks at varying scales
- occasionally gravel scattered around
- sunny day, varied lighting and soft shadows
* **Number of Photos**:
- 27 train
- 7 test
* **Size**: 281 Mb
## Set #7: Funky Grey/Red Brick Wall

* **Description**:
- red brick wall with grey mortar
- cool weathering patterns on the bricks, cement marks
- diffuse light from above
* **Number of Photos**:
- 6 train
- 2 test
* **Size**: 52 Mb
## Set #8: Corrugated Metal Wall

* **Description**:
- corrugated metal wall used on a construction site
- no significant effects from lighting
* **Number of Photos**:
- 6 train
- 2 test
* **Size**: 37 Mb
## Set #9: Colorful Aligned Paving Stones

* **Description**:
- square or rectangular pavement stones with rounded corners
- variety of different colors, dark gaps between
- neutral lighting in the shade
* **Number of Photos**:
- 5 train
- 1 test
* **Size**: 46 Mb
## Set #10: Fancy Footpath Concrete

* **Description**:
- fancy-looking concrete path in the park
- long straight gaps between the tiles
- strong sunlight from the left side
* **Number of Photos**:
- 11 train
- 4 test
* **Size**: 137 Mb
## Set #11: Asphalt Slowly Turning Green

* **Description**:
- dirty asphalt with darker patches
- cloudy day, diffuse lighting
* **Number of Photos**:
- 9 train
- 2 test
* **Size**: 95 Mb
## Set #12: Clay Tiles on Old Patio

* **Description**:
- clay tiles in grey cement
- drops of white dirt and rusty patches
- diffuse lighting in the shade
* **Number of Photos**:
- 8 train
- 2 test
* **Size**: 64.6 Mb
## Set #13: Brown Shed Wooden Planks

* **Description**:
- beige wooden planks with dark knots
- outside wall of a brown shed
- diffuse lighting in the shade
* **Number of Photos**:
- 5 train
- 1 test
* **Size**: 31.5 Mb
## Set #14: Hexagonal Plastic Grid for White Gravel

* **Description**:
- plastic grid, both white and black, with hexagonal tiles
- white decorative gravel within the alcoves
- areas mostly in the shade and more with the sun
* **Number of Photos**:
- 8 train
- 2 test
* **Size**: 87.5 Mb
## Set #15: Blue-Painted Wooden Shutters

* **Description**:
- blue-painted outdoor shutters made of wood
- weathered paint that's cracking and new-looking parts
- metal bars partly rusty, some well preserved
- diffuse lighting in the shade
* **Number of Photos**:
- 10 train
- 2 test
* **Size**: 42.7 Mb
## Set #16: Weathered Sand-Blasted Concrete Wall

* **Description**:
- concrete wall with a sand-blasted texture
- weathering that includes mold, water stains, broken concrete
- diffuse lighting in the shade
* **Number of Photos**:
- 10 train
- 2 test
* **Size**: 112.2 Mb
## Set #17: Dirty & Dusty Square Stone Paving

* **Description**:
- staggered grey square stone paving, light cement
- dirt and dust on the surface, weathering
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 14 train
- 4 test
* **Size**: 118.3 Mb
## Set #18: Old Clay Brick Wall

* **Description**:
- old clay bricks in a church wall
- different types of damage and weathering
- varied conditions depending on direction
- mixed lighting in the shade and sun
* **Number of Photos**:
- 14 train
- 3 test
* **Size**: 116.7 Mb
## Set #19: Cobblestone Pavement with Grass Clumps

* **Description**:
- old cobblestone pavement with aged stones
- grass or cement between the cobble stones
- diffuse lighting in the shade
* **Number of Photos**:
- 17 train
- 4 test
* **Size**: 179.8 Mb
## Set #20: Gosh, So Shiny! New Asphalt

* **Description**:
- new asphalt with dark bitumen with embedded bright grey gravel
- mixed lighting, mostly in shadow with specular secondary light
* **Number of Photos**:
- 13 train
- 3 test
* **Size**: 108.7 Mb
|
jellyChiru/SParC | 2023-04-17T23:00:38.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | jellyChiru | null | null | null | 1 | 20 | ---
license: cc-by-sa-4.0
---
```
@InProceedings{Yu&al.19,
  title     = {SParC: Cross-Domain Semantic Parsing in Context},
  author    = {Tao Yu and Rui Zhang and Michihiro Yasunaga and Yi Chern Tan and Xi Victoria Lin and Suyi Li and Heyang Er and Irene Li and Bo Pang and Tao Chen and Emily Ji and Shreya Dixit and David Proctor and Sungrok Shim and Jonathan Kraft and Vincent Zhang and Caiming Xiong and Richard Socher and Dragomir Radev},
  booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  year      = {2019},
  address   = {Florence, Italy},
  publisher = {Association for Computational Linguistics}
}

@inproceedings{Yu&al.18c,
  title     = {Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task},
  author    = {Tao Yu and Rui Zhang and Kai Yang and Michihiro Yasunaga and Dongxu Wang and Zifan Li and James Ma and Irene Li and Qingning Yao and Shanelle Roman and Zilin Zhang and Dragomir Radev},
  booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
  address   = {Brussels, Belgium},
  publisher = {Association for Computational Linguistics},
  year      = {2018}
}
```
Reference links
SParC task link: https://yale-lily.github.io/sparc
SParC Github page: https://github.com/taoyds/sparc
Spider task link: https://yale-lily.github.io/spider
Spider Github page: https://github.com/taoyds/spider |
jxu124/llava_conversation_58k | 2023-05-20T18:49:03.000Z | [
"region:us"
] | jxu124 | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: global_image_id
dtype: string
- name: image_path
dtype: string
- name: dialog
sequence:
sequence: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 97341653
num_examples: 56681
download_size: 49592162
dataset_size: 97341653
---
# Dataset Card for "llava_conversation_58k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MattiaL/tapir-cleaned-116k | 2023-05-09T07:59:44.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"region:us"
] | MattiaL | null | null | null | 1 | 20 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: Tapir-Cleaned
task_categories:
- text-generation
size_categories:
- 100K<n<1M
---
# Dataset Card for Tapir-Cleaned
This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.
## Tapir Dataset Summary
Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the IFTTT platform.
After a thorough cleaning process that involved the removal of redundant and inconsistent recipes, the refined dataset was condensed to include 116,862 high-quality recipes.
This curated set of instruction data is particularly useful for conducting instruction-tuning exercises for language models,
allowing them to more accurately follow instructions and achieve superior performance.
The latest version of Tapir includes a correlation score that helps identify the most appropriate description-rule pairs for instruction tuning.
Description-rule pairs with a score greater than 0.75 are deemed good enough and are prioritized for further analysis and tuning.
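For example, that high-quality subset can be selected directly after loading (a minimal sketch; the `score` field is stored as a string in the instance shown under Data Instances, so it is cast to `float` here):
```python
from datasets import load_dataset

ds = load_dataset("MattiaL/tapir-cleaned-116k", split="train")
high_quality = ds.filter(lambda ex: float(ex["score"]) > 0.75)
print(len(ds), "->", len(high_quality))
```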
### Supported Tasks and Leaderboards
The Tapir dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Tapir are mainly in English (BCP-47 en).
# Dataset Structure
### Data Instances
```json
{
"instruction":"From the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.",
"input":"If lostphone is texted to my phone the volume will turn up to 100 so I can find it.",
"output":"IF Android SMS New SMS received matches search THEN Android Device Set ringtone volume",
"score":"0.804322",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nFrom the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.\n\n### Input:\nIf lostphone is texted to my phone the volume will turn up to 100 so I can find it.\n\n### Response:\nIF Android SMS New SMS received matches search THEN Android Device Set ringtone volume",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform.
* `input`: context or input for the task. Each of the 116K inputs is unique.
* `output`: the answer taken from the original Tapir Dataset formatted as an IFTTT recipe.
* `score`: the correlation score obtained via BertForNextSentencePrediction
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models.
### Data Splits
| | train |
|---------------|------:|
| tapir | 116862 |
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{tapir,
author = {Mattia Limone, Gaetano Cimino, Annunziata Elefante},
title = {TAPIR: Trigger Action Platform for Information Retrieval},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/MattiaLimone/ifttt_recommendation_system}},
}
``` |
claritylab/utcd | 2023-05-24T17:27:42.000Z | [
"task_categories:text-classification",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"arxiv:2005.00547",
"arxiv:2010.12421",
"arxiv:1509.01626",
"arxiv:1307.5336",
"arxiv:1909.05855",
"arxiv:1909.02027",
"arxiv:... | claritylab | UTCD is a compilation of 18 classification datasets spanning 3 categories of Sentiment,
Intent/Dialogue and Topic classification. UTCD focuses on the task of zero-shot text classification where the
candidate labels are descriptive of the text being classified. UTCD consists of ~ 6M/800K train/test examples. | null | null | 4 | 20 | ---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1M<n<10M
annotations_creators:
- no-annotation
multilinguality:
- monolingual
pretty_name: UTCD
dataset_info:
- config_name: in-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add Alarm
'1': Album
'2': Animal
'3': Artist
'4': Athlete
'5': Book Appointment
'6': Book House
'7': Building
'8': Business
'9': Business & Finance
'10': Buy Bus Ticket
'11': Buy Event Tickets
'12': Buy Movie Tickets
'13': Check Balance
'14': Company
'15': Computers & Internet
'16': Education & Reference
'17': Educational Institution
'18': Entertainment & Music
'19': Family & Relationships
'20': Film
'21': Find Apartment
'22': Find Attractions
'23': Find Bus
'24': Find Events
'25': Find Home By Area
'26': Find Movies
'27': Find Provider
'28': Find Restaurants
'29': Find Trains
'30': Get Alarms
'31': Get Available Time
'32': Get Cars Available
'33': Get Event Dates
'34': Get Events
'35': Get Ride
'36': Get Times For Movie
'37': Get Weather
'38': Health
'39': Lookup Music
'40': Lookup Song
'41': Make Payment
'42': Mean Of Transportation
'43': Natural Place
'44': Office Holder
'45': Plant
'46': Play Media
'47': Play Movie
'48': Play Song
'49': Politics & Government
'50': Request Payment
'51': Reserve Car
'52': Reserve Hotel
'53': Reserve One way Flight
'54': Reserve Restaurant
'55': Reserve Round trip Flights
'56': Schedule Visit
'57': Science & Mathematics
'58': Science & Technology
'59': Search Hotel
'60': Search House
'61': Search One way Flight
'62': Search Round trip Flights
'63': Society & Culture
'64': Sports
'65': Transfer Money
'66': Village
'67': World News
'68': Written Work
'69': accept reservations
'70': account blocked
'71': add contact
'72': admiration
'73': alarm
'74': alarm query
'75': alarm remove
'76': alarm set
'77': amusement
'78': anger
'79': annoyance
'80': application status
'81': approval
'82': apr
'83': are you a bot
'84': audio volume down
'85': audio volume mute
'86': audio volume other
'87': audio volume up
'88': balance
'89': bill balance
'90': bill due
'91': book flight
'92': book hotel
'93': calculator
'94': calendar
'95': calendar query
'96': calendar remove
'97': calendar set
'98': calendar update
'99': calories
'100': cancel
'101': cancel reservation
'102': car rental
'103': card declined
'104': caring
'105': carry on
'106': change accent
'107': change ai name
'108': change language
'109': change speed
'110': change user name
'111': change volume
'112': cleaning
'113': coffee
'114': confirm reservation
'115': confusion
'116': convert
'117': cook time
'118': cooking query
'119': cooking recipe
'120': create or add
'121': credit limit
'122': credit limit change
'123': credit score
'124': curiosity
'125': currency
'126': current location
'127': damaged card
'128': date
'129': date time convert
'130': date time query
'131': definition
'132': desire
'133': direct deposit
'134': directions
'135': disappointment
'136': disapproval
'137': disgust
'138': distance
'139': do you have pets
'140': email add contact
'141': email query
'142': email query contact
'143': email send email
'144': embarrassment
'145': events
'146': exchange rate
'147': excitement
'148': expiration date
'149': factoid
'150': fear
'151': find phone
'152': flight status
'153': flip coin
'154': food last
'155': freeze account
'156': fun fact
'157': game
'158': gas
'159': gas type
'160': general greet
'161': general joke
'162': general quirky
'163': goodbye
'164': gratitude
'165': greet
'166': greeting
'167': grief
'168': how busy
'169': how old are you
'170': hue light dim
'171': hue light off
'172': hue light up
'173': improve credit score
'174': income
'175': ingredient substitution
'176': ingredients list
'177': insurance
'178': insurance change
'179': interest rate
'180': international fees
'181': international visa
'182': iot cleaning
'183': iot coffee
'184': iot hue light change
'185': iot hue light dim
'186': iot hue light off
'187': iot hue light on
'188': iot hue light up
'189': iot wemo on
'190': iot wemo plug off
'191': joke
'192': joy
'193': jump start
'194': last maintenance
'195': lists create or add
'196': lists query
'197': lists remove
'198': lost luggage
'199': love
'200': make call
'201': maybe
'202': meal suggestion
'203': meaning of life
'204': measurement conversion
'205': meeting schedule
'206': min payment
'207': mpg
'208': music
'209': music dislike ness
'210': music likeness
'211': music query
'212': music settings
'213': negative
'214': nervousness
'215': neutral
'216': new card
'217': news query
'218': next holiday
'219': next song
'220': 'no'
'221': nutrition info
'222': oil change how
'223': oil change when
'224': optimism
'225': order
'226': order checks
'227': order status
'228': paid time off request status
'229': paid time off used
'230': pay bill
'231': payday
'232': pin change
'233': play audiobook
'234': play game
'235': play music
'236': play podcasts
'237': play radio
'238': plug type
'239': podcasts
'240': positive
'241': post
'242': pride
'243': pto balance
'244': pto request
'245': qa currency
'246': qa definition
'247': qa factoid
'248': qa maths
'249': qa stock
'250': query
'251': query contact
'252': quirky
'253': radio
'254': realization
'255': recipe
'256': recommendation events
'257': recommendation locations
'258': recommendation movies
'259': redeem rewards
'260': relief
'261': reminder
'262': reminder update
'263': remorse
'264': remove
'265': repeat
'266': replacement card duration
'267': report fraud
'268': report lost card
'269': reset settings
'270': restaurant reservation
'271': restaurant reviews
'272': restaurant suggestion
'273': rewards balance
'274': roll dice
'275': rollover 401k
'276': routing
'277': sadness
'278': schedule maintenance
'279': schedule meeting
'280': send email
'281': set
'282': settings
'283': share location
'284': shopping list
'285': shopping list update
'286': smart home
'287': social post
'288': social query
'289': spelling
'290': spending history
'291': surprise
'292': sync device
'293': take away order
'294': take away query
'295': taxes
'296': tell joke
'297': text
'298': thank you
'299': ticket
'300': time
'301': timer
'302': timezone
'303': tire change
'304': tire pressure
'305': todo list
'306': todo list update
'307': traffic
'308': transactions
'309': transfer
'310': translate
'311': transport query
'312': transport taxi
'313': transport ticket
'314': transport traffic
'315': travel alert
'316': travel notification
'317': travel suggestion
'318': uber
'319': update playlist
'320': user name
'321': vaccines
'322': volume other
'323': w2 wage and tax statement
'324': weather
'325': weather query
'326': wemo off
'327': wemo plug on
'328': what are your hobbies
'329': what can i ask you
'330': what is your name
'331': what song
'332': where are you from
'333': whisper mode
'334': who do you work for
'335': who made you
'336': 'yes'
- name: dataset_name
dtype:
class_label:
names:
'0': go_emotion
'1': sentiment_tweets_2020
'2': emotion
'3': sgd
'4': clinc_150
'5': slurp
'6': ag_news
'7': dbpedia
'8': yahoo
splits:
- name: train
num_bytes: 347382307
num_examples: 2192703
- name: test
num_bytes: 36063588
num_examples: 168365
download_size: 1744258165
dataset_size: 383445895
- config_name: aspect-normalized-in-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add Alarm
'1': Album
'2': Animal
'3': Artist
'4': Athlete
'5': Book Appointment
'6': Book House
'7': Building
'8': Business
'9': Business & Finance
'10': Buy Bus Ticket
'11': Buy Event Tickets
'12': Buy Movie Tickets
'13': Check Balance
'14': Company
'15': Computers & Internet
'16': Education & Reference
'17': Educational Institution
'18': Entertainment & Music
'19': Family & Relationships
'20': Film
'21': Find Apartment
'22': Find Attractions
'23': Find Bus
'24': Find Events
'25': Find Home By Area
'26': Find Movies
'27': Find Provider
'28': Find Restaurants
'29': Find Trains
'30': Get Alarms
'31': Get Available Time
'32': Get Cars Available
'33': Get Event Dates
'34': Get Events
'35': Get Ride
'36': Get Times For Movie
'37': Get Weather
'38': Health
'39': Lookup Music
'40': Lookup Song
'41': Make Payment
'42': Mean Of Transportation
'43': Natural Place
'44': Office Holder
'45': Plant
'46': Play Media
'47': Play Movie
'48': Play Song
'49': Politics & Government
'50': Request Payment
'51': Reserve Car
'52': Reserve Hotel
'53': Reserve One way Flight
'54': Reserve Restaurant
'55': Reserve Round trip Flights
'56': Schedule Visit
'57': Science & Mathematics
'58': Science & Technology
'59': Search Hotel
'60': Search House
'61': Search One way Flight
'62': Search Round trip Flights
'63': Society & Culture
'64': Sports
'65': Transfer Money
'66': Village
'67': World News
'68': Written Work
'69': accept reservations
'70': account blocked
'71': add contact
'72': admiration
'73': alarm
'74': alarm query
'75': alarm remove
'76': alarm set
'77': amusement
'78': anger
'79': annoyance
'80': application status
'81': approval
'82': apr
'83': are you a bot
'84': audio volume down
'85': audio volume mute
'86': audio volume other
'87': audio volume up
'88': balance
'89': bill balance
'90': bill due
'91': book flight
'92': book hotel
'93': calculator
'94': calendar
'95': calendar query
'96': calendar remove
'97': calendar set
'98': calendar update
'99': calories
'100': cancel
'101': cancel reservation
'102': car rental
'103': card declined
'104': caring
'105': carry on
'106': change accent
'107': change ai name
'108': change language
'109': change speed
'110': change user name
'111': change volume
'112': cleaning
'113': coffee
'114': confirm reservation
'115': confusion
'116': convert
'117': cook time
'118': cooking query
'119': cooking recipe
'120': create or add
'121': credit limit
'122': credit limit change
'123': credit score
'124': curiosity
'125': currency
'126': current location
'127': damaged card
'128': date
'129': date time convert
'130': date time query
'131': definition
'132': desire
'133': direct deposit
'134': directions
'135': disappointment
'136': disapproval
'137': disgust
'138': distance
'139': do you have pets
'140': email add contact
'141': email query
'142': email query contact
'143': email send email
'144': embarrassment
'145': events
'146': exchange rate
'147': excitement
'148': expiration date
'149': factoid
'150': fear
'151': find phone
'152': flight status
'153': flip coin
'154': food last
'155': freeze account
'156': fun fact
'157': game
'158': gas
'159': gas type
'160': general greet
'161': general joke
'162': general quirky
'163': goodbye
'164': gratitude
'165': greet
'166': greeting
'167': grief
'168': how busy
'169': how old are you
'170': hue light dim
'171': hue light off
'172': hue light up
'173': improve credit score
'174': income
'175': ingredient substitution
'176': ingredients list
'177': insurance
'178': insurance change
'179': interest rate
'180': international fees
'181': international visa
'182': iot cleaning
'183': iot coffee
'184': iot hue light change
'185': iot hue light dim
'186': iot hue light off
'187': iot hue light on
'188': iot hue light up
'189': iot wemo on
'190': iot wemo plug off
'191': joke
'192': joy
'193': jump start
'194': last maintenance
'195': lists create or add
'196': lists query
'197': lists remove
'198': lost luggage
'199': love
'200': make call
'201': maybe
'202': meal suggestion
'203': meaning of life
'204': measurement conversion
'205': meeting schedule
'206': min payment
'207': mpg
'208': music
'209': music dislike ness
'210': music likeness
'211': music query
'212': music settings
'213': negative
'214': nervousness
'215': neutral
'216': new card
'217': news query
'218': next holiday
'219': next song
'220': 'no'
'221': nutrition info
'222': oil change how
'223': oil change when
'224': optimism
'225': order
'226': order checks
'227': order status
'228': paid time off request status
'229': paid time off used
'230': pay bill
'231': payday
'232': pin change
'233': play audiobook
'234': play game
'235': play music
'236': play podcasts
'237': play radio
'238': plug type
'239': podcasts
'240': positive
'241': post
'242': pride
'243': pto balance
'244': pto request
'245': qa currency
'246': qa definition
'247': qa factoid
'248': qa maths
'249': qa stock
'250': query
'251': query contact
'252': quirky
'253': radio
'254': realization
'255': recipe
'256': recommendation events
'257': recommendation locations
'258': recommendation movies
'259': redeem rewards
'260': relief
'261': reminder
'262': reminder update
'263': remorse
'264': remove
'265': repeat
'266': replacement card duration
'267': report fraud
'268': report lost card
'269': reset settings
'270': restaurant reservation
'271': restaurant reviews
'272': restaurant suggestion
'273': rewards balance
'274': roll dice
'275': rollover 401k
'276': routing
'277': sadness
'278': schedule maintenance
'279': schedule meeting
'280': send email
'281': set
'282': settings
'283': share location
'284': shopping list
'285': shopping list update
'286': smart home
'287': social post
'288': social query
'289': spelling
'290': spending history
'291': surprise
'292': sync device
'293': take away order
'294': take away query
'295': taxes
'296': tell joke
'297': text
'298': thank you
'299': ticket
'300': time
'301': timer
'302': timezone
'303': tire change
'304': tire pressure
'305': todo list
'306': todo list update
'307': traffic
'308': transactions
'309': transfer
'310': translate
'311': transport query
'312': transport taxi
'313': transport ticket
'314': transport traffic
'315': travel alert
'316': travel notification
'317': travel suggestion
'318': uber
'319': update playlist
'320': user name
'321': vaccines
'322': volume other
'323': w2 wage and tax statement
'324': weather
'325': weather query
'326': wemo off
'327': wemo plug on
'328': what are your hobbies
'329': what can i ask you
'330': what is your name
'331': what song
'332': where are you from
'333': whisper mode
'334': who do you work for
'335': who made you
'336': 'yes'
- name: dataset_name
dtype:
class_label:
names:
'0': go_emotion
'1': sentiment_tweets_2020
'2': emotion
'3': sgd
'4': clinc_150
'5': slurp
'6': ag_news
'7': dbpedia
'8': yahoo
splits:
- name: train
num_bytes: 28974188
num_examples: 115127
- name: validation
num_bytes: 3213586
num_examples: 12806
- name: test
num_bytes: 36063590
num_examples: 168365
download_size: 1744258165
dataset_size: 68251364
- config_name: out-of-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add To Playlist
'1': Bank account or service
'2': Book Restaurant
'3': Checking or savings account
'4': Chemistry; Metallurgy
'5': Consumer Loan
'6': Credit card
'7': Credit card or prepaid card
'8': Credit reporting
'9': Credit reporting, credit repair services, or other personal consumer
reports
'10': Debt collection
'11': EUROPEAN UNION
'12': Electricity
'13': Fixed Constructions
'14': General tagging of new or cross-sectional technology
'15': Get Weather
'16': Human Necessities
'17': Mechanical Engineering; Lightning; Heating; Weapons; Blasting
'18': Money transfer, virtual currency, or money service
'19': Money transfers
'20': Mortgage
'21': Other financial service
'22': Payday loan
'23': Payday loan, title loan, or personal loan
'24': Performing Operations; Transporting
'25': Physics
'26': Play Music
'27': Prepaid card
'28': Rate Book
'29': Refund not showing up
'30': Search Creative Work
'31': Search Screening Event
'32': Student loan
'33': Textiles; Paper
'34': Vehicle loan or lease
'35': Virtual currency
'36': activate my card
'37': age limit
'38': agri-foodstuffs
'39': agriculture, forestry and fisheries
'40': alarm query
'41': alarm remove
'42': alarm set
'43': apple pay or google pay
'44': atm support
'45': audio volume down
'46': audio volume mute
'47': audio volume other
'48': audio volume up
'49': automatic top up
'50': balance not updated after bank transfer
'51': balance not updated after cheque or cash deposit
'52': beneficiary not allowed
'53': business and competition
'54': calendar query
'55': calendar remove
'56': calendar set
'57': cancel transfer
'58': card about to expire
'59': card acceptance
'60': card arrival
'61': card delivery estimate
'62': card linking
'63': card not working
'64': card payment fee charged
'65': card payment not recognised
'66': card payment wrong exchange rate
'67': card swallowed
'68': cash withdrawal charge
'69': cash withdrawal not recognised
'70': change pin
'71': compromised card
'72': contactless not working
'73': cooking query
'74': cooking recipe
'75': country support
'76': datetime convert
'77': datetime query
'78': declined card payment
'79': declined cash withdrawal
'80': declined transfer
'81': direct debit payment not recognised
'82': disposable card limits
'83': economics
'84': edit personal details
'85': education and communications
'86': email addcontact
'87': email query
'88': email querycontact
'89': email sendemail
'90': employment and working conditions
'91': energy
'92': environment
'93': exchange charge
'94': exchange rate
'95': exchange via app
'96': extra charge on statement
'97': failed transfer
'98': fiat currency support
'99': finance
'100': general affirm
'101': general commandstop
'102': general confirm
'103': general dontcare
'104': general explain
'105': general greet
'106': general joke
'107': general negate
'108': general praise
'109': general quirky
'110': general repeat
'111': geography
'112': get disposable virtual card
'113': get physical card
'114': getting spare card
'115': getting virtual card
'116': industry
'117': international organisations
'118': international relations
'119': iot cleaning
'120': iot coffee
'121': iot hue lightchange
'122': iot hue lightdim
'123': iot hue lightoff
'124': iot hue lighton
'125': iot hue lightup
'126': iot wemo off
'127': iot wemo on
'128': law
'129': lists createoradd
'130': lists query
'131': lists remove
'132': lost or stolen card
'133': lost or stolen phone
'134': music dislikeness
'135': music likeness
'136': music query
'137': music settings
'138': negative
'139': neutral
'140': news query
'141': order physical card
'142': passcode forgotten
'143': pending card payment
'144': pending cash withdrawal
'145': pending top up
'146': pending transfer
'147': pin blocked
'148': play audiobook
'149': play game
'150': play music
'151': play podcasts
'152': play radio
'153': politics
'154': positive
'155': production, technology and research
'156': qa currency
'157': qa definition
'158': qa factoid
'159': qa maths
'160': qa stock
'161': receiving money
'162': recommendation events
'163': recommendation locations
'164': recommendation movies
'165': request refund
'166': reverted card payment?
'167': science
'168': social post
'169': social query
'170': social questions
'171': supported cards and currencies
'172': takeaway order
'173': takeaway query
'174': terminate account
'175': top up by bank transfer charge
'176': top up by card charge
'177': top up by cash or cheque
'178': top up failed
'179': top up limits
'180': top up reverted
'181': topping up by card
'182': trade
'183': transaction charged twice
'184': transfer fee charged
'185': transfer into account
'186': transfer not received by recipient
'187': transfer timing
'188': transport
'189': transport query
'190': transport taxi
'191': transport ticket
'192': transport traffic
'193': unable to verify identity
'194': verify my identity
'195': verify source of funds
'196': verify top up
'197': virtual card not working
'198': visa or mastercard
'199': weather query
'200': why verify identity
'201': wrong amount of cash received
'202': wrong exchange rate for cash withdrawal
- name: dataset_name
dtype:
class_label:
names:
'0': amazon_polarity
'1': finance_sentiment
'2': yelp
'3': banking77
'4': snips
'5': nlu_evaluation
'6': multi_eurlex
'7': patent
'8': consumer_finance
splits:
- name: train
num_bytes: 3608196895
num_examples: 4996673
- name: test
num_bytes: 541174753
num_examples: 625911
download_size: 1744258165
dataset_size: 4149371648
- config_name: aspect-normalized-out-of-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add To Playlist
'1': Bank account or service
'2': Book Restaurant
'3': Checking or savings account
'4': Chemistry; Metallurgy
'5': Consumer Loan
'6': Credit card
'7': Credit card or prepaid card
'8': Credit reporting
'9': Credit reporting, credit repair services, or other personal consumer
reports
'10': Debt collection
'11': EUROPEAN UNION
'12': Electricity
'13': Fixed Constructions
'14': General tagging of new or cross-sectional technology
'15': Get Weather
'16': Human Necessities
'17': Mechanical Engineering; Lightning; Heating; Weapons; Blasting
'18': Money transfer, virtual currency, or money service
'19': Money transfers
'20': Mortgage
'21': Other financial service
'22': Payday loan
'23': Payday loan, title loan, or personal loan
'24': Performing Operations; Transporting
'25': Physics
'26': Play Music
'27': Prepaid card
'28': Rate Book
'29': Refund not showing up
'30': Search Creative Work
'31': Search Screening Event
'32': Student loan
'33': Textiles; Paper
'34': Vehicle loan or lease
'35': Virtual currency
'36': activate my card
'37': age limit
'38': agri-foodstuffs
'39': agriculture, forestry and fisheries
'40': alarm query
'41': alarm remove
'42': alarm set
'43': apple pay or google pay
'44': atm support
'45': audio volume down
'46': audio volume mute
'47': audio volume other
'48': audio volume up
'49': automatic top up
'50': balance not updated after bank transfer
'51': balance not updated after cheque or cash deposit
'52': beneficiary not allowed
'53': business and competition
'54': calendar query
'55': calendar remove
'56': calendar set
'57': cancel transfer
'58': card about to expire
'59': card acceptance
'60': card arrival
'61': card delivery estimate
'62': card linking
'63': card not working
'64': card payment fee charged
'65': card payment not recognised
'66': card payment wrong exchange rate
'67': card swallowed
'68': cash withdrawal charge
'69': cash withdrawal not recognised
'70': change pin
'71': compromised card
'72': contactless not working
'73': cooking query
'74': cooking recipe
'75': country support
'76': datetime convert
'77': datetime query
'78': declined card payment
'79': declined cash withdrawal
'80': declined transfer
'81': direct debit payment not recognised
'82': disposable card limits
'83': economics
'84': edit personal details
'85': education and communications
'86': email addcontact
'87': email query
'88': email querycontact
'89': email sendemail
'90': employment and working conditions
'91': energy
'92': environment
'93': exchange charge
'94': exchange rate
'95': exchange via app
'96': extra charge on statement
'97': failed transfer
'98': fiat currency support
'99': finance
'100': general affirm
'101': general commandstop
'102': general confirm
'103': general dontcare
'104': general explain
'105': general greet
'106': general joke
'107': general negate
'108': general praise
'109': general quirky
'110': general repeat
'111': geography
'112': get disposable virtual card
'113': get physical card
'114': getting spare card
'115': getting virtual card
'116': industry
'117': international organisations
'118': international relations
'119': iot cleaning
'120': iot coffee
'121': iot hue lightchange
'122': iot hue lightdim
'123': iot hue lightoff
'124': iot hue lighton
'125': iot hue lightup
'126': iot wemo off
'127': iot wemo on
'128': law
'129': lists createoradd
'130': lists query
'131': lists remove
'132': lost or stolen card
'133': lost or stolen phone
'134': music dislikeness
'135': music likeness
'136': music query
'137': music settings
'138': negative
'139': neutral
'140': news query
'141': order physical card
'142': passcode forgotten
'143': pending card payment
'144': pending cash withdrawal
'145': pending top up
'146': pending transfer
'147': pin blocked
'148': play audiobook
'149': play game
'150': play music
'151': play podcasts
'152': play radio
'153': politics
'154': positive
'155': production, technology and research
'156': qa currency
'157': qa definition
'158': qa factoid
'159': qa maths
'160': qa stock
'161': receiving money
'162': recommendation events
'163': recommendation locations
'164': recommendation movies
'165': request refund
'166': reverted card payment?
'167': science
'168': social post
'169': social query
'170': social questions
'171': supported cards and currencies
'172': takeaway order
'173': takeaway query
'174': terminate account
'175': top up by bank transfer charge
'176': top up by card charge
'177': top up by cash or cheque
'178': top up failed
'179': top up limits
'180': top up reverted
'181': topping up by card
'182': trade
'183': transaction charged twice
'184': transfer fee charged
'185': transfer into account
'186': transfer not received by recipient
'187': transfer timing
'188': transport
'189': transport query
'190': transport taxi
'191': transport ticket
'192': transport traffic
'193': unable to verify identity
'194': verify my identity
'195': verify source of funds
'196': verify top up
'197': virtual card not working
'198': visa or mastercard
'199': weather query
'200': why verify identity
'201': wrong amount of cash received
'202': wrong exchange rate for cash withdrawal
- name: dataset_name
dtype:
class_label:
names:
'0': amazon_polarity
'1': finance_sentiment
'2': yelp
'3': banking77
'4': snips
'5': nlu_evaluation
'6': multi_eurlex
'7': patent
'8': consumer_finance
splits:
- name: train
num_bytes: 109566474
num_examples: 119167
- name: validation
num_bytes: 12432497
num_examples: 13263
- name: test
num_bytes: 541174753
num_examples: 625911
download_size: 1744258165
dataset_size: 663173724
---
# Universal Text Classification Dataset (UTCD)
## Load dataset
```python
from datasets import load_dataset
dataset = load_dataset('claritylab/utcd', name='in-domain')
```
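The other configurations listed in the metadata above (`aspect-normalized-in-domain`, `out-of-domain`, `aspect-normalized-out-of-domain`) load the same way. Continuing from the snippet above, a minimal sketch for decoding the integer class ids back into their textual labels:
```python
train = dataset['train']
labels_feature = train.features['labels'].feature   # ClassLabel holding the textual label names
source_feature = train.features['dataset_name']     # ClassLabel naming the source dataset

example = train[0]
print(example['text'])
print([labels_feature.int2str(i) for i in example['labels']])
print(source_feature.int2str(example['dataset_name']))
```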
## Description
UTCD is a curated compilation of 18 datasets revised for Zero-shot Text Classification spanning 3 aspect categories of Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~6M/800K train/test examples.
UTCD was introduced in the Findings of ACL'23 Paper **Label Agnostic Pre-training for Zero-shot Text Classification** by ***Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars***. [Project Homepage](https://github.com/ChrisIsKing/zero-shot-text-classification/tree/master).
UTCD Datasets & Principles:
In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain & aspect transfer. As such, in the construction of UTCD we enforce the following principles:
- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present in the datasets across UTCD enable the development of techniques that can leverage the class name which is instrumental in providing zero-shot support. As such, for each of the compiled datasets, labels are standardized such that the labels are descriptive of the text in natural language.
- **Diverse domains and Sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, and Legal, each comprising varied-length sequences (long and short). The datasets are listed below.
- Sentiment
- GoEmotions introduced in [GoEmotions: A Dataset of Fine-Grained Emotions](https://arxiv.org/pdf/2005.00547v2.pdf)
- TweetEval introduced in [TWEETEVAL: Unified Benchmark and Comparative Evaluation for Tweet Classification](https://arxiv.org/pdf/2010.12421v2.pdf) (Sentiment subset)
- Emotion introduced in [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404.pdf)
- Amazon Polarity introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Finance Phrasebank introduced in [Good debt or bad debt: Detecting semantic orientations in economic texts](https://arxiv.org/pdf/1307.5336.pdf)
- Yelp introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Intent/Dialogue
- Schema-Guided Dialogue introduced in [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/pdf/1909.05855v2.pdf)
- Clinc-150 introduced in [An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction](https://arxiv.org/pdf/1909.02027v1.pdf)
- SLURP SLU introduced in [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/pdf/2011.13205.pdf)
- Banking77 introduced in [Efficient Intent Detection with Dual Sentence Encoders](https://arxiv.org/pdf/2003.04807.pdf)
- Snips introduced in [Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces](https://arxiv.org/pdf/1805.10190.pdf)
- NLU Evaluation introduced in [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/pdf/1903.05566.pdf)
- Topic
- AG News introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- DBpedia 14 introduced in [DBpedia: A Nucleus for a Web of Open Data](https://link.springer.com/chapter/10.1007/978-3-540-76298-0_52)
- Yahoo Answer Topics introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- MultiEurlex introduced in [MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer](https://aclanthology.org/2021.emnlp-main.559v2.pdf)
- BigPatent introduced in [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://aclanthology.org/P19-1212.pdf)
- Consumer Finance introduced in [Consumer Complaint Database](https://www.consumerfinance.gov/data-research/consumer-complaints/)
## Structure
### Data Samples
Each dataset sample contains the text, the label encoded as an integer, and the dataset name encoded as an integer.
```python
{
'text': "My favourite food is anything I didn't have to cook myself.",
'labels': [215],
'dataset_name': 0
}
```
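A minimal sketch (assuming `labels` is stored as a sequence of class labels and `dataset_name` as a class label, as the sample above and the card metadata suggest) of decoding the integer-encoded fields back into text names:
```python
from datasets import load_dataset

# Load the in-domain configuration and decode one sample's integer-encoded fields.
dataset = load_dataset('claritylab/utcd', name='in-domain')
sample = dataset['train'][0]

label_feature = dataset['train'].features['labels'].feature       # ClassLabel for the labels
dataset_name_feature = dataset['train'].features['dataset_name']  # ClassLabel for the source dataset

print(sample['text'])
print([label_feature.int2str(i) for i in sample['labels']])
print(dataset_name_feature.int2str(sample['dataset_name']))
```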
### Datasets Contained
The UTCD dataset contains 18 datasets, 9 `in-domain`, 9 `out-of-domain`, spanning 3 aspects: `sentiment`, `intent` and `topic`.
Below are statistics on the datasets.
**In-Domain Datasets**
| Dataset | Aspect | #Samples in Train/Test | #labels | average #token in text in Train/Test |
| ---------- | --------- | ---------------------- | ------- | ------------------------------------ |
| GoEmotions | sentiment | 43K/5.4K | 28 | 12/12 |
| TweetEval | sentiment | 45K/12K | 3 | 19/14 |
| Emotion | sentiment | 16K/2K | 6 | 17/17 |
| SGD | intent | 16K/4.2K | 26 | 8/9 |
| Clinc-150 | intent | 15K/4.5K | 150 | 8/8 |
| SLURP | intent | 12K/2.6K | 75 | 7/7 |
| AG News | topic | 120K/7.6K | 4 | 38/37 |
| DBpedia | topic | 560K/70K | 14 | 45/45 |
| Yahoo | topic | 1.4M/60K | 10 | 10/10 |
**Out-of-Domain Datasets**
| Dataset | Aspect | #Samples in Train/Test | #labels | average #token in text |
| --------------------- | --------- | ---------------------- | ------- | ---------------------- |
| Amazon Polarity | sentiment | 3.6M/400K | 2 | 71/71 |
| Financial Phrase Bank | sentiment | 1.8K/453 | 3 | 19/19 |
| Yelp | sentiment | 650K/50K | 3 | 128/128 |
| Banking77 | intent | 10K/3.1K | 77 | 11/10 |
| SNIPS | intent | 14K/697 | 7 | 8/8 |
| NLU Eval | intent | 21K/5.2K | 68 | 7/7 |
| MultiEURLEX | topic | 55K/5K | 21 | 1198/1853 |
| Big Patent | topic | 25K/5K | 9 | 2872/2892 |
| Consumer Finance | topic | 630K/160K | 18 | 190/189 |
### Configurations
The `in-domain` and `out-of-domain` configurations have 2 splits: `train` and `test`.
The aspect-normalized configurations (`aspect-normalized-in-domain`, `aspect-normalized-out-of-domain`) have 3 splits: `train`, `validation` and `test`.
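A minimal sketch of loading one of the other configurations by name (configuration names as listed above):
```python
from datasets import load_dataset

# Load the aspect-normalized out-of-domain configuration and inspect its split sizes.
dataset = load_dataset('claritylab/utcd', name='aspect-normalized-out-of-domain')
print({split: len(dataset[split]) for split in dataset})
```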
Below are statistics on the configuration splits.
**In-Domain Configuration**
| Split | #samples |
| ----- | --------- |
| Train | 2,192,703 |
| Test | 168,365 |
**Out-of-Domain Configuration**
| Split | #samples |
| ----- | --------- |
| Train | 4,996,673 |
| Test | 625,911 |
**Aspect-Normalized In-Domain Configuration**
| Split | #samples |
| ---------- | -------- |
| Train | 115,127 |
| Validation | 12,806 |
| Test | 168,365 |
**Aspect-Normalized Out-of-Domain Configuration**
| Split | #samples |
| ---------- | -------- |
| Train | 119,167 |
| Validation | 13,263 |
| Test | 625,911 |
|
biu-nlp/abstract-sim-pubmed | 2023-05-13T17:49:55.000Z | [
"region:us"
] | biu-nlp | null | null | null | 2 | 20 | Entry not found |
edarchimbaud/earnings-stocks | 2023-10-07T23:13:47.000Z | [
"region:us"
] | edarchimbaud | null | null | null | 1 | 20 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: timestamp[ns, tz=EST]
- name: eps_estimate
dtype: float64
- name: reported_eps
dtype: float64
- name: surprise
dtype: float64
splits:
- name: train
num_bytes: 274840
num_examples: 6924
download_size: 133883
dataset_size: 274840
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "earnings-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The earnings-sp500 dataset provides information on earnings reports of companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to analyze and predict earnings surprises for companies in the S&P 500 index. It can be used to develop models for financial analysis and trading strategies.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (timestamp[ns, tz=EST]): A timestamp indicating the date of the earnings report. The timestamps are in the Eastern Standard Time (EST) timezone.
- eps_estimate (float64): A floating-point number representing the estimated earnings per share (EPS) for the company.
- reported_eps (float64): A floating-point number representing the reported earnings per share (EPS) for the company.
- surprise (float64): A floating-point number representing the surprise factor, calculated as the difference between the reported EPS and the estimated EPS.
### Data Splits
A single split, called train.
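A minimal sketch of loading the `train` split and inspecting the fields documented above (the sanity check on `surprise` simply follows the field description given in this card):
```python
from datasets import load_dataset

# Load the single train split of the earnings dataset.
ds = load_dataset("edarchimbaud/earnings-stocks", split="train")

row = ds[0]
print(row["symbol"], row["date"], row["eps_estimate"], row["reported_eps"], row["surprise"])

# Per the field descriptions above, surprise is the difference between reported and estimated EPS.
if row["eps_estimate"] is not None and row["reported_eps"] is not None:
    print(abs((row["reported_eps"] - row["eps_estimate"]) - row["surprise"]))
```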
## Dataset Creation
### Curation Rationale
The earnings-sp500 dataset was created to provide data on earnings reports of companies in the S&P 500 index for research and analysis purposes.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from various financial sources and normalized for consistency.
### Annotations
#### Annotation Process
[N/A]
#### Who are the Annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The earnings-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The earnings-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, earnings-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions |
tasksource/nlgraph | 2023-05-23T07:36:04.000Z | [
"region:us"
] | tasksource | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: difficulty
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 6593495
num_examples: 5022
- name: test
num_bytes: 1270263
num_examples: 1000
download_size: 1448275
dataset_size: 7863758
---
# Dataset Card for "nlgraph"
```bib
@article{wang2023can,
title={Can Language Models Solve Graph Problems in Natural Language?},
author={Wang, Heng and Feng, Shangbin and He, Tianxing and Tan, Zhaoxuan and Han, Xiaochuang and Tsvetkov, Yulia},
journal={arXiv preprint arXiv:2305.10037},
year={2023}
}
``` |
tiange/Cap3D | 2023-09-12T09:53:51.000Z | [
"license:odc-by",
"arxiv:2306.07279",
"arxiv:2212.08051",
"arxiv:2110.06199",
"region:us"
] | tiange | null | null | null | 30 | 20 | ---
license: odc-by
viewer: false
---
# Scalable 3D Captioning with Pretrained Models
## Dataset Description
- **Paper:** [Scalable 3D Captioning with Pretrained Models](https://arxiv.org/abs/2306.07279)
- **Repository**: [Github](https://github.com/crockwell/Cap3D)
This repository hosts data for [**Scalable 3D Captioning with Pretrained Models**](https://cap3d-um.github.io/), including descriptive captions for 3D objects in [Objaverse](https://arxiv.org/abs/2212.08051) and [ABO](https://arxiv.org/abs/2110.06199) ([**example captioning**](https://tiangeluo.github.io/projectpages/imgs/Cap3D/000.html)). It also includes point clouds and rendered images of Objaverse objects, as well as their Shap-E latent codes. **All captions and related 3D objects here have commercial-friendly licenses** (including CC-BY 4.0, CC-BY-SA 4.0, and CC0 1.0).
## Usage
Please download and unzip files from [**Page**](https://huggingface.co/datasets/tiange/Cap3D/tree/main) according to your usage. The different files, along with their descriptions, are listed in the table below. **Example python scripts to load our data are listed after the table.**
For **Point-E** usage, you can subsample `1,024` of the `16,384` points via FPS or by random sampling (`[:,torch.randperm(16384)[:1024]]`) when finetuning the OpenAI diffusion model.
| Filename | Description |
| -------------------------------------- | ------------------------------------------------------------ |
| Cap3D_automated_{Objaverse, ABO}.csv | Text descriptions generated by Cap3D. `661,577` and `6,440` 3D-caption pairs for Objaverse and ABO datasets, respectively. All 3D objects contained here have a commercial friendly license. |
| Cap3D_automated_Objaverse_no3Dword.csv | Our current 3D captions are densely packed with "3D-model" terminology, potentially limiting their utility in applications like embodied AI. As such, we've created a version with minimized 3D-related words. |
| Cap3D_automated_Objaverse_allviews.pkl | Objaverse text descriptions before being fed into CLIP and GPT4. `pickle.load(open('Cap3D_automated_Objaverse_allviews.pkl','rb'))['view0'][UID]` |
| Cap3D_human_{Objaverse, ABO}.pkl | Our collected human-authored text descriptions (Mostly English, very few non-English). The pickle file with `shape_id` as key. |
| PointCloud_zips | `661,577` PointClouds (`16,384` colorful points) extracted from Objaverse objects. Saved as `.ply` file. |
| PointCloud_pt_zips | PointClouds saved as torch.Tensor `.pt` files, providing faster loading speed than `.ply`. |
| ShapELatentCode_zips | Extracted latent codes for Objaverse objects by using pretrained [Shap-E transmitter](https://github.com/openai/shap-e/blob/main/model-card.md). |
| RenderedImage_zips | Rendered images for Objaverse objects: eight unique zip files each contain `661,577` JPEG images, representing different views. |
| RenderedImage_CamMatrix_zips | The `3x4` transformation matrix of the camera used to render the image. |
| RenderedImage_Mask_zips | Background mask (IMPERFECT) for rendered images: `bg_mask = np.load('xxx.npz')['bg_mask']` |
| our_finetuned_models | Model checkpoints for our finetuned models (Point-E, Shap-E, etc.). We also provide our finetuning code on [Github](https://github.com/crockwell/Cap3D/tree/main/text-to-3D). |
| blender.zip | The Blender build we used in this project. |
| text-to-3D_test | Files used to evaluate text-to-3D performance. |
```
# load our captions
import pandas as pd
captions = pd.read_csv('Cap3D_automated_Objaverse_no3Dword.csv', header=None)
## captions:
## 0 1
## 0 ed51a51909ee46c780db3a85e821feb2 A green and white rifle.
## 1 9110b606f6c547b2980fcb3c8c4b6a1c a small building with a roof, accompanied by b...
## 2 80d9caaa1fa04502af666135196456e1 a pair of purple and black swords with white h...
## 3 28d43a218cd8466a8c1f82b29b71e314 a small house, island, road with trash, trash ...
## 4 75582285fab442a2ba31733f9c8fae66 a small, grassy hill.
## ... ... ...
## 661572 ccee95eac2fb48ec92d357e3d853f2bd a tall, white tower featuring a door, stairs, ...
## 661573 f02f574e85e94b879c1b54f4d3aa4b35 A yellow disc with a hole in the middle.
## 661574 f1d3d36114d34d29a18a8ed1516bf355 pink and white ball with a face and spikes.
## 661575 32d1927d6c0e4d9b9c445fc5988ec6c6 a white and pink bedroom and bathroom, featuri...
## 661576 2fa12ba0af5442c9af8f9bead1a7d020 Monstera plant with green leaves.
## if u want to obtain the caption for specific UID
caption = captions[captions[0] == '80d9caaa1fa04502af666135196456e1'][1].values[0]
# load point clouds (unzip https://huggingface.co/datasets/tiange/Cap3D/tree/main/PointCloud_pt_zips)
import torch
pts = torch.load('Cap3D_pcs_pt/80d9caaa1fa04502af666135196456e1.pt')
## pts.shape == torch.Size([6, 16384])
```
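As a minimal sketch of the Point-E subsampling mentioned earlier (assuming a `.pt` point cloud of shape `(6, 16384)`), random subsampling to `1,024` points looks like this:
```python
import torch

# Load one Cap3D point cloud saved as a torch tensor (6 channels x 16,384 points).
pts = torch.load('Cap3D_pcs_pt/80d9caaa1fa04502af666135196456e1.pt')

# Randomly keep 1,024 of the 16,384 points; farthest-point sampling (FPS) is the alternative.
pts_1024 = pts[:, torch.randperm(16384)[:1024]]
print(pts_1024.shape)  # torch.Size([6, 1024])
```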
If you have any questions, please contact [Tiange](mailto:tiange.cs@gmail.com) or [Chris](mailto:cnris@umich.edu).
## Citation Information
If you find our data or code useful, please consider citing:
```bibtex
@article{luo2023scalable,
title={Scalable 3D Captioning with Pretrained Models},
author={Luo, Tiange and Rockwell, Chris and Lee, Honglak and Johnson, Justin},
journal={arXiv preprint arXiv:2306.07279},
year={2023}
}
```
Please cite ***Objaverse*** and ***ABO*** paper accordingly, when using this data.
```
@inproceedings{deitke2023objaverse,
title={Objaverse: A universe of annotated 3d objects},
author={Deitke, Matt and Schwenk, Dustin and Salvador, Jordi and Weihs, Luca and Michel, Oscar and VanderBilt, Eli and Schmidt, Ludwig and Ehsani, Kiana and Kembhavi, Aniruddha and Farhadi, Ali},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13142--13153},
year={2023}
}
```
```
@inproceedings{collins2022abo,
title={Abo: Dataset and benchmarks for real-world 3d object understanding},
author={Collins, Jasmine and Goel, Shubham and Deng, Kenan and Luthra, Achleshwar and Xu, Leon and Gundogdu, Erhan and Zhang, Xi and Vicente, Tomas F Yago and Dideriksen, Thomas and Arora, Himanshu and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={21126--21136},
year={2022}
}
```
|
MocktaiLEngineer/qmsum-processed | 2023-05-29T10:46:54.000Z | [
"license:mit",
"region:us"
] | MocktaiLEngineer | null | null | null | 0 | 20 | ---
license: mit
---
|
codeparrot/conala-mined-curated | 2023-06-13T15:56:31.000Z | [
"doi:10.57967/hf/0755",
"region:us"
] | codeparrot | null | null | null | 5 | 20 | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: parent_answer_post_id
dtype: int64
- name: prob
dtype: float64
- name: snippet
dtype: string
- name: intent
dtype: string
- name: rewritten_intent
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 136332874
num_examples: 593891
download_size: 94688053
dataset_size: 136332874
---
# Conala-mined-curated
Conala-mined-curated is a dataset that is based on the mined subset of the [CoNaLa dataset](https://huggingface.co/datasets/neulab/conala/viewer/mined/train).
CoNaLa is a dataset crawled from Stack Overflow. Part of it is filtered and curated to form a training set and a test set. However, the mined part is not comparably
post-processed. It is a set of 600K examples that we decided to work on.
## Dataset description
The conala datasets have 3 columns of interest. We give their description as provided by the [authors](https://conala-corpus.github.io)
- *intent* : Natural Language intent (i.e., the title of a Stack Overflow question)
- *snippet* : A code snippet that implements the intent. This is the output of systems in the challenge.
- *rewritten_intent* : Crowdsourced revised intents that try to better reflect the full meaning of the code, typically done by incorporating variable names and function arguments that appeared in the code into the intent. This is the input to be used by systems in the CoNaLa challenge.
For instruction fine-tuning, we would be interested in training a model to map the *rewritten_intent* to the *snippet*. However, the mined subset does not have the
column *rewritten_intent*. *intent* is too vague to be described as an instruction, so we have to find a way to build the column *rewritten_intent* for the mined subset.
That is exactly what was done in order to build this dataset.
## Method
The most valuable information that we have in order to recover the column *rewritten_intent* is the columns *intent* and *snippet*. Fortunately, we also have the training set and the test set
of CoNaLa, which are labeled. This means that we have a view of what a high-quality triplet (*intent*, *rewritten_intent*, *snippet*) looks like. We had the idea of building a Seq2Seq model whose role
would be to reconstruct the *rewritten_intent* based on the concatenation [*intent*, *snippet*].
More precisely, we fine-tuned [google UL2](https://huggingface.co/google/ul2) to solve this task.
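As a rough sketch only (the exact prompt format used for fine-tuning is not documented here; the concatenation below is an assumption for illustration), generating a *rewritten_intent* from [*intent*, *snippet*] with a seq2seq model could look like:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Base checkpoint named above; in practice one would load the fine-tuned weights.
model_name = "google/ul2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

intent = "How to check if a string is a number?"
snippet = "s.isdigit()"

# Assumed input format: the concatenation [intent, snippet] serialized as one string.
inputs = tokenizer(f"intent: {intent} snippet: {snippet}", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```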
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("codeparrot/conala-mined-curated")
dataset
DatasetDict({
train: Dataset({
features: ['question_id', 'parent_answer_post_id', 'prob', 'snippet', 'intent', 'rewritten_intent', 'id'],
num_rows: 593891
})
})
```
## Additional resources
- Official site of the [CoNala-corpus](https://conala-corpus.github.io).
- [CoNaLa's card](https://huggingface.co/datasets/neulab/conala).
- [Github repository](https://github.com/ArmelRandy/Conala) of our method.
|
chargoddard/QuALITY-instruct | 2023-07-14T00:29:45.000Z | [
"language:en",
"region:us"
] | chargoddard | null | null | null | 1 | 20 | ---
language: en
pretty_name: https://github.com/nyu-mll/quality
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 62125756
num_examples: 2523
- name: dev
num_bytes: 50877356
num_examples: 2086
download_size: 5451636
dataset_size: 113003112
---
# QuALITY: Question Answering with Long Input Texts, Yes!
This is the QuALITY v1.0.1 training set converted to instruction-style prompts. All credit to the original authors. See https://github.com/nyu-mll/quality for details. |
PKU-Alignment/BeaverTails-Evaluation | 2023-07-20T15:31:24.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-4.0",
"safe",
"safety",
"ai-safety",
"moderation",
"rejection-sampling",
"llm",
"lm",
"human-feedback",
"arxiv:2307.04657",
"region:us"
] | PKU-Alignment | null | null | null | 0 | 20 | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
language:
- en
tags:
- safe
- safety
- ai-safety
- moderation
- rejection-sampling
- llm
- lm
- human-feedback
size_categories:
- n<1K
---
# Dataset Card for BeaverTails-Evaluation
BeaverTails is an AI safety-focused collection comprising a series of datasets.
This repository contains test prompts specifically designed for evaluating language model safety.
It is important to note that although each prompt can be connected to multiple categories, only one category is labeled for each prompt.
The 14 harm categories are defined as follows:
1. **Animal Abuse:** This involves any form of cruelty or harm inflicted on animals, including physical abuse, neglect, or any behavior that causes distress or harm to an animal. It also includes content that promotes such behavior.
1. **Child Abuse:** This encompasses any form of physical, emotional, or sexual abuse directed toward children. It can also include neglect, exploitation, or any behavior that harms a child or violates their rights. Content that promotes or glorifies such behavior also falls under this category.
1. **Controversial Topics, Politics:** This refers to discussions or statements related to controversial topics, political ideologies, or sensitive issues that could lead to disagreement or conflict. These topics can be divisive and could potentially cause harm if not handled respectfully and thoughtfully.
1. **Discrimination, Stereotype, Injustice:** Discrimination refers to treating people unfairly or unequally based on their personal characteristics or affiliations such as race, age, gender, religion, etc. Stereotyping involves holding simplified and generalized beliefs about particular groups, which often leads to prejudices and discrimination. Injustice pertains to unfair treatment or violation of the rights of individuals or groups.
1. **Drug Abuse, Weapons, Banned Substance:** This category includes the misuse or abuse of illegal drugs, prescription drugs, or any other banned substances. It also involves the unauthorized or illegal possession, use, or trade of weapons.
1. **Financial Crime, Property Crime, Theft:** This includes all forms of illegal activities that lead to financial loss such as fraud, embezzlement, money laundering, bribery, and other forms of corruption. Property crime and theft involve the act of taking someone's property without their consent.
1. **Hate Speech, Offensive Language:** This category encompasses any form of communication or content that expresses intense hostility or violence towards an individual or group based on aspects like race, religion, ethnicity, nationality, gender, sexual orientation, or any other defining characteristics. This also includes language that is considered vulgar, crude, or offensive.
1. **Misinformation Regarding ethics, laws, and safety:** This category involves the spreading of incorrect or misleading information related to ethical issues, laws, regulations, or any matter related to safety. Misinformation can cause harm by influencing people's beliefs or actions negatively.
1. **Non-Violent Unethical Behavior:** This refers to any behavior that is morally or ethically wrong, but does not involve violence. Examples can include dishonest behaviors like lying, cheating, deceiving, spreading rumors, betraying trust, or any other form of unethical conduct.
1. **Privacy Violation:** This refers to actions that invade an individual's privacy, such as unauthorized access, use, disclosure, or collection of personal data. It also includes stalking, doxing, or any other form of intrusion into someone's personal life without their consent.
1. **Self-Harm:** This includes any form of self-inflicted harm, whether physical or emotional, or any content that promotes, glorifies, or encourages self-harm. This can also include discussions of suicidal thoughts or plans.
1. **Sexually Explicit, Adult Content:** This category includes any content that depicts explicit sexual activities or adult themes that are considered inappropriate or offensive. This can include pornography, explicit sexual text or messages, and any other form of sexually explicit material.
1. **Terrorism, Organized Crime:** This pertains to any form of content or action related to terrorism or organized crime, including endorsing or promoting terrorist activities, participating in organized criminal activities, or spreading propaganda for such groups.
1. **Violence, Aiding and Abetting, Incitement:** This involves any form of physical harm, threat, or violent behavior towards individuals or groups. Aiding and abetting refers to the act of helping, supporting, or encouraging such violent behaviors or illegal activities. Incitement pertains to the act of provoking or stirring up harmful, violent, or illegal actions.
**Disclaimer**: The BeaverTails dataset and its family contain content that may be offensive or upsetting.
Topics covered in the dataset include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially distressing subject matter.
Please engage with the dataset responsibly and in accordance with your own personal risk tolerance.
The dataset is intended for research purposes, specifically for research aimed at creating safer and less harmful AI systems.
The views and opinions expressed in the dataset do not represent the views of the PKU-Alignment Team or any of its members.
It is important to emphasize that the dataset should not be used for training dialogue agents, as doing so may likely result in harmful model behavior.
The primary objective of this dataset is to facilitate research that could minimize or prevent the harm caused by AI systems.
## Usage
The code snippet below demonstrates how to load the evaluation dataset:
```python
from datasets import load_dataset
# Load the whole dataset
dataset = load_dataset('PKU-Alignment/BeaverTails-Evaluation')
# Load only the v1 dataset
round0_dataset = load_dataset('PKU-Alignment/BeaverTails', data_dir='v1')
```
## Paper
You can find more information in our paper.
- **Dataset Paper:** <https://arxiv.org/abs/2307.04657>
## Contact
The original authors host this dataset on GitHub here: https://github.com/PKU-Alignment/beavertails
## License
BeaverTails dataset and its family are released under the CC BY-NC 4.0 License.
|
KaiLv/UDR_Amazon | 2023-06-21T12:23:17.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: headline
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 13936883
num_examples: 30000
- name: test
num_bytes: 1382953
num_examples: 3000
- name: debug
num_bytes: 2318411
num_examples: 5000
download_size: 11799872
dataset_size: 17638247
---
# Dataset Card for "UDR_Amazon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MLCommons/speech-wikimedia | 2023-06-29T18:28:23.000Z | [
"region:us"
] | MLCommons | null | null | null | 5 | 20 | # Dataset Card for Speech Wikimedia
## Table of Contents
- [Dataset Card for Speech Wikimedia](#dataset-card-for-speech-wikimedia)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Transcription languages](#transcription-languages)
- [Hours of Audio for each language](#hours-of-audio-for-each-language)
- [Hours of language pairs for speech translation](#hours-of-language-pairs-for-speech-translation)
- [Dataset Structure](#dataset-structure)
- [audios](#audios)
- [transcription and transcription2](#transcription-and-transcription2)
- [real_correspondence.json](#real_correspondencejson)
- [license.json](#licensejson)
- [Data License](#data-license)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Preprocessing](#preprocessing)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Point of Contact:** [datasets@mlcommons.org](mailto:datasets@mlcommons.org)
### Dataset Summary
The Speech Wikimedia Dataset is a compilation of audio files with transcriptions extracted from Wikimedia Commons, licensed for academic and commercial usage under CC and public domain licenses. It includes 2,000+ hours of transcribed speech in different languages with a diverse set of speakers.
Each audiofile should have one or more transcriptions in different languages.
### Transcription languages
- English
- German
- Dutch
- Arabic
- Hindi
- Portuguese
- Spanish
- Polish
- French
- Russian
- Esperanto
- Swedish
- Korean
- Bengali
- Hungarian
- Oriya
- Thai
### Hours of Audio for each language
We determine the amount of data available for the ASR tasks by extracting the total duration of audio for which we also have a transcription in the same language. We present the duration for the top 10 languages besides English, from which we have a total of 1488 hours of audio:

### Hours of language pairs for speech translation
Our dataset contains some audio files with more than one transcription, each corresponding to a different language. In total, we have 628 hours of audio with transcripts in different languages. We present the hours of audio for the 20 most common language pairs:

## Dataset Structure
### audios
Folder with audios in flac format and sampling_rate=16,000 Hz.
### transcription and transcription2
Folders with transcriptions in srt format.
We split this into two directories because Hugging Face does not support more than 10,000 files in a single directory.
### real_correspondence.json
File with relationship between audios and transcriptions, as one large json dictionary.
Key is the name of an audio file in the "reformat" directory, value is the list of corresponding transcript files, which sit in either the transcription or transcription2 directory.
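A minimal sketch of reading this mapping (assuming real_correspondence.json has been downloaded locally):
```python
import json

# Map each audio file name to its list of transcript files
# (which sit in either the transcription or transcription2 directory).
with open("real_correspondence.json") as f:
    correspondence = json.load(f)

audio_name = next(iter(correspondence))   # any audio file name from the mapping
print(audio_name, correspondence[audio_name])
```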
### license.json
File with license information. The key is the name of the original audio file on Wikimedia Commons.
### Data License
Here is an excerpt from license.json:
```
'"Berlin Wall" Speech - President Reagan\'s Address at the Brandenburg Gate - 6-12-87.webm': {
    'author': '<td>\n<a class="external text" href="https://www.youtube.com/user/ReaganFoundation" rel="nofollow">ReaganFoundation</a></td>',
    'source': '<td>\n<bdo dir="ltr" lang="en"><a href="/wiki/Commons:YouTube_files" [...],
    'html_license': '[\'<table class="layouttemplate mw-content-ltr" lang="en" style="width:100%; [...],
    'license': 'Public Domain'},
```
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded from https://commons.wikimedia.org/.
#### Preprocessing
As the original format of most of the files was video, we converted them to FLAC format with a sample rate of 16,000 Hz using ffmpeg.
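A minimal sketch of such a conversion (the exact ffmpeg invocation used is not documented here; file names are placeholders):
```python
import subprocess

# Convert a source video to a 16 kHz FLAC audio file with ffmpeg.
subprocess.run(
    ["ffmpeg", "-i", "source_video.webm", "-ar", "16000", "output_audio.flac"],
    check=True,
)
```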
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
In particular, there is no "forced alignment" or "segmentation" done on this dataset.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken stories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Discussion of Biases
Our data is downloaded from commons.wikimedia.org. As such, the data is biased towards whatever users decide to upload there.
The data is also mostly English, though this has potential for multitask learning because several audio files have more than one transcription.
## Additional Information
### Licensing Information
The source data contains data under Public Domain and Creative Commons Licenses.
We license this dataset under https://creativecommons.org/licenses/by-sa/4.0/
The appropriate attributions are in license.json
|
oscar-corpus/colossal-oscar-1.0 | 2023-08-10T14:20:14.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:n>1T",
"source_datasets:original",
"license:cc0-1.0",
"arxiv:2212.10440",
"arxiv:2010.14571",
"region:us"
] | oscar-corpus | null | null | null | 17 | 20 | ---
license: cc0-1.0
size_categories:
- n>1T
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
extra_gated_prompt: "By filling the form below I understand that Colossal OSCAR 1 is just a partial annotation of the WET files of 10 Common Crawl snapshots, the original data is included here **only for convenience**, and specially for researchers looking for data in lower resource languages. **Only the annotations are distributed under a cc0-1.0 license**, for the rest of the content I have read the [Common Crawl Terms of use](https://commoncrawl.org/terms-of-use/) and I will abide by them. I understand that all uses of the textual content in Colossal OSCAR 1 are subject to the [Common Crawl Terms of use](https://commoncrawl.org/terms-of-use/). I understand that reusing the textual content in Colossal OSCAR 1 might not be legal in all countries/regions and for all use cases. I understand that Colossal OSCAR 1 is mainly targeted towards researchers and meant to be used in research. The OSCAR Project reserves the right to revoke my access to this data. The OSCAR Project reserves the right to modify this data at any time in accordance to take down requests."
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
Usecase: text
I have explicitly checked that downloading Colossal OSCAR 1 is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the Common Crawl Terms of use: checkbox
---
# Dataset Card for Colossal OSCAR 1
## IMPORTANT NOTE: THIS DATASET CARD IS STILL BEING WRITTEN, PLEASE BE PATIENT WHILE WE COMPLETE ALL THE INFORMATION ABOUT THE CORPUS
## Table of Contents
- [Dataset Card for Colossal OSCAR 1](#dataset-card-for-colossal-oscar-1)
- [IMPORTANT NOTE: THIS DATASET CARD IS STILL BEING WRITTEN, PLEASE BE PATIENT WHILE WE COMPLETE ALL THE INFORMATION ABOUT THE CORPUS](#important-note-this-dataset-card-is-still-being-written-please-be-patient-while-we-complete-all-the-information-about-the-corpus)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Issues](#issues)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Layout](#layout)
- [Data Splits](#data-splits)
- [Table](#table)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://oscar-project.org](https://oscar-project.org)
- **Repository:** [https://github.com/oscar-project](https://github.com/oscar-project)
- **Papers:** [Towards a Cleaner Document-Oriented Multilingual Crawled Corpus](https://aclanthology.org/2022.lrec-1.463/), [Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data](https://arxiv.org/abs/2212.10440)
- **Point of Contact:** [Contact](https://oscar-project.org/#contact)
### Dataset Summary
The OSCAR project (**O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus) is an Open Source project aiming to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications. The project focuses specifically in providing large quantities of unannotated raw data that is commonly used in the pre-training of large deep learning models. The OSCAR project has developed [high-performance data pipelines](https://github.com/oscar-corpus/ungoliant) specifically conceived to classify and filter large amounts of [web data](https://commoncrawl.org/). The project has also put special attention in improving the data quality of web-based corpora as well as providing data for low-resource languages, so that these new ML/AI technologies are accessible to as many communities as possible.
Colossal OSCAR 1 is the largest release of the OSCAR Corpus, based on 10 different monthly snapshots of Common Crawl. It currently contains all the features present in OSCAR 23.01, the main difference being its size.
### Downloading the Data
For the moment we haven't finished the python script to use Colossal OSCAR 1 with `datasets`, so we recommend you use the `huggingface_hub` [python library](https://huggingface.co/docs/huggingface_hub/index).
If you want to download a considerable amount of data, we recommend you use the `hf_transfer` python package and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.
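A minimal sketch of downloading part of the corpus with `huggingface_hub` (the `allow_patterns` value below is only a hypothetical example; adjust it to the files you actually need):
```python
from huggingface_hub import snapshot_download

# Download a subset of the dataset repository. For faster transfers, install the
# hf_transfer package and set HF_HUB_ENABLE_HF_TRANSFER=1 as recommended above.
snapshot_download(
    repo_id="oscar-corpus/colossal-oscar-1.0",
    repo_type="dataset",
    allow_patterns=["*/fr_meta/*"],  # hypothetical pattern for a single language subset
    local_dir="colossal-oscar-1.0",
)
```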
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pre-train language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
### Issues
Colossal OSCAR 1 may have quality issues in low-size subcorpora, as has been the case before. Please consider taking a look at [_Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets_](https://aclanthology.org/2022.tacl-1.4/) to get a better understanding of the current limitations of our language classifier.
Note that since the documents are identified as a whole, it is expected to have lines in other languages in a given language subcorpus.
As an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.
**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**
| Language code | Language | Issues |
| ------------- | -------- | ------ |
| | | |
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Data Instances
TODO
### Layout
```js
{
"content":"English sentence\nphrase en français\n????????????", // (1)
"warc_headers":{ // (2)
"warc-identified-content-language":"fra,eng",
"warc-target-uri":"https://fr.wikipedia.org/wiki/...",
"warc-record-id":"<urn:uuid:29eaa920-d299-4b1d-b687-c72bd8d68116>",
"warc-type":"conversion",
"content-length":"35298", // (3)
"warc-refers-to":"<urn:uuid:39e42055-0d94-4e45-9c6c-9e7056635d64>",
"warc-block-digest":"sha1:WFH2A5WHCS2H365GIAFYQPI7UOAMFGHB", // (3)
"warc-date":"2022-11-26T09:45:47Z",
"content-type":"text/plain"
},
"metadata":{
"identification":{ // (4)
"label":"fr",
"prob":0.8938327
},
"harmful_pp":4063.1814, // (5)
"tlsh":"tlsh:T125315FF2B6088901EEA097015DB39B4600B...", // (6)
"quality_warnings":[ // (7)
"short_sentences",
"header",
"footer"
],
"categories":[ // (8)
"examen_pix",
"liste_bu"
],
"sentence_identifications":[ // (9)
{
"label":"fr",
"prob":0.99837273
},
{
"label":"en",
"prob":0.9992377
},
null
]
}
}
```
### Data Splits
<details>
<summary>Click to expand the number of samples per configuration</summary>
</details>
## Table
## Dataset Creation
### Curation Rationale
OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), itself being derived from [fastText's one](https://github.com/facebookresearch/fastText).
The pipeline works on documents rather than lines.
`Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.
Filtering will be explained in a future blog post at our [website](https://oscar-project.org)
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organization's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of Colossal OSCAR 1 the following snapshots were used:
- 05-06-23
- 06-07-22
- 11-12-21
- 10-20
- 05-06-20
- 05-19
- 11-18
- 11-17
- 03-15
- 09-16
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, Personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, specially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models. We have added annotations to Common Crawl, so please consider using them to select the data that you would like to use for your particular use case.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
Colossal OSCAR 1 was put together by [Pedro Ortiz Suarez](https://portizs.eu/) while working as a researcher at the [Speech and Language Technology Team](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology) at [DFKI GmbH](https://www.dfki.de/en/web) Berlin.
Colossal OSCAR 1 is part of the work done by [Pedro Ortiz Suarez](https://portizs.eu/) for the [OpenGPT-X Project](https://opengpt-x.de/en/) which is funded by the German Federal Ministry for Economic Affairs and Climate Action ([BMWK](https://www.bmwk.de/Navigation/EN/Home/home.html)).
This release of OSCAR was also made possible by the continuous support of the OSCAR team at [Inria](https://www.inria.fr/en) (project-team [ALMAnaCH](https://almanach.inria.fr/index-en.html)), especially by [Julien Abadji](https://ujj.space), [Rua Ismail](https://oscar-project.org/authors/rua/) and [Benoit Sagot](http://pauillac.inria.fr/~sagot/), as well as by members of the OSCAR community, in particular [Sotaro Takeshita](https://sotaro.io/about) and [Sebastian Nagel](https://www.polver.uni-konstanz.de/cnc/people/nagel/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging, the metadata and the annotations of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, the OSCAR project, DFKI GmbH and Inria have waived all copyright and related or neighboring rights to OSCAR
This work is published from: France and Germany.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources.
Please use the [contact information](https://oscar-project.org/#contact) on our website for take down requests.
We strongly advise users to submit take down request to Common Crawl. For more information please read their [Terms of Use](https://commoncrawl.org/terms-of-use/)
### Citation Information
```
@ARTICLE{2022arXiv221210440J,
author = {{Jansen}, Tim and {Tong}, Yangling and {Zevallos}, Victoria and {Ortiz Suarez}, Pedro},
title = "{Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = dec,
eid = {arXiv:2212.10440},
pages = {arXiv:2212.10440},
doi = {10.48550/arXiv.2212.10440},
archivePrefix = {arXiv},
eprint = {2212.10440},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv221210440J},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{abadji-etal-2022-towards,
title = "Towards a Cleaner Document-Oriented Multilingual Crawled Corpus",
author = "Abadji, Julien and
Ortiz Suarez, Pedro and
Romary, Laurent and
Sagot, Beno{\^\i}t",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.463",
pages = "4344--4355",
abstract = "The need for large corpora raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing. And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.",
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@article{kreutzer-etal-2022-quality,
title = "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets",
author = {Kreutzer, Julia and
Caswell, Isaac and
Wang, Lisa and
Wahab, Ahsan and
van Esch, Daan and
Ulzii-Orshikh, Nasanbayar and
Tapo, Allahsera and
Subramani, Nishant and
Sokolov, Artem and
Sikasote, Claytone and
Setyawan, Monang and
Sarin, Supheakmungkol and
Samb, Sokhar and
Sagot, Beno{\^\i}t and
Rivera, Clara and
Rios, Annette and
Papadimitriou, Isabel and
Osei, Salomey and
Suarez, Pedro Ortiz and
Orife, Iroro and
Ogueji, Kelechi and
Rubungo, Andre Niyongabo and
Nguyen, Toan Q. and
M{\"u}ller, Mathias and
M{\"u}ller, Andr{\'e} and
Muhammad, Shamsuddeen Hassan and
Muhammad, Nanda and
Mnyakeni, Ayanda and
Mirzakhalov, Jamshidbek and
Matangira, Tapiwanashe and
Leong, Colin and
Lawson, Nze and
Kudugunta, Sneha and
Jernite, Yacine and
Jenny, Mathias and
Firat, Orhan and
Dossou, Bonaventure F. P. and
Dlamini, Sakhile and
de Silva, Nisansa and
{\c{C}}abuk Ball{\i}, Sakine and
Biderman, Stella and
Battisti, Alessia and
Baruwa, Ahmed and
Bapna, Ankur and
Baljekar, Pallavi and
Azime, Israel Abebe and
Awokoya, Ayodele and
Ataman, Duygu and
Ahia, Orevaoghene and
Ahia, Oghenefego and
Agrawal, Sweta and
Adeyemi, Mofetoluwa},
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.4",
doi = "10.1162/tacl_a_00447",
pages = "50--72",
abstract = "With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50{\%} sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.",
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
  author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
  editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
``` |
shumpei2525/OpenOrca-train-ja | 2023-07-15T10:42:15.000Z | [
"license:mit",
"region:us"
] | shumpei2525 | null | null | null | 4 | 20 | ---
license: mit
---
# OpenOrca-train-ja
This dataset is a translation of OpenOrca into Japanese. It is based on the output data from GPT-3.5 and GPT-4. Please feel free to use it as you wish.
* A few mistakes were observed in the translation tasks; it may be better to exclude translation tasks from use.
# Since I'm not entirely clear on OpenAI's terms of service, please be cautious when using it for commercial purposes. There may be exceptions for non-commercial use.
# other dataset
A higher-quality dataset is also available: shumpei2525/fine_tuning521k-ja, https://huggingface.co/datasets/shumpei2525/fine_tuning521k-ja
# OpenOrca test dataset
Pyuta has kindly translated the test dataset of OpenOrca into Japanese. Here is the dataset: pyutax68/OpenOrca-test-jp, https://huggingface.co/datasets/pyutax68/OpenOrca-test-jp
# original datasets
Open-Orca/OpenOrca https://huggingface.co/datasets/Open-Orca/OpenOrca
License: MIT |
FanChen0116/19100_chat_64x_slot_pvi | 2023-09-26T13:38:40.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-time
'2': B-date
'3': B-last_name
'4': B-people
'5': I-date
'6': I-people
'7': I-last_name
'8': I-first_name
'9': B-first_name
'10': B-time
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 565744
num_examples: 3072
- name: validation
num_bytes: 5405
num_examples: 32
- name: test
num_bytes: 646729
num_examples: 3731
download_size: 85284
dataset_size: 1217878
---
# Dataset Card for "19100_chat_64x_slot_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sngsfydy/aptos_train | 2023-07-19T19:52:54.000Z | [
"region:us"
] | sngsfydy | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
splits:
- name: train
num_bytes: 7204351279.337521
num_examples: 2929
download_size: 7192333107
dataset_size: 7204351279.337521
---
# Dataset Card for "aptos_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sngsfydy/aptos_test | 2023-07-19T19:19:46.000Z | [
"region:us"
] | sngsfydy | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
splits:
- name: train
num_bytes: 1802932566.6624794
num_examples: 733
download_size: 1800938316
dataset_size: 1802932566.6624794
---
# Dataset Card for "aptos_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
d0rj/dolphin-ru | 2023-07-26T14:54:29.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | d0rj | null | null | null | 0 | 20 | ---
language_creators:
- translated
language:
- ru
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
pretty_name: Dolphin (ru)
source_datasets:
- ehartford/dolphin
license: apache-2.0
tags:
- ChatGPT
- instruct
- instruct-tune
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8037639673
num_examples: 2840090
download_size: 3900911155
dataset_size: 8037639673
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
---
# Dolphin-ru 🐬
## Dataset Description
- **Homepage:** https://erichartford.com/dolphin
This is a translated version of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) into Russian. |
Shrey-1329/cxiu_hf_dataset | 2023-07-25T15:05:56.000Z | [
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:en",
"medical",
"region:us"
] | Shrey-1329 | null | null | null | 1 | 20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1108945726.54
num_examples: 6060
download_size: 1108991167
dataset_size: 1108945726.54
task_categories:
- image-to-text
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---
# Indiana University Chest Xray Dataset Card
## Data sources:
This is a converted and processed version of the open-access pneumonia chest x-ray dataset provided by Indiana University.<br>
You can see its information page [here](https://openi.nlm.nih.gov/faq).<br>
The compressed images in the png format were downloaded from [here](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_png.tgz) and the corresponding reports from [here](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_reports.tgz).
## Data fields:
There are two fields: image and text.
The images are the x-rays and the texts are their associated findings.
## Preprocessing done:
1. **Make all text lowercase**: Convert all text to lowercase to ensure consistent and case-insensitive processing.
2. **Remove all punctuation**: Eliminate any punctuation marks (e.g., periods, commas, exclamation marks) from the text to avoid interference in language analysis.
3. **Remove all numbers**: Eliminate all numeric characters from the text since they might not be relevant for certain natural language processing tasks.
4. **Remove all words with 2 or more Xs in a row**: Remove any words that contain two or more consecutive occurrences of the letter "X" as they may not contribute meaningful information.
5. **Remove the bottom and top 2% of text by length**: Discard the shortest and longest text samples (the bottom 2% and top 2% by length) to reduce the impact of outliers and keep the dataset more balanced.
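
A minimal sketch of these preprocessing steps is shown below; the helper names and the exact percentile implementation are illustrative assumptions, not the script used to build the dataset.

```python
import re
import string

import numpy as np

def clean_report(text: str) -> str:
    text = text.lower()                                                # 1. make all text lowercase
    text = text.translate(str.maketrans("", "", string.punctuation))  # 2. remove all punctuation
    text = re.sub(r"\d+", "", text)                                    # 3. remove all numbers
    # 4. remove words containing two or more consecutive x's
    return " ".join(w for w in text.split() if "xx" not in w)

def trim_by_length(reports: list[str]) -> list[str]:
    # 5. drop the shortest and longest 2% of reports by character length
    lengths = np.array([len(r) for r in reports])
    lo, hi = np.percentile(lengths, [2, 98])
    return [r for r in reports if lo <= len(r) <= hi]
```
|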
satpalsr/indicCorpv2 | 2023-07-31T08:34:06.000Z | [
"task_categories:text-generation",
"language:as",
"language:brx",
"language:bn",
"language:doi",
"language:en",
"language:gom",
"language:gu",
"language:hi",
"language:kha",
"language:kn",
"language:ks",
"language:mai",
"language:ml",
"language:mni",
"language:mr",
"language:ne",
"... | satpalsr | IndicCORPV2 is the largest collection of texts for Indic langauges consisting of 20.9 Billion tokens of which 14.4B tokens correspond to 23 Indic languages and 6.B tokens of Indian English content curated from Indian websites. | @article{Doddapaneni2022towards,
title={Towards Leaving No Indic Language Behind: Building Monolingual Corpora, Benchmark and Models for Indic Languages},
author={Sumanth Doddapaneni and Rahul Aralikatte and Gowtham Ramesh and Shreyansh Goyal and Mitesh M. Khapra and Anoop Kunchukuttan and Pratyush Kumar},
journal={ArXiv},
year={2022},
volume={abs/2212.05409}
} | null | 1 | 20 | ---
license: cc0-1.0
language:
- as
- brx
- bn
- doi
- en
- gom
- gu
- hi
- kha
- kn
- ks
- mai
- ml
- mni
- mr
- ne
- or
- pa
- sa
- sat
- sd
- ta
- te
- ur
task_categories:
- text-generation
---
# IndicCorpv2
[Github](https://github.com/AI4Bharat/IndicBERT) | [Paper](https://arxiv.org/abs/2212.05409)
Languages: 24
The largest collection of texts for Indic languages consisting of 20.9 billion tokens of which 14.4B tokens correspond to 23 Indic languages and 6.5B tokens of Indian English content curated from Indian websites.
<img src="https://huggingface.co/datasets/satpalsr/indicCorpv2/resolve/main/languages.png">
## Citation
```
@article{Doddapaneni2022towards,
title={Towards Leaving No Indic Language Behind: Building Monolingual Corpora, Benchmark and Models for Indic Languages},
author={Sumanth Doddapaneni and Rahul Aralikatte and Gowtham Ramesh and Shreyansh Goyal and Mitesh M. Khapra and Anoop Kunchukuttan and Pratyush Kumar},
journal={ArXiv},
year={2022},
volume={abs/2212.05409}
}
``` |
shanover/disease_symptoms_prec_full | 2023-08-01T06:26:27.000Z | [
"license:mit",
"region:us"
] | shanover | null | null | null | 2 | 20 | ---
license: mit
---
|
qmeeus/slurp | 2023-08-01T11:27:35.000Z | [
"region:us"
] | qmeeus | null | null | null | 1 | 20 | ---
dataset_info:
features:
- name: slurp_id
dtype: int64
- name: sentence
dtype: string
- name: annotation
dtype: string
- name: intent
dtype:
class_label:
names:
'0': addcontact
'1': alarm_query
'2': alarm_remove
'3': alarm_set
'4': audio_volume_down
'5': audio_volume_mute
'6': audio_volume_other
'7': audio_volume_up
'8': calendar_query
'9': calendar_remove
'10': calendar_set
'11': cleaning
'12': coffee
'13': convert
'14': cooking_query
'15': cooking_recipe
'16': createoradd
'17': currency
'18': datetime_convert
'19': datetime_query
'20': definition
'21': email_addcontact
'22': email_query
'23': email_querycontact
'24': email_sendemail
'25': events
'26': factoid
'27': game
'28': general_affirm
'29': general_commandstop
'30': general_confirm
'31': general_dontcare
'32': general_explain
'33': general_greet
'34': general_joke
'35': general_negate
'36': general_praise
'37': general_quirky
'38': general_repeat
'39': greet
'40': hue_lightdim
'41': hue_lightoff
'42': hue_lightup
'43': iot_cleaning
'44': iot_coffee
'45': iot_hue_lightchange
'46': iot_hue_lightdim
'47': iot_hue_lightoff
'48': iot_hue_lighton
'49': iot_hue_lightup
'50': iot_wemo_off
'51': iot_wemo_on
'52': joke
'53': likeness
'54': lists_createoradd
'55': lists_query
'56': lists_remove
'57': locations
'58': music
'59': music_dislikeness
'60': music_likeness
'61': music_query
'62': music_settings
'63': news_query
'64': play_audiobook
'65': play_game
'66': play_music
'67': play_podcasts
'68': play_radio
'69': podcasts
'70': post
'71': qa_currency
'72': qa_definition
'73': qa_factoid
'74': qa_maths
'75': qa_stock
'76': query
'77': querycontact
'78': quirky
'79': radio
'80': recommendation_events
'81': recommendation_locations
'82': recommendation_movies
'83': remove
'84': sendemail
'85': set
'86': settings
'87': social_post
'88': social_query
'89': takeaway_order
'90': takeaway_query
'91': ticket
'92': traffic
'93': transport_query
'94': transport_taxi
'95': transport_ticket
'96': transport_traffic
'97': volume_other
'98': weather_query
'99': wemo_off
'100': wemo_on
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 2920956911.136
num_examples: 50628
- name: devel
num_bytes: 477355969.9
num_examples: 8690
- name: test
num_bytes: 709706969.726
num_examples: 13078
- name: train_synthetic
num_bytes: 2571103452.542
num_examples: 69253
download_size: 6753580307
dataset_size: 6679123303.304
---
# Dataset Card for "slurp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gaodrew/pubmedQA-Llama-2-format | 2023-08-03T09:06:44.000Z | [
"region:us"
] | gaodrew | null | null | null | 1 | 20 | Entry not found |
AlanRobotics/saiga | 2023-08-04T12:04:53.000Z | [
"region:us"
] | AlanRobotics | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: instructions
dtype: string
- name: outputs
dtype: string
splits:
- name: train
num_bytes: 75993179
num_examples: 36320
download_size: 36278303
dataset_size: 75993179
---
# Dataset Card for "saiga"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FanChen0116/19100_chat_40x_slot_pvi | 2023-09-26T03:49:12.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-time
'2': B-date
'3': B-last_name
'4': B-people
'5': I-date
'6': I-people
'7': I-last_name
'8': I-first_name
'9': B-first_name
'10': B-time
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 447702
num_examples: 2458
- name: validation
num_bytes: 5405
num_examples: 32
- name: test
num_bytes: 646729
num_examples: 3731
download_size: 72365
dataset_size: 1099836
---
# Dataset Card for "19100_chat_40x_slot_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Binaryy/travel-wikipedia | 2023-08-06T15:13:11.000Z | [
"region:us"
] | Binaryy | null | null | null | 0 | 20 | Entry not found |
Universal-NER/Pile-NER-definition | 2023-08-07T17:08:06.000Z | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | Universal-NER | null | null | null | 9 | 20 | ---
language:
- en
size_categories:
- 10K<n<100K
---
# Intro
Pile-NER-definition is a set of GPT-generated data for named entity recognition using the definition-based data construction prompt. It was collected by prompting gpt-3.5-turbo-0301 and augmented by negative sampling. Check our [project page](https://universal-ner.github.io/) for more information.
# License
Attribution-NonCommercial 4.0 International |
PL-MTEB/sickr-pl-sts | 2023-08-10T13:16:52.000Z | [
"license:cc-by-nc-sa-3.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 20 | ---
license: cc-by-nc-sa-3.0
---
|
PL-MTEB/ppc-pairclassification | 2023-08-11T11:00:22.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 20 | ---
license: cc-by-nc-sa-4.0
---
|
PL-MTEB/cdsce-pairclassification | 2023-08-11T11:47:46.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 20 | ---
license: cc-by-nc-sa-4.0
---
|
PL-MTEB/cdscr-sts | 2023-08-11T11:53:53.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 20 | ---
license: cc-by-nc-sa-4.0
---
|
AIBrainBox/guanaco-llama2-200 | 2023-08-16T10:42:18.000Z | [
"license:apache-2.0",
"region:us"
] | AIBrainBox | null | null | null | 0 | 20 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 338808
num_examples: 200
download_size: 201257
dataset_size: 338808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
malaysia-ai/dedup-text-dataset | 2023-09-07T06:48:29.000Z | [
"region:us"
] | malaysia-ai | null | null | null | 0 | 20 | Entry not found |
asas-ai/okapi_ar_arc | 2023-08-22T17:23:44.000Z | [
"region:us"
] | asas-ai | null | null | null | 0 | 20 | Entry not found |
rwkv-x-dev/openorca-gpt4 | 2023-08-28T14:44:06.000Z | [
"region:us"
] | rwkv-x-dev | null | null | null | 3 | 20 | ---
pretty_name: OpenOrca
configs:
- config_name: default
default: true
data_files:
- split: train
path:
- "*.parquet"
---
OpenOrca but just the GPT4 bits. |
vikp/evol_instruct_code_filtered_39k | 2023-08-29T17:35:13.000Z | [
"region:us"
] | vikp | null | null | null | 3 | 20 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: quality_prob
dtype: float64
- name: learning_prob
dtype: float64
splits:
- name: train
num_bytes: 56854896.038860105
num_examples: 39078
download_size: 37822990
dataset_size: 56854896.038860105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "evol_instruct_code_filtered_38k"
A filtered version of `nickrosh/Evol-Instruct-Code-80k-v1`, produced with manual filtering and automatic filtering based on quality and learning-value classifiers.
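
As a rough illustration, the released `quality_prob` and `learning_prob` columns can be used to re-filter the data downstream; the threshold below is an assumed value chosen only for illustration, not the cutoff used to produce this dataset.

```python
from datasets import load_dataset

ds = load_dataset("vikp/evol_instruct_code_filtered_39k", split="train")

THRESHOLD = 0.5  # assumed value, for illustration only
high_value = ds.filter(
    lambda row: row["quality_prob"] > THRESHOLD and row["learning_prob"] > THRESHOLD
)
print(f"{len(ds)} -> {len(high_value)} examples kept after re-filtering")
```
|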
vladisha3000/Icons | 2023-08-31T14:15:43.000Z | [
"region:us"
] | vladisha3000 | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2195425.0
num_examples: 999
download_size: 2268449
dataset_size: 2195425.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Icons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BodoZnipes/Dual-Use-Goods | 2023-09-23T08:20:24.000Z | [
"license:other",
"region:us"
] | BodoZnipes | null | null | null | 0 | 20 | ---
license: other
---
|
dbaezaj/test_dataset | 2023-09-14T18:13:26.000Z | [
"task_categories:token-classification",
"size_categories:n<1K",
"region:us"
] | dbaezaj | null | null | null | 0 | 20 | ---
task_categories:
- token-classification
size_categories:
- n<1K
--- |
OneFly7/llama2-politosphere-fine-tuning-system-prompt | 2023-09-10T09:03:41.000Z | [
"region:us"
] | OneFly7 | null | null | null | 0 | 20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 184692
num_examples: 113
- name: validation
num_bytes: 182440
num_examples: 113
download_size: 66387
dataset_size: 367132
---
# Dataset Card for "llama2-politosphere-fine-tuning-system-prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlerner27/asapp2a | 2023-09-13T01:31:57.000Z | [
"region:us"
] | mlerner27 | null | null | null | 0 | 20 | Entry not found |
serbog/job_listing_german_cleaned_bert | 2023-09-10T10:30:11.000Z | [
"region:us"
] | serbog | null | null | null | 0 | 20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: cleaned_description
dtype: string
- name: title
dtype: string
- name: C1
dtype: int64
- name: C2
dtype: int64
- name: C3
dtype: int64
- name: C4
dtype: int64
- name: C5
dtype: int64
- name: C6
dtype: int64
- name: C7
dtype: int64
- name: C8
dtype: int64
- name: C9
dtype: int64
splits:
- name: train
num_bytes: 1119693900
num_examples: 509834
- name: eval
num_bytes: 261886055
num_examples: 104864
- name: test
num_bytes: 234468000
num_examples: 102796
download_size: 670254315
dataset_size: 1616047955
---
# Dataset Card for "job_listing_german_cleaned_bert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jtatman/empathetic_dialogues_summary | 2023-09-11T11:12:41.000Z | [
"region:us"
] | jtatman | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: conv_id
dtype: string
- name: utterance_idx
dtype: int32
- name: context
dtype: string
- name: prompt
dtype: string
- name: speaker_idx
dtype: int32
- name: utterance
dtype: string
- name: selfeval
dtype: string
- name: tags
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 23032529
num_examples: 76673
download_size: 6595374
dataset_size: 23032529
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "empathetic_dialogues_summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dodon/PlotQA_dataset | 2023-09-12T04:28:09.000Z | [
"license:cc-by-4.0",
"region:us"
] | Dodon | PlotQA dataset
Chart images, tables, image annotations, questions, answers | @inproceedings{methani2020plotqa,
title={Plotqa: Reasoning over scientific plots},
author={Methani, Nitesh and Ganguly, Pritha and Khapra, Mitesh M and Kumar, Pratyush},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={1527--1536},
year={2020}
} | null | 0 | 20 | ---
license: cc-by-4.0
---
|
Nicolas-BZRD/Original_Songs_Lyrics_with_French_Translation | 2023-09-22T12:09:10.000Z | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:fr",
"language:en",
"language:es",
"language:it",
"language:de",
"language:ko",
"language:id",
"language:pt",
"language:no",
"language:fi",
"language:sv",
"language:sw",
"language:... | Nicolas-BZRD | null | null | null | 3 | 20 | ---
license: unknown
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: artist_name
dtype: string
- name: album_name
dtype: string
- name: year
dtype: int64
- name: title
dtype: string
- name: number
dtype: int64
- name: original_version
dtype: string
- name: french_version
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 250317845
num_examples: 99289
download_size: 122323323
dataset_size: 250317845
task_categories:
- translation
- text-generation
language:
- fr
- en
- es
- it
- de
- ko
- id
- pt
- 'no'
- fi
- sv
- sw
- hr
- so
- ca
- tl
- ja
- nl
- ru
- et
- tr
- ro
- cy
- vi
- af
- hu
- sk
- sl
- cs
- da
- pl
- sq
- el
- he
- zh
- th
- bg
- ar
tags:
- music
- parallel
- parallel data
pretty_name: SYFT
size_categories:
- 10K<n<100K
---
# Original Songs Lyrics with French Translation
### Dataset Summary
Dataset of 99289 songs containing their metadata (author, album, release date, song number), original lyrics and lyrics translated into French.
Details of the number of songs by language of origin can be found in the table below:
| Original language | Number of songs |
|---|:---|
| en | 75786 |
| fr | 18486 |
| es | 1743 |
| it | 803 |
| de | 691 |
| sw | 529 |
| ko | 193 |
| id | 169 |
| pt | 142 |
| no | 122 |
| fi | 113 |
| sv | 70 |
| hr | 53 |
| so | 43 |
| ca | 41 |
| tl | 36 |
| ja | 35 |
| nl | 32 |
| ru | 29 |
| et | 27 |
| tr | 22 |
| ro | 19 |
| cy | 14 |
| vi | 14 |
| af | 13 |
| hu | 10 |
| sk | 10 |
| sl | 10 |
| cs | 7 |
| da | 6 |
| pl | 5 |
| sq | 4 |
| el | 4 |
| he | 3 |
| zh-cn | 2 |
| th | 1 |
| bg | 1 |
| ar | 1 |
### Citation
```
@misc{nicolasbzrd2023oslft,
title={Original Songs Lyrics with French Translation},
author={Nicolas Boizard and Manuel Faysse},
year={2023}
}
``` |
deven367/babylm-10M-children-stories | 2023-09-15T17:06:11.000Z | [
"region:us"
] | deven367 | null | null | null | 0 | 20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1827662
num_examples: 5737
- name: valid
num_bytes: 1425137
num_examples: 5996
- name: test
num_bytes: 1804421
num_examples: 7959
download_size: 3064805
dataset_size: 5057220
---
# Dataset Card for "babylm-10M-children-stories"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nikchar/claim_detection_paper_test_roberta | 2023-09-16T17:18:58.000Z | [
"region:us"
] | nikchar | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_wiki_url
dtype: string
- name: Is_Claim
dtype: string
- name: Claim_detection_result
dtype: string
splits:
- name: train
num_bytes: 1175913
num_examples: 11073
download_size: 0
dataset_size: 1175913
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "claim_detection_paper_test_roberta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
duckaiml/Polylingual_Id | 2023-10-06T16:47:30.000Z | [
"license:other",
"region:us"
] | duckaiml | \ | null | 0 | 20 | ---
license: other
---
# Polylingual Indonesia Dataset/Model Card
## Description
Polylingual Indonesia is a diverse dataset composed of a collection of publicly available data and some self-crawled data in the Indonesian language. The data originates from various sources including news articles, books, forums, and stories. This dataset is processed and filtered through specific techniques detailed in the sections below.
## Prerequisites
To run and utilize this dataset, make sure you have the `zstandard` package installed in your environment.
## Format
The original format of the dataset is JSONL compressed with zstandard.
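If you prefer to read a raw shard directly rather than through `datasets`, a minimal sketch looks like this; the file name is a placeholder, not an actual shard name.

```python
import io
import json

import zstandard

def read_jsonl_zst(path):
    """Yield one JSON document per line from a zstandard-compressed JSONL file."""
    with open(path, "rb") as fh:
        reader = zstandard.ZstdDecompressor().stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            if line.strip():
                yield json.loads(line)

# "shard.jsonl.zst" is a placeholder -- substitute a downloaded shard
for doc in read_jsonl_zst("shard.jsonl.zst"):
    print(doc.keys())
    break
```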
## Dataset Details
### Filtering
The dataset undergoes a filtration process using a specific filter from BigScience. Detailed information about the filter used can be found here: [BigScience Filter](https://drive.google.com/file/d/1cCJ8sWE88TRLDAa3eHLmXO4JlkR2QzLY/view?usp=sharing).
### Data Sources
The data originates from various sources, and each folder in the dataset represents the source from where the original data came. Here are the details of each folder/source:
#### 1. HPLT_filtered
- Link: [HPLT Project](https://hplt-project.org/datasets/v1)
- Source: Internet Archive snapshots WIDE15 and WIDE17, plus CC-MAIN-2022-40
#### 2. Mc4-und-id
- Data is filtered from the undefined-language segment of mC4 (c4-und) using FastText and BigScience filters.
- Sample: [MC4 Sample](https://huggingface.co/datasets/allenai/c4/blob/mC4310/multilingual/c4-und.00000-00001-00002-00003-00004-00005-00006-00007.json.gz)
#### 3. Indonesia-Crawl
This folder contains a collection of Common Crawl data and self-crawled data specific to the Indonesian language, accumulated from various snapshots. The data is divided into several sections:
- **Mc4 original (dedup)**: Ranging from snapshot CC-2013-20 to CC-2020-34.
- **Kopi-CC (dedup)**: This covers data from snapshot CC-2020-34 to CC-2023-06. More details can be found [here](https://huggingface.co/datasets/acul3/KoPI-CC) (Note: last snapshot still to be uploaded).
- **KoPI-CC_News**: This includes the Common Crawl News dataset from 2016 to 2022. Detailed information can be accessed [here](https://huggingface.co/datasets/acul3/KoPI-CC) (Note: last snapshot still to be uploaded).
- **Self-Crawled Data**: Data crawled from various platforms, including news sites, story sites, forums, and others.
## Usage
Install `zstandard` first:
```python
from datasets import load_dataset

hplt = load_dataset('duckaiml/Polylingual_Id', 'hplt')  # hplt subset only
mc4_und = load_dataset('duckaiml/Polylingual_Id', 'mc4_und')  # mc4_und subset only
indonesia_crawl = load_dataset('duckaiml/Polylingual_Id', 'indonesia_crawl')  # indonesia_crawl subset only
load_dataset('duckaiml/Polylingual_Id', 'full')  # load all subsets
```
## Limitation/Issue
- Although some of the source data has already been deduplicated separately, duplicates may still be found because the dataset has not been deduplicated as a whole.
## Contributing
Feel free to contribute to the dataset by adding more diverse sources or by helping to enhance the filtration process.
## License
The data is collected from public sources, and it's recommended to refer to the original data sources for information on licensing and usage restrictions.
## Contact
For any queries or contributions to the dataset, please feel free to reach out (contact information to be added).
| |
HuggingFaceH4/lima_llama2 | 2023-09-17T04:03:38.000Z | [
"region:us"
] | HuggingFaceH4 | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: conversations
sequence: string
- name: source
dtype: string
- name: length
dtype: int64
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: meta
struct:
- name: category
dtype: string
- name: source
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8806712
num_examples: 1000
- name: test
num_bytes: 188848
num_examples: 300
download_size: 5237615
dataset_size: 8995560
---
# Dataset Card for "lima_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hynky/elon_conversations | 2023-09-17T10:53:58.000Z | [
"region:us"
] | hynky | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 2196270
num_examples: 4821
download_size: 513850
dataset_size: 2196270
---
# Dataset Card for "elon_conversations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arifzanko/donut-dummy | 2023-09-19T02:38:27.000Z | [
"region:us"
] | arifzanko | null | null | null | 0 | 20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 103705.0
num_examples: 2
- name: validation
num_bytes: 46768.0
num_examples: 1
- name: test
num_bytes: 48489.0
num_examples: 1
download_size: 109961
dataset_size: 198962.0
---
# Dataset Card for "donut-dummy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yanabels/padthinkraw | 2023-09-20T05:07:55.000Z | [
"license:apache-2.0",
"region:us"
] | yanabels | null | null | null | 0 | 20 | ---
license: apache-2.0
---
|
aloobun/ccpemv2 | 2023-09-29T10:31:35.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | aloobun | null | null | null | 3 | 20 | ---
license: cc-by-4.0
language:
- en
size_categories:
- n<1K
pretty_name: movies dialog dataset
task_categories:
- text-generation
---
A modified dataset of English dialogs between a user and an assistant discussing movie preferences in natural language. The foundational dataset initially had 502 records.
Dataset Information
- Name: ccpemv2.jsonl
- Version: Version 0.2
- Modifications Included: Transformed to `### User: ...` / `### Assistant: ...` format (see the sketch after this list)
- Language: English
- License: Creative Commons Attribution 4.0
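
As a rough sketch of the transformation described above (the turn structure and example utterances are invented for illustration, not taken from the dataset):

```python
def to_prompt_format(turns):
    """Render (speaker, utterance) pairs in the ### User / ### Assistant format."""
    rendered = []
    for speaker, utterance in turns:
        tag = "### User:" if speaker == "USER" else "### Assistant:"
        rendered.append(f"{tag}\n{utterance.strip()}")
    return "\n".join(rendered)

example = [
    ("USER", "I really enjoy science fiction movies."),
    ("ASSISTANT", "What do you like about science fiction?"),
]
print(to_prompt_format(example))
```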
Original Dataset Citation
```
@inproceedings{radlinski-etal-2019-ccpe,
title = {Coached Conversational Preference Elicitation: A Case Study in Understanding Movie Preferences},
author = {Filip Radlinski and Krisztian Balog and Bill Byrne and Karthik Krishnamoorthi},
booktitle = {Proceedings of the Annual Meeting of the Special Interest Group on Discourse and Dialogue ({SIGDIAL})},
year = 2019
}
``` |
FanChen0116/bus_few4_128x_pvi | 2023-09-26T13:45:27.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-from_location
'2': B-from_location
'3': B-leaving_date
'4': I-leaving_date
'5': I-to_location
'6': B-to_location
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 1122840
num_examples: 4480
- name: validation
num_bytes: 6900
num_examples: 35
- name: test
num_bytes: 70618
num_examples: 377
download_size: 131008
dataset_size: 1200358
---
# Dataset Card for "bus_few4_128x_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
goendalf666/sales-textbook_for_convincing_and_selling | 2023-10-04T20:38:03.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"sales",
"arxiv:2306.11644",
"region:us"
] | goendalf666 | null | null | null | 1 | 20 | ---
task_categories:
- text-generation
language:
- en
tags:
- sales
size_categories:
- 100K<n<1M
---
# Dataset Card for sales-textbook_for_convincing_and_selling
A textbook created for the purpose of training a sales chatbot.
Inspiration comes from "Textbooks Are All You Need": https://arxiv.org/abs/2306.11644
The data was generated by gpt-3.5-turbo
# Structure
A simple textbook with headlines and subheadlines.
Chapters and subheadlines are mentioned in the dataset; look at the first two examples.
# Data Generation
The following code was used for the text generation:
#include github link
Conversation examples were generated from the textbook:
https://huggingface.co/datasets/goendalf666/sales-conversations
Here is the prompt that was used for the data generation.
For the exact data-generation code (a structure with headlines and subheadlines was generated beforehand), look up the following repo: https://github.com/tom813/salesGPT_foundation/blob/main/data_generation/textbook_and_conversation_gen.py
```
prompt = f"""
I want to write a book about sales and convincing techniques. Here is the outline of the chapters:
1. Building Rapport and Capturing Attention
2. Developing Exceptional Communication Skills
3. Discovering Customer Needs and Pain Points
4. Presenting Solutions and Benefits
5. Overcoming Resistance and Objections
6. Closing the Sale
Here is the outline of the current chapter that:
{headline}
Write me a long and detailed text for the subpoint: {subheadline} of the current chapter and only write a text for this subpoint.
Ignore points like body language or tone of voice. Focus on the
Start by mentioning the Chapter and the subpoint.
The overall aim is to write a textbook.
to teach someone with less experience how to convince people and sell stuff.
"""
```
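As a minimal sketch (not the exact code from the repo linked above), a filled-in prompt can be sent to gpt-3.5-turbo with the OpenAI Python client roughly as follows; the client setup and function name are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_section(prompt: str) -> str:
    """Send one filled-in prompt (the f-string above) to gpt-3.5-turbo and return the generated text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```
|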
Duxiaoman-DI/FinanceIQ | 2023-09-22T14:50:42.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | Duxiaoman-DI | null | null | null | 6 | 20 | ---
license: cc-by-nc-sa-4.0
---
|
AnhTong/truyenfull | 2023-09-23T10:26:20.000Z | [
"region:us"
] | AnhTong | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: title
dtype: string
- name: link
dtype: string
- name: content
dtype: string
splits:
- name: ds_1
num_bytes: 688765404
num_examples: 47546
- name: ds_2
num_bytes: 686452540
num_examples: 49325
- name: ds_3
num_bytes: 662112505
num_examples: 46766
- name: ds_4
num_bytes: 631547999
num_examples: 47222
- name: ds_5
num_bytes: 645861526
num_examples: 49358
- name: ds_6
num_bytes: 669993661
num_examples: 49112
- name: ds_7
num_bytes: 662999345
num_examples: 48904
- name: ds_8
num_bytes: 713727150
num_examples: 49245
- name: ds_9
num_bytes: 651720408
num_examples: 48605
- name: ds_10
num_bytes: 966575566
num_examples: 48809
- name: ds_12
num_bytes: 762515180
num_examples: 49725
- name: ds_13
num_bytes: 686909655
num_examples: 48973
- name: ds_14
num_bytes: 610358320
num_examples: 48564
- name: ds_15
num_bytes: 616740599
num_examples: 49389
download_size: 4862424797
dataset_size: 9656279858
---
# Dataset Card for "truyenfull"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/dialogsum_ru | 2023-09-25T13:59:33.000Z | [
"region:us"
] | dim | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 19115158
num_examples: 12460
download_size: 9286024
dataset_size: 19115158
---
# Dataset Card for "dialogsum_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yirenlu/heroicons | 2023-09-26T23:11:38.000Z | [
"region:us"
] | yirenlu | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4277197.0
num_examples: 292
download_size: 4220955
dataset_size: 4277197.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "heroicons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hanifabdlh/quac-jr-indoqa | 2023-10-02T02:05:04.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 3243139
num_examples: 4314
download_size: 1619205
dataset_size: 3243139
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-jr-indoqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Alexandre-Numind/ValIE2 | 2023-10-03T12:11:15.000Z | [
"region:us"
] | Alexandre-Numind | null | null | null | 0 | 20 | Entry not found |
Alexandre-Numind/TrainIE2 | 2023-10-03T12:12:03.000Z | [
"region:us"
] | Alexandre-Numind | null | null | null | 0 | 20 | Entry not found |
Alexandre-Numind/TestIE2 | 2023-10-03T12:12:36.000Z | [
"region:us"
] | Alexandre-Numind | null | null | null | 0 | 20 | Entry not found |
danielpark/smiles_plc50 | 2023-10-03T20:07:01.000Z | [
"license:unknown",
"region:us"
] | danielpark | null | null | null | 0 | 20 | ---
license: unknown
---
Github Repository: [SMILES Featurizer](https://github.com/dsdanielpark/SMILES-featurizer)
```
curl -LO https://raw.githubusercontent.com/dsdanielpark/smiles-featurizer/master/dataset/smiles-plc50-train.csv
```
Alternatively, you can use the dataset uploaded to Hugging Face as follows.
```python
import pandas as pd
from datasets import load_dataset

dataset = load_dataset("danielpark/smiles_plc50", "csv")
df = pd.DataFrame(dataset['train'])
```
|
Alamerton/small-sycophancy-dataset | 2023-10-04T09:26:38.000Z | [
"region:us"
] | Alamerton | null | null | null | 0 | 20 | Entry not found |
AayushShah/SQL_SparC_Dataset_With_Schema | 2023-10-06T11:46:48.000Z | [
"region:us"
] | AayushShah | null | null | null | 0 | 20 | ---
dataset_info:
features:
- name: database_id
dtype: string
- name: query
dtype: string
- name: question
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 3249206
num_examples: 3456
download_size: 288326
dataset_size: 3249206
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SQL_SparC_Dataset_With_Schema"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |