author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
hugginglearners | null | null | null | false | 1 | false | hugginglearners/twitter-dataset-tesla | 2022-08-18T04:35:32.000Z | null | false | 51a56ad8fb8f136d3c068a56a842dc65fec09ec2 | [] | [
"license:cc0-1.0",
"kaggle_id:vishesh1412/twitter-dataset-tesla"
] | https://huggingface.co/datasets/hugginglearners/twitter-dataset-tesla/resolve/main/README.md | ---
license:
- cc0-1.0
kaggle_id: vishesh1412/twitter-dataset-tesla
---
# Dataset Card for Twitter Dataset: Tesla
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/vishesh1412/twitter-dataset-tesla
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains Tweets tagged #Tesla or #tesla collected up to 12-07-2022 (dd-mm-yyyy). It can be used for sentiment-analysis research, for other NLP tasks, or just for fun.
It contains 10,000 recent Tweets together with the user ID, the hashtags used in each Tweet, and other important features.
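As a quick illustration of working with the hashtags mentioned above, here is a minimal, self-contained sketch; the tweet text is made up and the helper is hypothetical, not part of the dataset's actual schema:

```python
import re

def extract_hashtags(text):
    """Return the hashtags in a tweet, without the leading '#'."""
    return re.findall(r"#(\w+)", text)

# A made-up tweet in the style of this dataset's records.
print(extract_hashtags("Charging my #Tesla before the road trip #tesla #EV"))
# → ['Tesla', 'tesla', 'EV']
```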
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@vishesh1412](https://kaggle.com/vishesh1412)
### Licensing Information
The license for this dataset is CC0 1.0.
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
asaxena1990 | null | null | null | false | 1 | false | asaxena1990/NSME-COM | 2022-08-18T07:26:54.000Z | acronym-identification | false | cb3ebb1e94d100854a2fdf305474b6530007f992 | [] | [
"annotations_creators:other",
"language_creators:other",
"language:en",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:text2text-... | https://huggingface.co/datasets/asaxena1990/NSME-COM/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
- text2text-generation
- other
- translation
- conversational
task_ids:
- extractive-qa
- closed-domain-qa
- utterance-retrieval
- document-retrieval
- open-book-qa
- closed-book-qa
paperswithcode_id: acronym-identification
pretty_name: Massive E-commerce Dataset for the Retail and Insurance Domains
train-eval-index:
- config: nsds
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
sentence: text
label: target
metrics:
- type: nsme-com
name: NSME-COM
config: nsds
tags:
- chatbots
- e-commerce
- retail
- insurance
- consumer
- consumer goods
configs:
- nsds
---
# Dataset Card for NSME-COM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://huggingface.co/asaxena1990](https://huggingface.co/asaxena1990)
- **Repository:** [https://huggingface.co/datasets/asaxena1990/NSME-COM](https://huggingface.co/datasets/asaxena1990/NSME-COM)
- **Point of Contact:** Ayushman Dash <ayushman@neuralspace.ai>, Ankur Saxena <ankursaxena@neuralspace.ai>
- **Size of downloaded dataset files:** 10.86 KB
### Dataset Summary
NSME-COM, the NeuralSpace Massive E-commerce Dataset, is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
NSME-COM comprises the following configuration:
#### nsds
A manually curated, domain-specific dataset built by data engineers at [NeuralSpace](https://www.neuralspace.ai/) for rare e-commerce domains such as Insurance and Retail, intended to let NLP researchers and practitioners evaluate state-of-the-art models in 100+ languages. The dataset files are available in JSON format.
### Languages
The language data in NSME-COM is in English (BCP-47 `en`)
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 10.86 KB
An example of 'test' looks as follows.
```json
{
  "text": "is it good to add roadside assistance?",
  "intent": "Add",
  "type": "Test"
}
```
An example of 'train' looks as follows.
```json
{
  "text": "how can I add my spouse as a nominee?",
  "intent": "Add",
  "type": "Train"
}
```
### Data Fields
The data fields are the same among all splits.
#### nsds
- `text`: a `string` feature.
- `intent`: a `string` feature.
- `type`: the split indicator, with possible values `Train` or `Test`.
### Data Splits
#### nsds
| |train|test|
|----|----:|---:|
|nsds| 1725| 406|
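Because each record carries its split in the `type` field, the two splits can be recovered from a flat list of records. A minimal sketch, using toy records shaped like the card's example instances:

```python
from collections import defaultdict

# Toy records mirroring the example instances shown in this card.
records = [
    {"text": "how can I add my spouse as a nominee?", "intent": "Add", "type": "Train"},
    {"text": "is it good to add roadside assistance?", "intent": "Add", "type": "Test"},
]

# Group records into splits keyed by a normalized split name.
splits = defaultdict(list)
for record in records:
    splits[record["type"].lower()].append(record)

print(len(splits["train"]), len(splits["test"]))  # 1 1
```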
### Contributions
Ankur Saxena (ankursaxena@neuralspace.ai) |
biglam | null | null | null | false | 188 | false | biglam/oldbookillustrations | 2022-08-22T14:32:05.000Z | null | true | f9f260909bef5972c4ee28a34aaad2b644c2781f | [] | [
"annotations_creators:expert-generated",
"language:en",
"language:fr",
"language:de",
"language_creators:expert-generated",
"license:cc-by-nc-4.0",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:lam",
"tags:1800-1900",
"task_categories:text-to-ima... | https://huggingface.co/datasets/biglam/oldbookillustrations/resolve/main/README.md | |
lhoestq | null | null | null | false | 1 | false | lhoestq/nllb | 2022-08-18T10:24:52.000Z | null | false | a5b0063204603a74232d1990ea5029171beabc27 | [] | [
"arxiv:2205.12654",
"arxiv:2207.04672"
] | https://huggingface.co/datasets/lhoestq/nllb/resolve/main/README.md | # Dataset Card for No Language Left Behind (NLLB - 200vo)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2207.04672
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset was created based on [metadata](https://github.com/facebookresearch/fairseq/tree/nllb) for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs using the stopes mining library and the LASER3 encoders (Heffernan et al., 2022).
#### How to use the data
There are two ways to access the data:
* Via the Hugging Face Python datasets library
```python
from datasets import load_dataset

dataset = load_dataset("allenai/nllb")
```
For accessing a particular [language pair](https://huggingface.co/datasets/allenai/nllb/blob/main/nllb_lang_pairs.py):
```python
from datasets import load_dataset

dataset = load_dataset("allenai/nllb", "ace_Latn-ban_Latn")
```
* Clone the git repo
```bash
git lfs install
git clone https://huggingface.co/datasets/allenai/nllb
```
### Supported Tasks and Leaderboards
N/A
### Languages
Language pairs can be found [here](https://huggingface.co/datasets/allenai/nllb/blob/main/nllb_lang_pairs.py).
## Dataset Structure
The dataset contains gzipped, tab-delimited text files for each direction. Each text file contains lines with parallel sentences.
### Data Instances
[More Information Needed]
### Data Fields
Every instance for a language pair contains the following fields: 'translation' (containing sentence pairs), 'laser_score', 'source_sentence_lid', 'target_sentence_lid', where 'lid' is language classification probability, 'source_sentence_source', 'source_sentence_url', 'target_sentence_source', 'target_sentence_url'.
* Sentence in first language
* Sentence in second language
* LASER score
* Language ID score for first sentence
* Language ID score for second sentence
* First sentence source (https://github.com/facebookresearch/LASER/tree/main/data/nllb200)
* First sentence URL if the source is crawl-data/\*; _ otherwise
* Second sentence source
* Second sentence URL if the source is crawl-data/\*; _ otherwise
The lines are sorted by LASER3 score in decreasing order.
Example:
```
{'translation': {'ace_Latn': 'Gobnyan hana geupeukeucewa gata atawa geutinggai meunan mantong gata."',
'ban_Latn': 'Ida nenten jaga manggayang wiadin ngutang semeton."'},
'laser_score': 1.2499876022338867,
'source_sentence_lid': 1.0000100135803223,
'target_sentence_lid': 0.9991400241851807,
'source_sentence_source': 'paracrawl9_hieu',
'source_sentence_url': '_',
'target_sentence_source': 'crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/wet/CC-MAIN-20200219153707-20200219183707-00232.warc.wet.gz',
'target_sentence_url': 'https://alkitab.mobi/tb/Ula/31/6/\n'}
```
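Since the lines are sorted by LASER score, a simple quality filter is a score threshold. A sketch over records shaped like the example above; the 1.06 cutoff is an arbitrary illustration, not a recommendation from the NLLB authors:

```python
# Hypothetical mined pairs shaped like the example record above.
pairs = [
    {"laser_score": 1.25, "translation": {"ace_Latn": "…", "ban_Latn": "…"}},
    {"laser_score": 1.02, "translation": {"ace_Latn": "…", "ban_Latn": "…"}},
]

THRESHOLD = 1.06  # arbitrary illustrative cutoff
kept = [p for p in pairs if p["laser_score"] >= THRESHOLD]
print(len(kept))  # 1
```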
### Data Splits
The data is not split. Given the noisy nature of the overall process, we recommend using the data only for training, and using other datasets like [Flores-200](https://github.com/facebookresearch/flores) for evaluation. The data includes some development and test sets from other datasets, such as xlsum. In addition, sourcing data from multiple web crawls is likely to produce incidental overlap with other test sets.
## Dataset Creation
### Curation Rationale
Data was filtered based on language identification, emoji based filtering, and for some high-resource languages using a language model. For more details on data filtering please refer to Section 5.2 (NLLB Team et al., 2022).
### Source Data
#### Initial Data Collection and Normalization
Monolingual data was collected from the following sources:
| Name in data | Source |
|------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| afriberta | https://github.com/castorini/afriberta |
| americasnlp | https://github.com/AmericasNLP/americasnlp2021/ |
| bho_resources | https://github.com/shashwatup9k/bho-resources |
| crawl-data/* | WET files from https://commoncrawl.org/the-data/get-started/ |
| emcorpus | http://lepage-lab.ips.waseda.ac.jp/en/projects/meiteilon-manipuri-language-resources/ |
| fbseed20220317 | https://github.com/facebookresearch/flores/tree/main/nllb_seed |
| giossa_mono | https://github.com/sgongora27/giossa-gongora-guarani-2021 |
| iitguwahati | https://github.com/priyanshu2103/Sanskrit-Hindi-Machine-Translation/tree/main/parallel-corpus |
| indic | https://indicnlp.ai4bharat.org/corpora/ |
| lacunaner | https://github.com/masakhane-io/lacuna_pos_ner/tree/main/language_corpus |
| leipzig | Community corpora from https://wortschatz.uni-leipzig.de/en/download for each year available |
| lowresmt2020 | https://github.com/panlingua/loresmt-2020 |
| masakhanener | https://github.com/masakhane-io/masakhane-ner/tree/main/MasakhaNER2.0/data |
| nchlt | https://repo.sadilar.org/handle/20.500.12185/299 <br>https://repo.sadilar.org/handle/20.500.12185/302 <br>https://repo.sadilar.org/handle/20.500.12185/306 <br>https://repo.sadilar.org/handle/20.500.12185/308 <br>https://repo.sadilar.org/handle/20.500.12185/309 <br>https://repo.sadilar.org/handle/20.500.12185/312 <br>https://repo.sadilar.org/handle/20.500.12185/314 <br>https://repo.sadilar.org/handle/20.500.12185/315 <br>https://repo.sadilar.org/handle/20.500.12185/321 <br>https://repo.sadilar.org/handle/20.500.12185/325 <br>https://repo.sadilar.org/handle/20.500.12185/328 <br>https://repo.sadilar.org/handle/20.500.12185/330 <br>https://repo.sadilar.org/handle/20.500.12185/332 <br>https://repo.sadilar.org/handle/20.500.12185/334 <br>https://repo.sadilar.org/handle/20.500.12185/336 <br>https://repo.sadilar.org/handle/20.500.12185/337 <br>https://repo.sadilar.org/handle/20.500.12185/341 <br>https://repo.sadilar.org/handle/20.500.12185/343 <br>https://repo.sadilar.org/handle/20.500.12185/346 <br>https://repo.sadilar.org/handle/20.500.12185/348 <br>https://repo.sadilar.org/handle/20.500.12185/353 <br>https://repo.sadilar.org/handle/20.500.12185/355 <br>https://repo.sadilar.org/handle/20.500.12185/357 <br>https://repo.sadilar.org/handle/20.500.12185/359 <br>https://repo.sadilar.org/handle/20.500.12185/362 <br>https://repo.sadilar.org/handle/20.500.12185/364 |
| paracrawl-2022-* | https://data.statmt.org/paracrawl/monolingual/ |
| paracrawl9* | https://paracrawl.eu/moredata the monolingual release |
| pmi | https://data.statmt.org/pmindia/ |
| til | https://github.com/turkic-interlingua/til-mt/tree/master/til_corpus |
| w2c | https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9 |
| xlsum | https://github.com/csebuetnlp/xl-sum |
#### Who are the source language producers?
Text was collected from the web and various monolingual data sets, many of which are also web crawls. This may have been written by people, generated by templates, or in some cases be machine translation output.
### Annotations
#### Annotation process
Parallel sentences in the monolingual data were identified using LASER3 encoders. (Heffernan et al., 2022)
#### Who are the annotators?
The data was not human annotated.
### Personal and Sensitive Information
Data may contain personally identifiable information, sensitive content, or toxic content that was publicly shared on the Internet.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides data for training machine learning systems for many languages that have low resources available for NLP.
### Discussion of Biases
Biases in the data have not been specifically studied; however, as the original source of the data is the World Wide Web, it is likely that the data has biases similar to those prevalent on the Internet. The data may also exhibit biases introduced by language identification and data-filtering techniques; lower-resource languages generally have lower accuracy.
### Other Known Limitations
Some of the translations are in fact machine translations. While some website machine-translation tools are identifiable from the HTML source, these tools were not filtered out en masse because raw HTML was not available from some sources and Common Crawl processing started from WET files.
## Additional Information
### Dataset Curators
The data was not curated.
### Licensing Information
The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound to the respective Terms of Use and License of the original source.
### Citation Information
Heffernan et al., Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages. arXiv https://arxiv.org/abs/2205.12654, 2022.<br>
NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, Arxiv https://arxiv.org/abs/2207.04672, 2022.
### Contributions
We thank the NLLB Meta AI team for open sourcing the meta data and instructions on how to use it with special thanks to Bapi Akula, Pierre Andrews, Onur Çelebi, Sergey Edunov, Kenneth Heafield, Philipp Koehn, Alex Mourachko, Safiyyah Saleem, Holger Schwenk, and Guillaume Wenzek. We also thank the AllenNLP team at AI2 for hosting and releasing this data, including Akshita Bhagia (for engineering efforts to host the data, and create the huggingface dataset), and Jesse Dodge (for organizing the connection). |
sfurkan | null | null | null | false | 1 | false | sfurkan/Kanun-Yonetmelik-Tuzuk | 2022-08-18T14:02:19.000Z | null | false | aa70586d2497e4ae6477874d8de2d0d30fa7ac48 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/sfurkan/Kanun-Yonetmelik-Tuzuk/resolve/main/README.md | ---
license: apache-2.0
---
|
SLPL | null | @misc{https://doi.org/10.48550/arxiv.2208.13486,
doi = {10.48550/ARXIV.2208.13486},
url = {https://arxiv.org/abs/2208.13486},
author = {Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {naab: A ready-to-use plug-and-play corpus for Farsi},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
} | Huge corpora of textual data are always known to be a crucial need for training deep models such as transformer-based ones. This issue is emerging more in lower resource languages - like Farsi. We propose naab, the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب which means pure and high-grade. | false | 18 | false | SLPL/naab | 2022-11-03T06:33:48.000Z | null | false | c0ffda60b8b5a0e9ec63360548be8d53f955246f | [] | [
"arxiv:2208.13486",
"language:fa",
"license:mit",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling"
] | https://huggingface.co/datasets/SLPL/naab/resolve/main/README.md | ---
language:
- fa
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab (A ready-to-use plug-and-play corpus in Farsi)
---
# naab: A ready-to-use plug-and-play corpus in Farsi
_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
- **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)
### Dataset Summary
naab is the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب, which means pure and high-grade. We also provide the raw version of the corpus, called naab-raw, and an easy-to-use pre-processor that can be employed by those who want to build a customized corpus.
You can use this corpus by the commands below:
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab")
```
You may also want to download only parts/splits of this corpus; if so, use the command below (you can find more ways to slice splits [here](https://huggingface.co/docs/datasets/loading#slice-splits)):
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab", split="train[:10%]")
```
**Note: make sure your machine has at least 130 GB of free space; the download may also take a while. If you are short on disk space or bandwidth, you can use the code snippet below to download only your custom sections of naab:**
```python
from datasets import load_dataset
# ==========================================================
# You should just change this part in order to download your
# parts of corpus.
indices = {
"train": [5, 1, 2],
"test": [0, 2]
}
# ==========================================================
N_FILES = {
"train": 126,
"test": 3
}
_BASE_URL = "https://huggingface.co/datasets/SLPL/naab/resolve/main/data/"
data_url = {
"train": [_BASE_URL + "train-{:05d}-of-{:05d}.txt".format(x, N_FILES["train"]) for x in range(N_FILES["train"])],
"test": [_BASE_URL + "test-{:05d}-of-{:05d}.txt".format(x, N_FILES["test"]) for x in range(N_FILES["test"])],
}
for index in indices['train']:
assert index < N_FILES['train']
for index in indices['test']:
assert index < N_FILES['test']
data_files = {
"train": [data_url['train'][i] for i in indices['train']],
"test": [data_url['test'][i] for i in indices['test']]
}
print(data_files)
dataset = load_dataset('text', data_files=data_files, use_auth_token=True)
```
### Supported Tasks and Leaderboards
This corpus can be used for training any language model trainable with Masked Language Modeling (MLM) or another self-supervised objective.
- `language-modeling`
- `masked-language-modeling`
## Dataset Structure
Each row of the dataset looks like the following:
```json
{
'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.",
}
```
+ `text` : the textual paragraph.
### Data Splits
This dataset includes two splits (`train` and `test`). We created them by dividing a randomly permuted version of the corpus into a (95%, 5%) split for (`train`, `test`). Since `validation` usually happens during training on the `train` split, we do not provide a separate split for it.
| | train | test |
|-------------------------|------:|-----:|
| Input Sentences | 225,892,925 | 11,083,849 |
| Average Sentence Length | 61 | 25 |
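The 95%/5% split procedure described above can be sketched as follows; the paragraph list and seed are toy stand-ins, not the actual corpus or pipeline:

```python
import random

# Toy stand-in for the corpus paragraphs.
paragraphs = [f"paragraph {i}" for i in range(100)]

rng = random.Random(0)  # arbitrary seed, for reproducibility of the sketch
rng.shuffle(paragraphs)

# 95% train / 5% test, as described above.
cut = int(0.95 * len(paragraphs))
train, test = paragraphs[:cut], paragraphs[cut:]
print(len(train), len(test))  # 95 5
```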
Below you can see the log-based histogram of word/paragraph over the two splits of the dataset.
<div align="center">
<img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-hist.png">
</div>
## Dataset Creation
### Curation Rationale
Due to the lack of a huge amount of text data in lower-resource languages - like Farsi - researchers working on these languages have always found it hard to start fine-tuning such models. This phenomenon can lead to a situation in which the golden opportunity for fine-tuning models lies in the hands of only a few companies or countries, which contributes to weakening open science.
The previous biggest cleaned, merged textual corpus in Farsi was a 70GB cleaned text corpus compiled from 8 big datasets that have been cleaned and can be downloaded directly. Our solution to the discussed issues is called naab. It provides **126GB** (including more than **224 million** sequences and nearly **15 billion** words) as the training corpus and **2.3GB** (including nearly **11 million** sequences and nearly **300 million** words) as the test corpus.
### Source Data
The textual corpora that we used as our source data are illustrated in the figure below. It contains 5 corpora which are linked in the coming sections.
<div align="center">
<img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-pie.png">
</div>
#### Persian NLP
[This](https://github.com/persiannlp/persian-raw-text) corpus includes eight corpora that are sorted based on their volume as below:
- [Common Crawl](https://commoncrawl.org/): 65GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt))
- [MirasText](https://github.com/miras-tech/MirasText): 12G
- [W2C – Web to Corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9): 1GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/w2c_merged.txt))
- Persian Wikipedia (March 2020 dump): 787MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/fawiki_merged.txt))
- [Leipzig Corpora](https://corpora.uni-leipzig.de/): 424M ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/LeipzigCorpus.txt))
- [VOA corpus](https://jon.dehdari.org/corpora/): 66MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/voa_persian_2003_2008_cleaned.txt))
- [Persian poems corpus](https://github.com/amnghd/Persian_poems_corpus): 61MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/poems_merged.txt))
- [TEP: Tehran English-Persian parallel corpus](http://opus.nlpl.eu/TEP.php): 33MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/TEP_fa.txt))
#### AGP
This corpus was a formerly private corpus for ASR Gooyesh Pardaz which is now published for all users by this project. This corpus contains more than 140 million paragraphs summed up in 23GB (after cleaning). This corpus is a mixture of both formal and informal paragraphs that are crawled from different websites and/or social media.
#### OSCAR-fa
[OSCAR](https://oscar-corpus.com/), or Open Super-large Crawled ALMAnaCH coRpus, is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form. We used the unshuffled-deduplicated-fa subset of this corpus; after cleaning, about 36GB remained.
#### Telegram
Telegram, a cloud-based instant messaging service, is a widely used application in Iran. Following this hypothesis, we prepared a list of Telegram channels in Farsi covering various topics including sports, daily news, jokes, movies and entertainment, etc. The text data extracted from mentioned channels mainly contains informal data.
#### LSCP
[The Large Scale Colloquial Persian Language Understanding dataset](https://iasbs.ac.ir/~ansari/lscp/) has 120M sentences from 27M casual Persian sentences with their derivation trees, part-of-speech tags, sentiment polarity, and translations in English, German, Czech, Italian, and Hindi. However, we used only the Farsi part of it, and after cleaning 2.3GB of it remained. Since the dataset is casual, it may help our corpus contain more informal sentences, although their proportion relative to formal paragraphs is not comparable.
#### Initial Data Collection and Normalization
The data collection process was separated into two parts. In the first part, we searched for existing corpora. After downloading these corpora, we started to crawl data from some social networks. Then, thanks to [ASR Gooyesh Pardaz](https://asr-gooyesh.com/en/), we were provided with enough textual data to start the naab journey.
We used a preprocessor based on stream-based Linux shell commands so that the process is less time- and memory-consuming. The code is provided [here](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess).
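As a rough, hypothetical illustration of one streaming cleanup step (whitespace normalization and empty-line dropping) - not the project's actual preprocessor - a generator like this processes the corpus line by line without loading it into memory:

```python
import re

def clean_stream(lines):
    """Yield paragraphs with runs of whitespace collapsed; skip empty lines."""
    for line in lines:
        text = re.sub(r"\s+", " ", line).strip()
        if text:
            yield text

# Toy input with extra whitespace and a blank line.
sample = ["  این   یک  تست است \n", "   \n", "ناب\n"]
print(list(clean_stream(sample)))
```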
### Personal and Sensitive Information
Since this corpus is essentially a compilation of former corpora, we take no responsibility for personal information included in it. If you detect any such violation, please let us know and we will try our best to remove it from the corpus as soon as possible.
We tried our best to provide anonymity while keeping the crucial information. We shuffled parts of the corpus so that information passed through possible conversations would not be harmful.
## Additional Information
### Dataset Curators
+ Sadra Sabouri (Sharif University of Technology)
+ Elnaz Rahmati (Sharif University of Technology)
### Licensing Information
MIT
### Citation Information
```
@article{sabouri2022naab,
title={naab: A ready-to-use plug-and-play corpus for Farsi},
author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
journal={arXiv preprint arXiv:2208.13486},
year={2022}
}
```
DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486)
### Contributions
Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset.
### Keywords
+ Farsi
+ Persian
+ raw text
+ پیکره فارسی
+ پیکره متنی
+ آموزش مدل زبانی
|
SLPL | null | @misc{https://doi.org/10.48550/arxiv.2208.13486,
doi = {10.48550/ARXIV.2208.13486},
url = {https://arxiv.org/abs/2208.13486},
author = {Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {naab: A ready-to-use plug-and-play corpus for Farsi},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
} | Huge corpora of textual data are always known to be a crucial need for training deep models such as transformer-based ones. This issue is emerging more in lower resource languages - like Farsi. We propose naab, the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب which means pure and high-grade. This corpus contains the raw (uncleaned) version of it. | false | 7 | false | SLPL/naab-raw | 2022-11-03T06:34:28.000Z | null | false | 447ead3773dc665d37157e84483e5235f8aeb4ad | [] | [
"arxiv:2208.13486",
"language:fa",
"license:mit",
"multilinguality:monolingual",
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling"
] | https://huggingface.co/datasets/SLPL/naab-raw/resolve/main/README.md | ---
language:
- fa
license:
- mit
multilinguality:
- monolingual
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab-raw (raw version of the naab corpus)
---
# naab-raw (raw version of the naab corpus)
_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Changelog](#changelog)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Contribution Guideline](#contribution-guideline)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
- **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)
### Dataset Summary
This is the raw (uncleaned) version of the [naab](https://huggingface.co/datasets/SLPL/naab) corpus. You can also customize our [preprocess script](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess) and make your own cleaned corpus. This repository is a hub for all Farsi corpora. Feel free to add your corpus following the [contribution guidelines](#contribution-guideline).
You can download the dataset by the command below:
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab-raw")
```
If you want to download a specific part of the corpus, set the config name to that corpus name:
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab-raw", "CC-fa")
```
### Supported Tasks and Leaderboards
This corpus can be used to train any language model with a masked language modeling (MLM) or other self-supervised objective.
- `language-modeling`
- `masked-language-modeling`
### Changelog
Projects that change periodically should keep a log of those changes. Please refer to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md) for details.
## Dataset Structure
Each row of the dataset looks like the following:
```json
{
  "text": "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است."
}
```
+ `text` : the textual paragraph.
### Data Splits
This corpus contains only one split (the `train` split).
## Dataset Creation
### Curation Rationale
Here are some details about each part of this corpus.
#### CC-fa
The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata, and text extractions. We use the Farsi part of it here.
#### W2C
W2C stands for Web to Corpus, and it contains several corpora. We include the Farsi part of it in this corpus.
### Contribution Guideline
To add your dataset, follow the steps below and open a pull request to have it merged into _naab-raw_:
1. Add your dataset to `_CORPUS_URLS` in `naab-raw.py` like:
```python
...
"DATASET_NAME": "LINK_TO_A_PUBLIC_DOWNLOADABLE_FILE.txt"
...
```
2. Add a log of your changes to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md).
3. Add some minor descriptions to the [Curation Rationale](#curation-rationale) under a subsection with your dataset name.
### Personal and Sensitive Information
Since this corpus is essentially a compilation of former corpora, we take no responsibility for personal information included in it. If you detect any such violation, please let us know and we will try our best to remove it from the corpus as soon as possible.
We tried our best to provide anonymity while keeping the crucial information. We shuffled parts of the corpus so that information passed through possible conversations would not be harmful.
## Additional Information
### Dataset Curators
+ Sadra Sabouri (Sharif University of Technology)
+ Elnaz Rahmati (Sharif University of Technology)
### Licensing Information
MIT
### Citation Information
```
@article{sabouri2022naab,
title={naab: A ready-to-use plug-and-play corpus for Farsi},
author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
journal={arXiv preprint arXiv:2208.13486},
year={2022}
}
```
DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486)
### Contributions
Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset.
### Keywords
+ Farsi
+ Persian
+ raw text
+ پیکره فارسی
+ پیکره متنی
+ آموزش مدل زبانی
|
projecte-aina | null | WikiCAT: Text Classification Catalan dataset from the Viquipedia | false | 6 | false | projecte-aina/WikiCAT_ca | 2022-11-16T15:33:34.000Z | null | false | a7da4079b185e7e0e405045aa6f64d8588553a3d | [] | [
"annotations_creators:automatically-generated",
"language_creators:found",
"language:ca",
"license:cc-by-sa-3.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/projecte-aina/WikiCAT_ca/resolve/main/README.md | ---
YAML tags:
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wikicat_ca
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# WikiCAT_ca: Catalan Text Classification dataset
## Dataset Description
- **Paper:**
- **Point of Contact:** Carlos Rodríguez-Penagos (carlos.rodriguez1@bsc.es)
**Repository**
https://github.com/TeMU-BSC/WikiCAT
### Dataset Summary
WikiCAT_ca is a Catalan corpus for thematic text classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 13201 articles from the Viquipèdia (Catalan Wikipedia) classified under 19 different categories.
This dataset was developed by BSC TeMU as part of the AINA project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora.
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
CA- Catalan
## Dataset Structure
### Data Instances
Two json files, one for each split.
### Data Fields
We used a simple model with the article text and associated labels, without further metadata.
#### Example:
<pre>
{"version": "1.1.0",
"data":
[
{
'sentence': ' Celsius és conegut com l\'inventor de l\'escala centesimal del termòmetre. Encara que aquest instrument és un invent molt antic, la història de la seva gradació és molt més capritxosa. Durant el segle xvi era graduat com "fred" col·locant-lo (...)',
'label': 'Ciència'
},
.
.
.
]
}
</pre>
#### Labels
'Història', 'Tecnologia', 'Humanitats', 'Economia', 'Dret', 'Esport', 'Política', 'Govern', 'Entreteniment', 'Natura', 'Exèrcit', 'Salut_i_benestar_social', 'Matemàtiques', 'Filosofia', 'Ciència', 'Música', 'Enginyeria', 'Empresa', 'Religió'
### Data Splits
* hfeval_ca.json: 3970 label-document pairs
* hftrain_ca.json: 9231 label-document pairs
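Given the JSON structure shown in the example above (`{"version": …, "data": [{"sentence": …, "label": …}, …]}`), a split file can be read into (sentence, label) pairs with the standard library alone. A minimal sketch — the file names follow the split listing above:

```python
import json
from collections import Counter

def load_wikicat(path: str):
    """Parse a WikiCAT split file into (sentence, label) pairs."""
    with open(path, encoding="utf-8") as f:
        payload = json.load(f)
    return [(row["sentence"], row["label"]) for row in payload["data"]]

# e.g. pairs = load_wikicat("hftrain_ca.json")
#      Counter(label for _, label in pairs)  # per-category counts
```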
## Dataset Creation
### Methodology
“Category” starting pages are chosen to represent the topics in each language.
We extract, for each category, the main pages, as well as the subcategory pages, and the individual pages under this first level.
For each page, the "summary" provided by Wikipedia is also extracted as the representative text.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The source data are thematic categories in the different Wikipedias
#### Who are the source language producers?
### Annotations
#### Annotation process
Automatic annotation
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a> license.
### Contributions
[N/A]
| |
narad | null | \ | \ | false | 263 | false | narad/ravdess | 2022-11-02T03:21:19.000Z | null | false | 2894394c52a8621bf8bb2e4d7c3b9cf77f6fa80e | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:audio-classification",
"task_ids:audio-emotion-recognition"
] | https://huggingface.co/datasets/narad/ravdess/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- audio-classification
task_ids:
- audio-emotion-recognition
---
# Dataset Card for RAVDESS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://www.kaggle.com/datasets/uwrfkaggler/ravdess-emotional-speech-audio
- **Repository:**
- **Paper:**
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0196391
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS)
Speech audio-only files (16bit, 48kHz .wav) from the RAVDESS. Full dataset of speech and song, audio and video (24.8 GB) available from Zenodo. Construction and perceptual validation of the RAVDESS is described in our Open Access paper in PLoS ONE.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
The dataset repository contains only preprocessing scripts. When the dataset is loaded and no cached version is found, it is automatically downloaded and a .tsv file is created with all data instances saved as rows in a table.
### Data Instances
[More Information Needed]
### Data Fields
- "audio": a datasets.Audio representation of the spoken utterance,
- "text": a datasets.Value string representation of spoken utterance,
- "labels": a datasets.ClassLabel representation of the emotion label,
- "speaker_id": a datasets.Value string representation of the speaker ID,
- "speaker_gender": a datasets.Value string representation of the speaker gender
### Data Splits
All data is in the train partition.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Original Data from the Zenodo release of the RAVDESS Dataset:
Files
This portion of the RAVDESS contains 1440 files: 60 trials per actor x 24 actors = 1440. The RAVDESS contains 24 professional actors (12 female, 12 male), vocalizing two lexically-matched statements in a neutral North American accent. Speech emotions include calm, happy, sad, angry, fearful, surprise, and disgust expressions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression.
File naming convention
Each of the 1440 files has a unique filename. The filename consists of a 7-part numerical identifier (e.g., 03-01-06-01-02-01-12.wav). These identifiers define the stimulus characteristics:
Filename identifiers
Modality (01 = full-AV, 02 = video-only, 03 = audio-only).
Vocal channel (01 = speech, 02 = song).
Emotion (01 = neutral, 02 = calm, 03 = happy, 04 = sad, 05 = angry, 06 = fearful, 07 = disgust, 08 = surprised).
Emotional intensity (01 = normal, 02 = strong). NOTE: There is no strong intensity for the 'neutral' emotion.
Statement (01 = "Kids are talking by the door", 02 = "Dogs are sitting by the door").
Repetition (01 = 1st repetition, 02 = 2nd repetition).
Actor (01 to 24. Odd numbered actors are male, even numbered actors are female).
Filename example: 03-01-06-01-02-01-12.wav
Audio-only (03)
Speech (01)
Fearful (06)
Normal intensity (01)
Statement "dogs" (02)
1st Repetition (01)
12th Actor (12)
Female, as the actor ID number is even.
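The naming convention above is mechanical enough to decode in a few lines of Python. A sketch — the dictionary keys in the returned record are my own names, while the code positions and value mappings follow the convention described above:

```python
EMOTIONS = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
            "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}
MODALITIES = {"01": "full-AV", "02": "video-only", "03": "audio-only"}
STATEMENTS = {"01": "Kids are talking by the door",
              "02": "Dogs are sitting by the door"}

def parse_ravdess_filename(name: str) -> dict:
    """Decode the 7-part RAVDESS identifier, e.g. '03-01-06-01-02-01-12.wav'."""
    modality, channel, emotion, intensity, statement, repetition, actor = (
        name.removesuffix(".wav").split("-")
    )
    return {
        "modality": MODALITIES[modality],
        "vocal_channel": "speech" if channel == "01" else "song",
        "emotion": EMOTIONS[emotion],
        "intensity": "normal" if intensity == "01" else "strong",
        "statement": STATEMENTS[statement],
        "repetition": int(repetition),
        "actor_id": int(actor),
        # Odd-numbered actors are male, even-numbered actors are female.
        "actor_gender": "male" if int(actor) % 2 else "female",
    }
```

For the worked example above, `parse_ravdess_filename("03-01-06-01-02-01-12.wav")` yields an audio-only, fearful, normal-intensity "dogs" utterance by the (female) 12th actor.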
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information
How to cite the RAVDESS
Academic citation
If you use the RAVDESS in an academic publication, please use the following citation: Livingstone SR, Russo FA (2018) The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5): e0196391. https://doi.org/10.1371/journal.pone.0196391.
All other attributions
If you use the RAVDESS in a form other than an academic publication, such as in a blog post, school project, or non-commercial product, please use the following attribution: "The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS)" by Livingstone & Russo is licensed under CC BY-NC-SA 4.0.
### Contributions
Thanks to [@narad](https://github.com/narad) for adding this dataset. |
winvoker | null | @inproceedings{gupta2019lvis,
title={ LVIS: A Dataset for Large Vocabulary Instance Segmentation},
author={Gupta, Agrim and Dollar, Piotr and Girshick, Ross},
booktitle={Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition},
year={2019}
} | Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced `el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ~2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. | false | 1 | false | winvoker/lvis | 2022-08-22T15:57:57.000Z | null | false | b4553ee0b6e28797af2d78fc9ea24edd71a9270c | [] | [
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"tags:segmentation",
"tags:coco",
"task_categories:image-segmentation",
"task_ids:instance-segmentation"
] | https://huggingface.co/datasets/winvoker/lvis/resolve/main/README.md | ---
viewer: false
annotations_creators: []
language: []
language_creators: []
license:
- cc-by-4.0
pretty_name: lvis
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- segmentation
- coco
task_categories:
- image-segmentation
task_ids:
- instance-segmentation
---
# LVIS
### Dataset Summary
This dataset is a Hugging Face `datasets` implementation of the LVIS dataset. Please visit the original website for more information.
- https://www.lvisdataset.org/
### Loading
This code returns the train, validation and test generators.
```python
from datasets import load_dataset
dataset = load_dataset("winvoker/lvis")
```
`objects` is a dictionary containing annotation information such as the bounding box and class.
```
DatasetDict({
train: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 100170
})
validation: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 4809
})
test: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 19822
})
})
```
### Access Generators
```python
train = dataset["train"]
validation = dataset["validation"]
test = dataset["test"]
```
An example row is as follows.
```json
{
  "id": 0,
  "image": "000000437561.jpg",
  "height": 480,
  "width": 640,
  "objects": {
    "bboxes": [[392, 271, 14, 3]],
    "classes": [117],
    "segmentation": [[376, 272, 375, 270, 372, 269, 371, 269, 373, 269, 373]]
  }
}
``` |
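Assuming the bounding boxes follow the COCO-style `[x, y, width, height]` convention (an assumption worth verifying against the original LVIS annotations — the card does not state it), converting them to corner coordinates for plotting is a one-liner:

```python
def xywh_to_xyxy(bbox):
    """Convert a COCO-style [x, y, w, h] box to [x1, y1, x2, y2] corners.
    Assumes the LVIS boxes use the COCO convention -- check the original
    annotations before relying on this."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# For the sample row above:
# xywh_to_xyxy([392, 271, 14, 3]) -> [392, 271, 406, 274]
```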
alvations | null | null | null | false | 1 | false | alvations/stash | 2022-10-27T17:42:38.000Z | null | false | 789485c0380adfd5827130240fcb0f254ae08d0b | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/alvations/stash/resolve/main/README.md | ---
license: cc0-1.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-jnlpba-3af3e90f-1276248800 | 2022-08-18T18:35:34.000Z | null | false | 11ec4d8b90a795c91a8589d209e4738ded3529be | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jnlpba"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-jnlpba-3af3e90f-1276248800/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jnlpba
eval_info:
task: entity_extraction
model: siddharthtumre/biobert-finetuned-jnlpba
metrics: []
dataset_name: jnlpba
dataset_config: jnlpba
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: siddharthtumre/biobert-finetuned-jnlpba
* Dataset: jnlpba
* Config: jnlpba
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@siddharthtumre](https://huggingface.co/siddharthtumre) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-jnlpba-37dc127e-1276948841 | 2022-08-18T20:29:10.000Z | null | false | 77cf2b93667ded5b4fb8024ac0796cc062fe59a9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jnlpba"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-jnlpba-37dc127e-1276948841/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jnlpba
eval_info:
task: entity_extraction
model: siddharthtumre/biobert-finetuned-jnlpba-ner
metrics: []
dataset_name: jnlpba
dataset_config: jnlpba
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: siddharthtumre/biobert-finetuned-jnlpba-ner
* Dataset: jnlpba
* Config: jnlpba
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@siddharthtumre](https://huggingface.co/siddharthtumre) for evaluating this model. |
bccnf | null | null | null | false | 1 | false | bccnf/MeLiDC-shuffled-completo | 2022-08-18T21:46:33.000Z | null | false | 95f31ff9689ea4e38926ac1f41c7b6a27ec87695 | [] | [] | https://huggingface.co/datasets/bccnf/MeLiDC-shuffled-completo/resolve/main/README.md | MeLiDC COM shuffle e SEM retirar categorias menos comuns. |
allenai | null | null | null | false | 2 | false | allenai/multixscience_sparse_oracle | 2022-11-03T21:37:40.000Z | multi-xscience | false | fa7f08668bc5ae9f0f0b1241ce1114fb35dca3d1 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization",
"task_ids:summarization-other-paper-abstract-generation"
] | https://huggingface.co/datasets/allenai/multixscience_sparse_oracle/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- summarization-other-paper-abstract-generation
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
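The "oracle" top-k strategy amounts to truncating each ranked result list at the example's original document count. A minimal sketch of that step — the function and field names here are illustrative, not the actual pipeline code, and `retrieve` stands in for any ranked retriever such as BM25:

```python
def apply_oracle_top_k(examples, retrieve):
    """examples: list of dicts with a 'related_work' query string and an
    'input_docs' list. retrieve(query) -> ranked list of candidate doc ids.
    Replaces each example's inputs with the top-k retrieved documents,
    where k is the example's original number of input documents."""
    out = []
    for ex in examples:
        k = len(ex["input_docs"])  # the 'oracle' k for this example
        ranked = retrieve(ex["related_work"])
        out.append({**ex, "input_docs": ranked[:k]})
    return out
```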
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.548 | 0.2272 | 0.2272 | 0.2272 | |
ASCCCCCCCC | null | null | null | false | 1 | false | ASCCCCCCCC/bill | 2022-08-24T06:40:24.000Z | null | false | b432438c663e1c7dc4639fe6dda452021b6f2797 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ASCCCCCCCC/bill/resolve/main/README.md | ---
license: apache-2.0
---
|
0x7194633 | null | ..... | ..... | false | 22 | false | 0x7194633/ru-mc4-clean | 2022-08-22T08:41:42.000Z | null | false | e737dce8a76541a828c694906ac99be1abf72e72 | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:ru",
"license:apache-2.0",
"multilinguality:monolingual",
"task_categories:text-generation"
] | https://huggingface.co/datasets/0x7194633/ru-mc4-clean/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ru
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Ru Pile
task_categories:
- text-generation
---
| Subset | Size |
| ------ | ---- |
| micro | 900MB |
| tiny | 2.63GB |
| small | 8.78GB |
| medium | 26.36GB |
| large | 58.59GB |
| full | 117.77GB | |
AllenGeng | null | null | null | false | 1 | false | AllenGeng/NATEdataset | 2022-08-19T03:56:30.000Z | null | false | 36dc528c5957e4f584c593a618fcbf3ad1a1a7b7 | [] | [] | https://huggingface.co/datasets/AllenGeng/NATEdataset/resolve/main/README.md | |
shreyas-singh | null | null | null | false | 1 | false | shreyas-singh/autotrain-data-MedicalTokenClassification | 2022-08-19T06:52:29.000Z | null | false | 0bcde014603bb09066ea8f441edda07bbd08a4d0 | [] | [] | https://huggingface.co/datasets/shreyas-singh/autotrain-data-MedicalTokenClassification/resolve/main/README.md | ---
{}
---
# AutoTrain Dataset for project: MedicalTokenClassification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project MedicalTokenClassification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": "13104",
"tokens": [
"Jackie",
"Frank"
],
"feat_pos_tags": [
21,
21
],
"feat_chunk_tags": [
5,
16
],
"tags": [
3,
7
]
},
{
"feat_id": "9297",
"tokens": [
"U.S.",
"lauds",
"Russian-Chechen",
"deal",
"."
],
"feat_pos_tags": [
21,
20,
15,
20,
7
],
"feat_chunk_tags": [
5,
16,
16,
16,
22
],
"tags": [
0,
8,
1,
8,
8
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='string', id=None)",
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_pos_tags": "Sequence(feature=ClassLabel(num_classes=47, names=['\"', '#', '$', \"''\", '(', ')', ',', '.', ':', 'CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNP', 'NNPS', 'NNS', 'NN|SYM', 'PDT', 'POS', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'SYM', 'TO', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP', 'WP$', 'WRB', '``'], id=None), length=-1, id=None)",
"feat_chunk_tags": "Sequence(feature=ClassLabel(num_classes=23, names=['B-ADJP', 'B-ADVP', 'B-CONJP', 'B-INTJ', 'B-LST', 'B-NP', 'B-PP', 'B-PRT', 'B-SBAR', 'B-UCP', 'B-VP', 'I-ADJP', 'I-ADVP', 'I-CONJP', 'I-INTJ', 'I-LST', 'I-NP', 'I-PP', 'I-PRT', 'I-SBAR', 'I-UCP', 'I-VP', 'O'], id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(num_classes=9, names=['B-LOC', 'B-MISC', 'B-ORG', 'B-PER', 'I-LOC', 'I-MISC', 'I-ORG', 'I-PER', 'O'], id=None), length=-1, id=None)"
}
```
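Since `tags`, `feat_pos_tags`, and `feat_chunk_tags` store `ClassLabel` integer ids, decoding a row back to label strings is an index lookup into the corresponding `names` list. A sketch using the NER `names` list shown in the field description above:

```python
# The `names` list of the `tags` ClassLabel, copied from the field
# description above.
NER_NAMES = ["B-LOC", "B-MISC", "B-ORG", "B-PER",
             "I-LOC", "I-MISC", "I-ORG", "I-PER", "O"]

def decode_tags(tag_ids, names=NER_NAMES):
    """Map ClassLabel integer ids back to their string labels."""
    return [names[i] for i in tag_ids]

# First sample above: tokens ["Jackie", "Frank"], tags [3, 7]
# decode_tags([3, 7]) -> ["B-PER", "I-PER"]  (a single PER entity span)
```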
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 10014 |
| valid | 4028 |
|
PlanTL-GOB-ES | null | null | null | false | 4 | false | PlanTL-GOB-ES/WikiCAT_es | 2022-11-15T17:43:18.000Z | null | false | a06b32334da2ab8cfdd1b955996729e224869b82 | [] | [
"annotations_creators:automatically-generated",
"language_creators:found",
"language:es",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/PlanTL-GOB-ES/WikiCAT_es/resolve/main/README.md | ---
YAML tags:
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: wikicat_es
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# WikiCAT_es (Text Classification) Spanish dataset
## Dataset Description
- **Paper:**
- **Point of Contact:**
carlos.rodriguez1@bsc.es
**Repository**
https://github.com/TeMU-BSC/WikiCAT
### Dataset Summary
WikiCAT_es is a Spanish corpus for thematic text classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 11311 articles from the Spanish Wikipedia classified under 19 different categories.
This dataset was developed by BSC TeMU as part of the PlanTL project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora.
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
ES - Spanish
## Dataset Structure
### Data Instances
Two json files, one for each split.
### Data Fields
We used a simple model with the summary text and associated label, without further metadata.
#### Example:
<pre>
{"version": "1.1.0",
"data":
[
{
'sentence': 'La investigación de mercados es la herramienta necesaria para la identificación, acopio, análisis, difusión y aprovechamiento sistemático y objetivo de la información (...)',
'label': 'Negocios'
},
.
.
.
]
}
</pre>
#### Labels
'Deporte', 'Negocios', 'Tecnología', 'Historia', 'Humanidades', 'Entretenimiento', 'Filosofía', 'Naturaleza', 'Gobierno', 'Música', 'Ingeniería_por_tipo', 'Derecho', 'Ciencia', 'Guerra', 'Economía', 'Salud', 'Religión', 'Política', 'Matemáticas'
### Data Splits
* hftrain_es.json: 7909 label-document pairs
* hfeval_es.json: 3970 label-document pairs
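Given the JSON layout shown above, the split files can be read with the standard library alone. A minimal sketch (the helper names are ours, not part of the dataset):

```python
import json
from collections import Counter

def load_wikicat(path):
    # Expects the {"version": ..., "data": [{"sentence": ..., "label": ...}, ...]} layout
    with open(path, encoding="utf-8") as f:
        payload = json.load(f)
    return payload["data"]

def label_distribution(examples):
    # Count how many documents fall under each of the 19 categories
    return Counter(example["label"] for example in examples)
```

For example, `label_distribution(load_wikicat("hftrain_es.json"))` gives the per-category document counts of the training split.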
## Dataset Creation
### Methodology
Starting "Category:" pages are chosen to represent the topics in each language.
For each category, the main pages are extracted, together with the subcategories and the individual pages under these first-level subcategories.
For each page, the "summary" provided by Wikipedia is also extracted.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The source data are Wikipedia pages and thematic categories
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
Automatic annotation
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
### Citation Information
```
```
## Contact Information
For further information, send an email to encargo-pln-life@bsc.es.
## Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
## Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
## Funding
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
|
PlanTL-GOB-ES | null | null | null | false | 1 | false | PlanTL-GOB-ES/WikiCAT_en | 2022-11-15T17:44:17.000Z | null | false | e808df6558b7da64528253e41f1cfe3c55eaf571 | [] | [
"annotations_creators:automatically-generated",
"language_creators:found",
"language:en",
"license:cc-by-sa-3.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/PlanTL-GOB-ES/WikiCAT_en/resolve/main/README.md | ---
YAML tags:
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wikicat_en
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# WikiCAT_en (Text Classification) English dataset
## Dataset Description
- **Paper:**
- **Point of Contact:** carlos.rodriguez1@bsc.es
- **Repository:** https://github.com/TeMU-BSC/WikiCAT
### Dataset Summary
WikiCAT_en is an English corpus for thematic Text Classification tasks. It is created automatically from Wikipedia and Wikidata sources, and contains 28921 article summaries from Wikipedia classified under 19 different categories.
This dataset was developed by BSC TeMU as part of the PlanTL project, and is intended as an evaluation of LT capabilities to generate useful synthetic corpora.
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
EN - English
## Dataset Structure
### Data Instances
Two json files, one for each split.
### Data Fields
We used a simple data model with the article text and the associated labels, without further metadata.
#### Example:
<pre>
{"version": "1.1.0",
"data":
[
{
{'sentence': 'The IEEE Donald G. Fink Prize Paper Award was established in 1979 by the board of directors of the Institute of Electrical and Electronics Engineers (IEEE) in honor of Donald G. Fink. He was a past president of the Institute of Radio Engineers (IRE), and the first general manager and executive director of the IEEE. Recipients of this award received a certificate and an honorarium. The award was presented annually since 1981 and discontinued in 2016.', 'label': 'Engineering'
},
.
.
.
]
}
</pre>
#### Labels
'Health', 'Law', 'Entertainment', 'Religion', 'Business', 'Science', 'Engineering', 'Nature', 'Philosophy', 'Economy', 'Sports', 'Technology', 'Government', 'Mathematics', 'Military', 'Humanities', 'Music', 'Politics', 'History'
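For multi-class training these string labels are typically mapped to integer ids. A small sketch; the ordering below simply follows the list above and is not an official encoding:

```python
# The 19 WikiCAT_en categories, in the order listed above (ordering is arbitrary)
LABELS = ['Health', 'Law', 'Entertainment', 'Religion', 'Business', 'Science',
          'Engineering', 'Nature', 'Philosophy', 'Economy', 'Sports', 'Technology',
          'Government', 'Mathematics', 'Military', 'Humanities', 'Music',
          'Politics', 'History']

label2id = {label: i for i, label in enumerate(LABELS)}
id2label = {i: label for label, i in label2id.items()}
```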
### Data Splits
* hftrain_en.json: 20237 label-document pairs
* hfeval_en.json: 8684 label-document pairs
## Dataset Creation
### Methodology
Starting "Category:" pages are chosen to represent the topics in each language.
For each category, the main pages are extracted, together with the subcategories and the individual pages under these first-level subcategories.
For each page, the "summary" provided by Wikipedia is also extracted.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The source data are Wikipedia page summaries and thematic categories
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
Automatic annotation
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
### Citation Information
```
```
### Contact Information
For further information, send an email to encargo-pln-life@bsc.es.
## Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
## Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
## Funding
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
|
IDEA-CCNL | null | \ | \ | false | 17 | false | IDEA-CCNL/PretrainCorpusDemo | 2022-09-28T18:11:38.000Z | null | false | 568988dc0cfa7506819b0f54cd2b6d27ce73b557 | [] | [
"arxiv:2209.02970",
"license:apache-2.0"
] | https://huggingface.co/datasets/IDEA-CCNL/PretrainCorpusDemo/resolve/main/README.md | ---
license: apache-2.0
---
For demo use only
# PretrainCorpusDemo
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using this resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
 |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-emotion-a34266d3-1280948985 | 2022-08-19T11:42:12.000Z | null | false | 8923e1a7979d14ef39b339b0191260fd5fd725d2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-emotion-a34266d3-1280948985/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: Ahmed007/distilbert-base-uncased-finetuned-emotion
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Ahmed007/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
NX2411 | null | null | null | false | 10 | false | NX2411/mydataset-only-test | 2022-08-19T12:03:00.000Z | null | false | ced6ce1642942f9b258becd0914554cc8e6808bf | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/NX2411/mydataset-only-test/resolve/main/README.md | ---
license: apache-2.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-07c07057-797e-4d34-8fcb-023957860774-7467 | 2022-08-19T12:04:17.000Z | null | false | 90261ba9395fb29be9287b5b961a6908f01a0cc6 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-07c07057-797e-4d34-8fcb-023957860774-7467/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: autoevaluate/natural-language-inference
metrics: []
dataset_name: glue
dataset_config: mrpc
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: autoevaluate/natural-language-inference
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6e415fa8-612b-4f91-8605-a10cd0c88147-7568 | 2022-08-19T12:08:09.000Z | null | false | 4a1c01327dac9ee8a68f09a4b4d6611a853aa180 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6e415fa8-612b-4f91-8605-a10cd0c88147-7568/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: autoevaluate/natural-language-inference
metrics: []
dataset_name: glue
dataset_config: mrpc
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: autoevaluate/natural-language-inference
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-dd7fa31c-e9a7-4d4e-81bc-102bff5d38c4-3721 | 2022-08-19T12:57:42.000Z | null | false | 20e767bc523d5a5e7044e14ee332f8f1b5e5e2a1 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-dd7fa31c-e9a7-4d4e-81bc-102bff5d38c4-3721/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: autoevaluate/natural-language-inference
metrics: []
dataset_name: glue
dataset_config: mrpc
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: autoevaluate/natural-language-inference
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-6258c8ab-61ff-4bb1-984c-d291ce97e844-3923 | 2022-08-19T13:29:48.000Z | null | false | ad59c039a59e7e4c757dc44fa9e9aaaea8d7a4e7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6258c8ab-61ff-4bb1-984c-d291ce97e844-3923/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: autoevaluate/natural-language-inference
metrics: []
dataset_name: glue
dataset_config: mrpc
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: autoevaluate/natural-language-inference
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-samsum-ede55545-13415852 | 2022-08-19T13:57:07.000Z | null | false | ca3c9475c9b6443bf5aa58b433dfd9fa1dc334fd | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-samsum-ede55545-13415852/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: google/bigbird-pegasus-large-arxiv
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
tartuNLP | null | null | null | false | 1 | false | tartuNLP/finno-ugric-benchmark | 2022-08-19T14:59:01.000Z | null | false | a57ff00e419ae9df924eec3006b3afd573fe3d80 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/tartuNLP/finno-ugric-benchmark/resolve/main/README.md | ---
license: cc-by-4.0
---
|
jakartaresearch | null | null | This dataset is built as a playground for beginner to make a translation model for Indonesian and English. | false | 1 | false | jakartaresearch/inglish | 2022-08-19T15:23:15.000Z | null | false | 460772fb9f8ebdea9a826a863f8d08f398ecca89 | [] | [
"annotations_creators:machine-generated",
"language:id",
"language:en",
"language_creators:machine-generated",
"license:cc-by-4.0",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:indonesian",
"tags:english",
"tags:translation",
"task_categories:t... | https://huggingface.co/datasets/jakartaresearch/inglish/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- id
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- translation
pretty_name: 'Inglish: Indonesian English Machine Translation Dataset'
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- indonesian
- english
- translation
task_categories:
- translation
task_ids: []
---
# Dataset Card for Inglish: Indonesian English Translation Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The original data comes from the MSRP dataset. The translations were generated with Google Translate.
If you find any errors in the translations, feel free to open a new discussion.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
English - Indonesian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. |
BigBang | null | null | null | false | 1 | false | BigBang/galaxyzoo-decals | 2022-08-29T18:03:24.000Z | null | false | 66772c4cf2360e5fdd3a974883fe12d3a64a0038 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/BigBang/galaxyzoo-decals/resolve/main/README.md | ---
license: cc-by-4.0
---
# Galaxy Zoo DECaLS: Detailed Visual Morphology Measurements from Volunteers and Deep Learning for 314,000 Galaxies
- https://github.com/mwalmsley/zoobot
- https://zenodo.org/record/4573248
# Dataset Schema
This schema describes the columns in the GZ DECaLS catalogues; `gz_decals_auto_posteriors`, `gz_decals_volunteers_1_and_2`, and `gz_decals_volunteers_5`.
In all catalogues, galaxies are identified by their `iauname`. Galaxies are unique within a catalogue. `gz_decals_auto_posteriors` contains all galaxies with appropriate imaging and photometry in DECaLS DR5, while `gz_decals_volunteers_1_and_2`, and `gz_decals_volunteers_5` contain subsets classified by volunteers in the respective campaigns.
The columns reporting morphology measurements are named like `{some-question}_{an-answer}`. For example, for the first question, both volunteer catalogues include the following:
| Column | Description |
| ----------- | ----------- |
| smooth-or-featured_total | Total number of volunteers who answered the "Smooth or Featured" question |
| smooth-or-featured_smooth | Count of volunteers who responded "Smooth" to the "Smooth or Featured" question |
| smooth-or-featured_featured-or-disk | Count of volunteers who responded "Featured or Disk", similarly |
| smooth-or-featured_artifact | Count of volunteers who responded "Artifact", similarly |
| smooth-or-featured_smooth_fraction | Fraction of volunteers who responded "Smooth" to the "Smooth or Featured" question, out of all responses (i.e. smooth count / total) |
| smooth-or-featured_featured-or-disk_fraction | Fraction of volunteers who responded "Featured or Disk", similarly |
| smooth-or-featured_artifact_fraction | Fraction of volunteers who responded "Artifact", similarly |
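The `_fraction` columns are simply each answer's count divided by the question total. The relation can be sketched as follows (the helper name is ours):

```python
def vote_fractions(counts):
    """Turn per-answer volunteer counts for one question into vote fractions.

    `counts` maps answer name -> count, e.g. the values of the
    smooth-or-featured_{smooth,featured-or-disk,artifact} columns.
    """
    total = sum(counts.values())
    if total == 0:
        # No respondents: define all fractions as zero rather than divide by zero
        return {answer: 0.0 for answer in counts}
    return {answer: n / total for answer, n in counts.items()}
```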
The questions and answers are slightly different for `gz_decals_volunteers_1_and_2` than `gz_decals_volunteers_5`. See the paper for more.
The volunteer catalogues include `{question}_{answer}_debiased` columns which attempt to estimate what the vote fractions would be if the same galaxy were imaged at lower redshift. See the paper for more. Note that the debiased measurements are highly uncertain on an individual galaxy basis and therefore should be used with caution. Debiased estimates are only available for galaxies with 0.02<z<0.15, -21.5>M_r>-23, and at least 30 votes for the first question (`Smooth or Featured`) after volunteer weighting.
The automated catalogue, `gz_decals_auto_posteriors`, includes predictions for all galaxies and all questions even when that question may not be appropriate (e.g. number of spiral arms for a smooth elliptical). To assess relevance, we include `{question}_proportion_volunteers_asked` columns showing the estimated fraction of volunteers that would have been asked each question (i.e. the product of the vote fractions for the preceding answers). We suggest a cut of `{question}_proportion_volunteers_asked` > 0.5 as a starting point.
The automated catalogue does not include volunteer counts or totals (naturally).
Each catalogue includes a pair of columns to warn where galaxies may have been classified using an inappropriately large field-of-view (due to incorrect radii measurements in the NSA, on which the field-of-view is calculated). We suggest excluding galaxies (<1%) with such warnings.
| Column | Description |
| ----------- | ----------- |
| wrong_size_statistic | Mean distance from center of all pixels above double the 20th percentile (i.e. probable source pixels) |
| wrong_size_warning | True if wrong_size_statistic > 161.0, our suggested starting cut. Approximately the mean distance of all pixels from center|
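Applying the suggested starting cut is straightforward; a sketch over plain dict rows (a real analysis would more likely apply the same filter to a pandas DataFrame):

```python
def exclude_wrong_size(rows, threshold=161.0):
    """Drop galaxies flagged by the suggested wrong-size cut (statistic > 161.0)."""
    return [row for row in rows if row["wrong_size_statistic"] <= threshold]
```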
For convenience, each catalogue includes the same set of basic astrophysical measurements copied from the NASA Sloan Atlas (NSA). Additional measurements can be added by crossmatching on `iauname` with the NSA. See [here](https://data.sdss.org/datamodel/files/ATLAS_DATA/ATLAS_MAJOR_VERSION/nsa.html) for the NSA schema. If you use these columns, you should cite the NSA.
| Column | Description |
| ----------- | ----------- |
| ra | Right ascension (degrees) |
| dec | Declination (degrees) |
| iauname | Unique identifier listed in NSA v1.0.1 |
| petro_theta | "Azimuthally-averaged SDSS-style Petrosian radius (derived from r band" |
| petro_th50 | "Azimuthally-averaged SDSS-style 50% light radius (r-band)" |
| petro_th90 | "Azimuthally-averaged SDSS-style 90% light radius (r-band)" |
| elpetro_absmag_r | "Absolute magnitude from elliptical Petrosian fluxes in rest-frame" in SDSS r |
| sersic_nmgy_r | "Galactic-extinction corrected AB flux" in SDSS r |
| redshift | "Heliocentric redshift" ("z" column in NSA) |
| mag_r | 22.5 - 2.5 log10(sersic_nmgy_r). *Not* the same as the NSA mag column! |
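The `mag_r` column follows the standard nanomaggies-to-AB-magnitude conversion, mag = 22.5 - 2.5 * log10(flux). As a sketch:

```python
import math

def nanomaggies_to_mag(flux_nmgy):
    # AB magnitude from a flux in nanomaggies: mag_r = 22.5 - 2.5 * log10(sersic_nmgy_r)
    return 22.5 - 2.5 * math.log10(flux_nmgy)
```

A 1 nmgy source has magnitude 22.5, and a source 100x brighter is 5 magnitudes brighter (17.5).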
```
@dataset{walmsley_mike_2020_4573248,
author = {Walmsley, Mike and
Lintott, Chris and
Tobias, Geron and
Kruk, Sandor J and
Krawczyk, Coleman and
Willett, Kyle and
Bamford, Steven and
Kelvin, Lee S and
Fortson, Lucy and
Gal, Yarin and
Keel, William and
Masters, Karen and
Mehta, Vihang and
Simmons, Brooke and
Smethurst, Rebecca J and
Smith, Lewis and
Baeten, Elisabeth M L and
Macmillan, Christine},
title = {{Galaxy Zoo DECaLS: Detailed Visual Morphology
Measurements from Volunteers and Deep Learning for
314,000 Galaxies}},
month = dec,
year = 2020,
publisher = {Zenodo},
version = {0.0.2},
doi = {10.5281/zenodo.4573248},
url = {https://doi.org/10.5281/zenodo.4573248}
}
``` |
npc-engine | null | null | null | false | 1 | false | npc-engine/light-batch-summarize-dialogue | 2022-08-20T18:18:10.000Z | null | false | 4c2d2919d8e2292de2350c931758c7c24a0c51d7 | [] | [
"license:mit",
"language:en"
] | https://huggingface.co/datasets/npc-engine/light-batch-summarize-dialogue/resolve/main/README.md | ---
license: mit
language: en
---
# [Light dataset](https://parl.ai/projects/light/) prepared for zero-shot summarization.
Dialogues are preprocessed into the following form:
```
<Character name>: <character line>
...
<Character name>: <character line>
Summarize the document
```
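The preprocessing above takes only a few lines to reproduce; a sketch (the function name is ours):

```python
def to_summarization_prompt(turns):
    """Render (character name, line) pairs into the zero-shot summarization format."""
    lines = [f"{name}: {line}" for name, line in turns]
    lines.append("Summarize the document")
    return "\n".join(lines)
```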
|
tartuNLP | null | null | null | false | 1 | false | tartuNLP/EstCOPA | 2022-10-31T10:17:40.000Z | null | false | e293f374f7091dadb2c96a9f44f830dc9c7bbe31 | [] | [
"annotations_creators:expert-generated",
"language:et",
"language_creators:expert-generated",
"language_creators:machine-generated",
"license:cc-by-4.0",
"multilinguality:monolingual",
"multilinguality:translation",
"size_categories:n<1K",
"source_datasets:extended|xcopa",
"task_categories:questio... | https://huggingface.co/datasets/tartuNLP/EstCOPA/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- et
language_creators:
- expert-generated
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
- translation
pretty_name: EstCOPA
size_categories:
- n<1K
source_datasets:
- extended|xcopa
tags: []
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for EstCOPA
### Dataset Summary
EstCOPA is an extended version of [XCOPA](https://huggingface.co/datasets/xcopa) created to further investigate the Estonian language understanding of large language models. EstCOPA provides two new Estonian versions of the train, eval and test sets: first, a machine-translated (En->Et) version of the original English COPA ([Roemmele et al., 2011](http://commonsensereasoning.org/2011/papers/Roemmele.pdf)), and second, a manually post-edited version of the same machine-translated data.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- et
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the dataset in your work, please cite
```
@article{kuulmets_estcopa_2022,
title={Estonian Language Understanding: a Case Study on the COPA Task},
volume={10},
DOI={https://doi.org/10.22364/bjmc.2022.10.3.19}, number={3},
journal={Baltic Journal of Modern Computing},
author={Kuulmets, Hele-Andra and Tättar, Andre and Fishel, Mark},
year={2022},
pages={470–480}
}
```
### Contributions
Thanks to [@helehh](https://github.com/helehh) for adding this dataset.
|
nanelimon | null | null | null | false | 1 | false | nanelimon/turkish-social-media-bullying-dataset | 2022-08-20T09:57:56.000Z | null | false | f083f58ded9e934c906dac78fd03f13421221544 | [] | [
"license:mit"
] | https://huggingface.co/datasets/nanelimon/turkish-social-media-bullying-dataset/resolve/main/README.md | ---
license: mit
---
# Overview
It is a 4-class Turkish bullying dataset obtained from Twitter.
| Cinsiyetçilik (Sexism) | Irkçılık (Racism) | Kızdırma (Teasing) | Nötr (Neutral) | Sum |
| ------ | ------ | ------ | ------ | ------ |
| 601 | 490 | 910 | 1387 | 3388 |
## Authors
- Seyma SARIGIL: seymasargil@gmail.com
- Elif SARIGIL KARA: elifsarigil@gmail.com
- Murat KOKLU: mkoklu@selcuk.edu.tr
- Alaaddin Erdinç DAL: aerdincdal@icloud.com
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-squad-4b228794-1283349088 | 2022-08-19T21:31:08.000Z | null | false | ae0b477362fd961c4d67b740e1ad9b218900d640 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad-4b228794-1283349088/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: nbroad/xdistil-l12-h384-squad2
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/xdistil-l12-h384-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 6 | false | autoevaluate/autoeval-eval-project-squad-4b228794-1283349089 | 2022-08-19T21:31:53.000Z | null | false | db528c7c35bef1c06371d03a5cac7926d3bf9d5d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad-4b228794-1283349089/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: nbroad/deberta-v3-xsmall-squad2
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/deberta-v3-xsmall-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
djaym7 | null | @inproceedings{dai2022dialoginpainting,
title={Dialog Inpainting: Turning Documents to Dialogs},
author={Dai, Zhuyun and Chaganty, Arun Tejasvi and Zhao, Vincent and Amini, Aida and Green, Mike and Rashid, Qazi and Guu, Kelvin},
booktitle={International Conference on Machine Learning (ICML)},
year={2022},
organization={PMLR}
} | WikiDialog is a large dataset of synthetically generated information-seeking
conversations. Each conversation in the dataset contains two speakers grounded
in a passage from English Wikipedia: one speaker’s utterances consist of exact
sentences from the passage; the other speaker is generated by a large language
model. | false | 10 | false | djaym7/wiki_dialog | 2022-08-20T02:36:29.000Z | null | false | d9edf5a5e28bbde9ba3989e44e5566809aa40157 | [] | [] | https://huggingface.co/datasets/djaym7/wiki_dialog/resolve/main/README.md | # I've just ported the dataset from tfds to huggingface. All credit goes to the original authors; the readme is copied from https://github.com/google-research/dialog-inpainting/blob/main/README.md
Load it with the Hugging Face `datasets` library using:
`dataset = datasets.load_dataset('djaym7/wiki_dialog', 'OQ', beam_runner='DirectRunner')`
# Dialog Inpainting: Turning Documents into Dialogs
## Abstract
Many important questions (e.g. "How to eat healthier?") require conversation to establish context and explore in depth.
However, conversational question answering (ConvQA) systems have long been stymied by scarce training data that is expensive to collect.
To address this problem, we propose a new technique for synthetically generating diverse and high-quality dialog data: *dialog inpainting*.
Our approach takes the text of any document and transforms it into a two-person dialog between the writer and an imagined reader:
we treat sentences from the article as utterances spoken by the writer, and then use a dialog inpainter to predict what the imagined reader asked or said in between each of the writer's utterances.
By applying this approach to passages from Wikipedia and the web, we produce `WikiDialog` and `WebDialog`, two datasets totalling 19 million diverse information-seeking dialogs---1,000x larger than the largest existing ConvQA dataset.
Furthermore, human raters judge the *answer adequacy* and *conversationality* of `WikiDialog` to be as good or better than existing manually-collected datasets.
Using our inpainted data to pre-train ConvQA retrieval systems, we significantly advance state-of-the-art across three benchmarks (`QReCC`, `OR-QuAC`, `TREC CaST`) yielding up to 40\% relative gains on standard evaluation metrics.
## Disclaimer
This is not an officially supported Google product.
# `WikiDialog-OQ`
We are making `WikiDialog-OQ`, a dataset containing 11M information-seeking conversations from passages in English Wikipedia, publicly available.
Each conversation was generated using the dialog inpainting method detailed in the paper using the `Inpaint-OQ` inpainter model, a T5-XXL model that was fine-tuned on `OR-QuAC` and `QReCC` using a dialog reconstruction loss. For a detailed summary of the dataset, please refer to the [data card](WikiDialog-OQ_Data_Card.pdf).
The passages in the dataset come from the `OR-QuAC` retrieval corpus and share passage ids.
You can download the `OR-QuAC` dataset and find more details about it [here](https://github.com/prdwb/orconvqa-release).
## Download the raw JSON format data.
The dataset can be downloaded in (gzipped) JSON format from Google Cloud using the following commands:
```bash
# Download validation data (72Mb)
wget https://storage.googleapis.com/gresearch/dialog-inpainting/WikiDialog_OQ/data_validation.jsonl.gz
# Download training data (100 shards, about 72Mb each)
wget $(seq -f "https://storage.googleapis.com/gresearch/dialog-inpainting/WikiDialog_OQ/data_train.jsonl-%05g-of-00099.gz" 0 99)
```
Each line contains a single conversation serialized as a JSON object, for example:
```json
{
"pid": "894686@1",
"title": "Mother Mary Alphonsa",
"passage": "Two years after Nathaniel's death in 1864, Rose was enrolled at a boarding school run by Diocletian Lewis in nearby Lexington, Massachusetts; she disliked the experience. After Nathaniel's death, the family moved to Germany and then to England. Sophia and Una died there in 1871 and 1877, respectively. Rose married author George Parsons Lathrop in 1871. Prior to the marriage, Lathrop had shown romantic interest in Rose's sister Una. Their brother...",
"sentences": [
"Two years after Nathaniel's death in 1864, Rose was enrolled at a boarding school run by Diocletian Lewis in nearby Lexington, Massachusetts; she disliked the experience.",
"After Nathaniel's death, the family moved to Germany and then to England.",
"Sophia and Una died there in 1871 and 1877, respectively.",
"Rose married author George Parsons Lathrop in 1871.",
"Prior to the marriage, Lathrop had shown romantic interest in Rose's sister Una.",
"..."],
"utterances": [
"Hi, I'm your automated assistant. I can answer your questions about Mother Mary Alphonsa.",
"What was Mother Mary Alphonsa's first education?",
"Two years after Nathaniel's death in 1864, Rose was enrolled at a boarding school run by Diocletian Lewis in nearby Lexington, Massachusetts; she disliked the experience.",
"Did she stay in the USA?",
"After Nathaniel's death, the family moved to Germany and then to England.",
"Why did they move?",
"Sophia and Una died there in 1871 and 1877, respectively.",
"..."],
"author_num": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
}
```
The fields are:
* `pid (string)`: a unique identifier of the passage that corresponds to the passage ids in the public OR-QuAC dataset.
* `title (string)`: Title of the source Wikipedia page for `passage`
* `passage (string)`: A passage from English Wikipedia
* `sentences (list of strings)`: A list of all the sentences that were segmented from `passage`.
* `utterances (list of strings)`: A synthetic dialog generated from `passage` by our Dialog Inpainter model. The list contains alternating utterances from each speaker (`[utterance_1, utterance_2, …, utterance_n]`). In this dataset, the first utterance is a "prompt" that was provided to the model, and every alternating utterance is a sentence from the passage.
* `author_num (list of ints)`: a list of integers indicating the author number in `text`. `[utterance_1_author, utterance_2_author, …, utterance_n_author]`. Author numbers are either 0 or 1.
Note that the dialog in `utterances` only uses the first 6 sentences of the passage; the remaining sentences are provided in the `sentences` field and can be used to extend the dialog.
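As a minimal sketch of how the fields fit together, the snippet below pairs each utterance with its author number; the record is an abbreviated, hypothetical version of the example above, not an exact row from the dataset:

```python
import json

# One abbreviated WikiDialog-OQ-style record (utterances shortened for illustration).
line = json.dumps({
    "pid": "894686@1",
    "title": "Mother Mary Alphonsa",
    "utterances": [
        "Hi, I'm your automated assistant.",
        "What was Mother Mary Alphonsa's first education?",
        "Rose was enrolled at a boarding school run by Diocletian Lewis.",
    ],
    "author_num": [0, 1, 0],
})

record = json.loads(line)

# Pair each utterance with its author number (speakers alternate 0/1).
turns = list(zip(record["author_num"], record["utterances"]))
for author, utterance in turns:
    print(f"speaker {author}: {utterance}")
```

The same loop works on each line of the `data_*.jsonl` files once decompressed.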
## Download the processed dataset via [TFDS](https://www.tensorflow.org/datasets/catalog/wiki_dialog).
First, install the [`tfds-nightly`](https://www.tensorflow.org/datasets/overview#installation) package and other dependencies.
```bash
pip install -q tfds-nightly tensorflow apache_beam
```
After installation, load the `WikiDialog-OQ` dataset using the following snippet:
```python
>>> import tensorflow_datasets as tfds
>>> dataset, info = tfds.load('wiki_dialog/OQ', with_info=True)
>>> info
tfds.core.DatasetInfo(
name='wiki_dialog',
full_name='wiki_dialog/OQ/1.0.0',
description="""
WikiDialog is a large dataset of synthetically generated information-seeking
conversations. Each conversation in the dataset contains two speakers grounded
in a passage from English Wikipedia: one speaker’s utterances consist of exact
sentences from the passage; the other speaker is generated by a large language
model.
""",
config_description="""
WikiDialog generated from the dialog inpainter finetuned on OR-QuAC and QReCC. `OQ` stands for OR-QuAC and QReCC.
""",
homepage='https://www.tensorflow.org/datasets/catalog/wiki_dialog',
data_path='/placer/prod/home/tensorflow-datasets-cns-storage-owner/datasets/wiki_dialog/OQ/1.0.0',
file_format=tfrecord,
download_size=7.04 GiB,
dataset_size=36.58 GiB,
features=FeaturesDict({
'author_num': Sequence(tf.int32),
'passage': Text(shape=(), dtype=tf.string),
'pid': Text(shape=(), dtype=tf.string),
'sentences': Sequence(Text(shape=(), dtype=tf.string)),
'title': Text(shape=(), dtype=tf.string),
'utterances': Sequence(Text(shape=(), dtype=tf.string)),
}),
supervised_keys=None,
disable_shuffling=False,
splits={
'train': <SplitInfo num_examples=11264129, num_shards=512>,
'validation': <SplitInfo num_examples=113822, num_shards=4>,
},
citation="""""",
)
```
## Citing WikiDialog
```
@inproceedings{dai2022dialoginpainting,
title={Dialog Inpainting: Turning Documents to Dialogs},
author={Dai, Zhuyun and Chaganty, Arun Tejasvi and Zhao, Vincent and Amini, Aida and Green, Mike and Rashid, Qazi and Guu, Kelvin},
booktitle={International Conference on Machine Learning (ICML)},
year={2022},
organization={PMLR}
}
``` |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-yhavinga__cnn_dailymail_dutch-88133136-1284849222 | 2022-08-20T11:39:44.000Z | null | false | 2f80dbe421217fa8213f66f1b3f01613664423f9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:yhavinga/cnn_dailymail_dutch"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-yhavinga__cnn_dailymail_dutch-88133136-1284849222/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- yhavinga/cnn_dailymail_dutch
eval_info:
task: summarization
model: yhavinga/long-t5-tglobal-small-dutch-cnn-bf16-test
metrics: []
dataset_name: yhavinga/cnn_dailymail_dutch
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/long-t5-tglobal-small-dutch-cnn-bf16-test
* Dataset: yhavinga/cnn_dailymail_dutch
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. |
VanHoan | null | null | null | false | 1 | false | VanHoan/github-issues | 2022-08-20T12:30:24.000Z | null | false | 5f2b4d3f3847eff692773ccd0e9b92e97abfb269 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:Github",
"task_categories:table-question-answering",
"task_categories:fill-mask",
"task_ids:masked-language-mod... | https://huggingface.co/datasets/VanHoan/github-issues/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- machine-generated
license: []
multilinguality:
- monolingual
pretty_name: "From Ray with \u2764\uFE0F"
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- Github
task_categories:
- table-question-answering
- fill-mask
task_ids:
- masked-language-modeling
---
# Dataset Card for GitHub-Issues
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
fidsinn | null | null | null | false | 1 | false | fidsinn/future-statements | 2022-08-29T19:19:49.000Z | null | false | ded5969e45d8056a80eac52c4b04d233d316bc92 | [] | [
"tags:future",
"language:en",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/fidsinn/future-statements/resolve/main/README.md | ---
tags:
- future
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: Future Statements
---
# Dataset Card for Future Statements Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Motivation](#dataset-motivation)
- [Dataset Composition](#dataset-composition)
- [Dataset Collection Process](#dataset-collection-process)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Dataset Uses](#dataset-uses)
- [Dataset Maintenance](#dataset-maintenance)
## Dataset Description
The Future Statements Dataset is an English language dataset containing 2500 statements, 50% of which relate to future events and 50% of which relate to non-future events. The statements were collected manually and programmatically from several websites and datasets. The labels were set manually or programmatically (including corresponding manual examination of the labels).
**The statements within the dataset do not reflect any personal opinion of the creators of the dataset.**
## Dataset Motivation
- The sole purpose of this dataset was to fine-tune the [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model into our [distilbert-base-future](https://huggingface.co/fidsinn/distilbert-base-future) model.
- The dataset was created by students from the University of Leipzig (Germany) in the Big Data and Language Technologies Module of the [Webis Group](https://huggingface.co/webis).
## Dataset Composition
- The instances represent single- or multi-sentence statements from following sources (unequally distributed):
- http://www.kaggle.com/unitednations/un-general-debates
- http://data.world/ian/united-nations-general-debate-corpus
- http://gadebate.un.org/
- http://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/0TJX8Y
- http://www.wsj.com/
- http://www.vox.com/
- http://futechblog.com/
- http://www.weforum.org/
- http://wired.com/
- http://openai.com/blog/
- http://techcrunch.com/
- http://futurism.com
- The dataset consists of 2500 statements in total, 50% of which relate to future events and 50% of which relate to non-future events.
- The label is represented by the 'future'-column:
- 0: No future statement
- 1: Future statement
- Noise, Biases and redundancies:
- The main goal of the data collection process was to find future statements and general statements in equal amounts. The thematic content within the statements can be redundant, and some topics may be much more prominent than others. The dataset was not created for working with the thematic content, but only for fine-tuning an already existing model into one that is sensitive to future and non-future statements.
- The data in the 'statement'-column is publicly available and does not contain confidential information.
- The data in the 'statement'-column can contain material that might be offensive, insulting, threatening, or might otherwise cause anxiety, since it was collected from several online sources. However, this is unlikely because the data was collected from reputable sites.
## Dataset Collection Process
- The data was directly observable on the websites mentioned in the section above.
- The data was collected manually and programmatically (using Python's NLTK library for automatic sentence extraction and regex filtering).
- The data was collected by graduate students [D. Baradari](https://huggingface.co/Dunya), [F. Bartels](https://huggingface.co/fidsinn), A. Dewald, [J. Peters](https://huggingface.co/jpeters92) as part of a data science module of the University of Leipzig.
- The data was collected between 06/2022 and 07/2022, but the content of the dataset is independent of the collection period and may stem from earlier periods.
## Dataset Preprocessing
## Dataset Uses
- The future-statements dataset has been used for the purpose of fine-tuning the [distilbert-base-future](https://huggingface.co/fidsinn/distilbert-base-future) model.
- Further uses were not intended and are not planned in the future.
- The dataset is not intended to be used for any kind of content analysis, because it is unequally distributed across topics and was neither designed nor evaluated for such use. It was intended only for fine-tuning purposes in natural language processing.
## Dataset Maintenance
- Curators of the dataset can be contacted via the [community tab](https://huggingface.co/datasets/fidsinn/future-statements/discussions)
- It is not planned to update the dataset for further work or investigations. |
autoevaluate | null | null | null | false | 6 | false | autoevaluate/autoeval-eval-project-ml6team__cnn_dailymail_nl-7b67cb71-1286049228 | 2022-08-20T17:52:18.000Z | null | false | 3a97b8cc111c046a8563072d2f5a794efc889902 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:ml6team/cnn_dailymail_nl"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-ml6team__cnn_dailymail_nl-7b67cb71-1286049228/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ml6team/cnn_dailymail_nl
eval_info:
task: summarization
model: yhavinga/t5-v1.1-large-dutch-cnn-test
metrics: []
dataset_name: ml6team/cnn_dailymail_nl
dataset_config: default
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/t5-v1.1-large-dutch-cnn-test
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. |
abid | null | null | null | false | 2 | false | abid/indonesia-bioner-dataset | 2022-09-02T06:16:26.000Z | null | false | 6b38e31fde7c954f7e69566999fcd6ef2746b524 | [] | [
"license:bsd-3-clause-clear"
] | https://huggingface.co/datasets/abid/indonesia-bioner-dataset/resolve/main/README.md | ---
license: bsd-3-clause-clear
---
### Indonesia BioNER Dataset
This dataset was taken from the online health consultation platform Alodokter.com and has been annotated by two medical doctors. The data were annotated using the IOB scheme in CoNLL format.
The dataset contains 2600 medical answers given by doctors from 2017 to 2020. Two medical experts were assigned to annotate the data into two entity types: DISORDERS and ANATOMY. The topics of the answers are diarrhea, HIV-AIDS, nephrolithiasis and TBC (tuberculosis), which are marked as high-risk topics by the WHO.
This work was made possible by the generous support of Dr. Diana Purwitasari and Safitri Juanita.
> Note: this data is provided as is in Bahasa Indonesia. No translations are provided.
| File | Amount |
|-------------|--------|
| train.conll | 1950 |
| valid.conll | 260 |
| test.conll | 390 | |
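As a minimal sketch of working with the IOB-tagged CoNLL lines in these files, the snippet below groups "token tag" lines into entity spans. The tokens are hypothetical examples; the tag set (DISORDERS, ANATOMY) matches the entity types described above:

```python
# Hypothetical IOB-tagged lines in the "token tag" CoNLL layout.
sample = """\
Keluhan O
diare B-DISORDERS
kronis I-DISORDERS
pada O
perut B-ANATOMY
"""

def extract_entities(conll_text):
    """Return (entity_type, text) spans from "token tag" lines."""
    entities = []
    tokens, etype = [], None
    for raw in conll_text.splitlines():
        parts = raw.split()
        tag = parts[1] if len(parts) == 2 else "O"  # a blank line ends any span
        if tag.startswith("I-") and tokens and tag[2:] == etype:
            tokens.append(parts[0])
            continue
        if tokens:  # close the previous span
            entities.append((etype, " ".join(tokens)))
            tokens, etype = [], None
        if tag.startswith("B-"):
            etype, tokens = tag[2:], [parts[0]]
    if tokens:
        entities.append((etype, " ".join(tokens)))
    return entities

print(extract_entities(sample))
# → [('DISORDERS', 'diare kronis'), ('ANATOMY', 'perut')]
```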
gzinzi | null | null | null | false | 2 | false | gzinzi/miles | 2022-08-20T22:12:34.000Z | null | false | 8deaeb5b10cbf1a2bf15d4d4947b5e8cebbd1785 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/gzinzi/miles/resolve/main/README.md | ---
license: afl-3.0
---
|
OlegKit | null | null | null | false | 1 | false | OlegKit/RND | 2022-08-21T03:29:17.000Z | null | false | b259a3561b751e4a87261dba40119d03fdc20817 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/OlegKit/RND/resolve/main/README.md | ---
license: afl-3.0
---
|
ariesutiono | null | null | null | false | 1 | false | ariesutiono/entailment-bank-v3 | 2022-08-21T06:05:29.000Z | null | false | 2d1b8010d08c2e6ce17c4879447b9a3ce7531d5e | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/ariesutiono/entailment-bank-v3/resolve/main/README.md |
---
license: cc-by-4.0
---
# Entailment bank dataset
The raw source of this dataset can be found at [allenai's GitHub](https://github.com/allenai/entailment_bank/).
If you use this dataset, please cite the original paper:
```
@article{entalmentbank2021,
title={Explaining Answers with Entailment Trees},
author={Dalvi, Bhavana and Jansen, Peter and Tafjord, Oyvind and Xie, Zhengnan and Smith, Hannah and Pipatanangkura, Leighanna and Clark, Peter},
journal={EMNLP},
year={2021}
}
``` |
kumapo | null | @InProceedings{Yoshikawa2017,
title = {STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
month = {July},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
pages = {417--421},
url = {http://www.aclweb.org/anthology/P17-2066}
} | COCO is a large-scale object detection, segmentation, and captioning dataset. | false | 3 | false | kumapo/stair_captions_dataset_script | 2022-08-21T06:20:03.000Z | null | false | 707299a96c4770da2d5321042d677071c4919690 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/kumapo/stair_captions_dataset_script/resolve/main/README.md | ---
license: cc-by-4.0
---
|
jayantsingh72 | null | null | null | false | 1 | false | jayantsingh72/github-issues-datasets | 2022-08-21T07:27:38.000Z | null | false | 8891e8b96c8a02c6fa7624edebf19edf1d3a65f9 | [] | [] | https://huggingface.co/datasets/jayantsingh72/github-issues-datasets/resolve/main/README.md | |
tyqiangz | null | null | null | false | 687 | false | tyqiangz/multilingual-sentiments | 2022-08-25T09:55:35.000Z | null | false | 11a34e59b2b0c6f2523d660c83c9d222c402d5df | [] | [
"license:apache-2.0",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:ja",
"language:zh",
"language:id",
"language:ar",
"language:hi",
"language:it",
"language:ms",
"language:pt",
"multilinguality:monolingual",
"multilinguality:multilingual",
"size_categories:100K... | https://huggingface.co/datasets/tyqiangz/multilingual-sentiments/resolve/main/README.md | ---
license: apache-2.0
language:
- de
- en
- es
- fr
- ja
- zh
- id
- ar
- hi
- it
- ms
- pt
multilinguality:
- monolingual
- multilingual
size_categories:
- 100K<n<1M
- 1M<n<10M
task_ids:
- text-classification
- sentiment-classification
- sentiment-analysis
task_categories:
- text-classification
- sentiment-analysis
---
# Multilingual Sentiments Dataset
A collection of multilingual sentiments datasets grouped into 3 classes -- positive, neutral, negative.
Most multilingual sentiment datasets are either 2-class (positive/negative), 5-class ratings of product reviews (e.g. the Amazon multilingual dataset), or multi-class emotion datasets. However, to an average person, positive, neutral and negative classes often suffice and are more straightforward to perceive and annotate. Moreover, a plain positive/negative classification is too naive; most of the text in the world is actually neutral in sentiment. Furthermore, most multilingual sentiment datasets don't include Asian languages (e.g. Malay, Indonesian) and are dominated by Western languages (e.g. English, German).
Git repo: https://github.com/tyqiangz/multilingual-sentiment-datasets
## Dataset Description
- **Webpage:** https://github.com/tyqiangz/multilingual-sentiment-datasets
|
ziwenyd | null | null | null | false | 1 | false | ziwenyd/avatar-functions | 2022-09-02T11:04:40.000Z | null | false | 753749c56fe313d51e37896ed12c4894e84dcf19 | [] | [
"license:mit"
] | https://huggingface.co/datasets/ziwenyd/avatar-functions/resolve/main/README.md | ---
license: mit
---
There is no difference between 'train' and 'test'; these splits exist only so that the CSV file can be detected by Hugging Face.
max_java_exp_len=1784
max_python_exp_len=1469 |
ayberk | null | null | null | false | 1 | false | ayberk/ayberksdatasett | 2022-08-22T14:20:55.000Z | null | false | b0684bafbe194d26fcd792a657af71636b227b76 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/ayberk/ayberksdatasett/resolve/main/README.md | ---
license: afl-3.0
---
|
fourteenBDr | null | null | null | false | 1 | false | fourteenBDr/toutiao | 2022-08-21T14:58:22.000Z | null | false | 5c0d3e28f40f5b5d1bb9449683385e6dce5c59c5 | [] | [
"license:mit"
] | https://huggingface.co/datasets/fourteenBDr/toutiao/resolve/main/README.md | ---
license: mit
---
# Chinese Text Classification Dataset
Data source:
the Toutiao (今日头条) news client
Data format:
```
6552431613437805063_!_102_!_news_entertainment_!_谢娜为李浩菲澄清网络谣言,之后她的两个行为给自己加分_!_佟丽娅,网络谣言,快乐大本营,李浩菲,谢娜,观众们
```
Each line is one record. Fields are separated by `_!_` and appear in the following order: news ID, category code (see below), category name (see below), news string (title only), and news keywords.
Category codes and names:
```
100 livelihood     story           news_story
101 culture        culture         news_culture
102 entertainment  entertainment   news_entertainment
103 sports         sports          news_sports
104 finance        finance         news_finance
106 real estate    real estate     news_house
107 automobile     automobile      news_car
108 education      education       news_edu
109 technology     technology      news_tech
110 military       military        news_military
112 travel         travel          news_travel
113 international  international   news_world
114 securities     stock           stock
115 agriculture    agriculture     news_agriculture
116 e-sports       gaming          news_game
```
Dataset size:
382,688 records in total, distributed across 15 categories.
Collection period:
May 2018
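As a minimal sketch, a record in the `_!_`-delimited format described above can be parsed with a simple split; the snippet below uses the example record from this card:

```python
# Split one record of the dataset on the literal "_!_" delimiter.
# The line below is the example record shown in this card.
line = ("6552431613437805063_!_102_!_news_entertainment_!_"
        "谢娜为李浩菲澄清网络谣言,之后她的两个行为给自己加分_!_"
        "佟丽娅,网络谣言,快乐大本营,李浩菲,谢娜,观众们")

news_id, code, name, title, keywords = line.split("_!_")
print(news_id, code, name)        # ID, category code, category name
print(keywords.split(","))        # keywords are comma-separated
```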
|
pootow | null | null | null | false | 1 | false | pootow/suo-xie-zhai-yao | 2022-08-21T15:22:45.000Z | null | false | a3d1589adadeceb9d89bb2eb0d552859167fa0e4 | [] | [
"license:gpl"
] | https://huggingface.co/datasets/pootow/suo-xie-zhai-yao/resolve/main/README.md | ---
license: gpl
---
|
yhavinga | null | @article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
} | Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article. | false | 11 | false | yhavinga/xsum_dutch | 2022-08-21T20:50:08.000Z | xsum_dutch | false | 89ffbee82a31a0a741d56de24a55918ce0d6d2ea | [] | [
"language:nl",
"task_categories:summarization",
"task_ids:news-articles-summarization"
] | https://huggingface.co/datasets/yhavinga/xsum_dutch/resolve/main/README.md | ---
pretty_name: Extreme Summarization (XSum) in Dutch
language:
- nl
paperswithcode_id: xsum_dutch
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
# Dataset Card for "xsum_dutch" 🇳🇱🇧🇪 Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
The Xsum Dutch 🇳🇱🇧🇪 Dataset is an English-language dataset translated to Dutch.
*This dataset currently (Aug '22) has a single config, which is
config `default` of [xsum](https://huggingface.co/datasets/xsum) translated to Dutch
with [yhavinga/t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi).*
- **Homepage:** [https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 245.38 MB
- **Size of the generated dataset:** 507.60 MB
- **Total amount of disk used:** 752.98 MB
### Dataset Summary
Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 245.38 MB
- **Size of the generated dataset:** 507.60 MB
- **Total amount of disk used:** 752.98 MB
An example of 'validation' looks as follows.
```
{
"document": "some-body",
"id": "29750031",
"summary": "some-sentence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045| 11332|11334|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding the English version of this dataset.
The dataset was translated on Cloud TPU compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
|
clips | null | null | 20Q | false | 1 | false | clips/20Q | 2022-08-21T20:54:06.000Z | null | false | 00d84f741dda99d94db780c90ebb5f980050381d | [] | [
"language:en",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"tags:20Q",
"tags:Twenty Questions",
"tags:20 Questions",
"task_categories:question-answering"
] | https://huggingface.co/datasets/clips/20Q/resolve/main/README.md | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: 20Q - World Knowledge Benchmark
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- 20Q
- Twenty Questions
- 20 Questions
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for 20Q
|
neuralspace | null | null | null | false | 1 | false | neuralspace/NSME-COM | 2022-09-13T16:16:28.000Z | acronym-identification | false | 9c3d1ef39f048685295f552ba2b0e3bdff3c14bf | [] | [
"annotations_creators:other",
"language_creators:other",
"language:en",
"expert-generated license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:text2text-... | https://huggingface.co/datasets/neuralspace/NSME-COM/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
- text2text-generation
- other
- translation
- conversational
task_ids:
- extractive-qa
- closed-domain-qa
- utterance-retrieval
- document-retrieval
- open-book-qa
- closed-book-qa
paperswithcode_id: acronym-identification
pretty_name: Massive E-commerce Dataset for Retail and Insurance domain.
train-eval-index:
- config: nsds
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
sentence: text
label: target
metrics:
- type: nsme-com
name: NSME-COM
config:
nsds
tags:
- chatbots
- e-commerce
- retail
- insurance
- consumer
- consumer goods
configs:
- nsds
---
# Dataset Card for NSME-COM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
### Dataset Description
- **Homepage**: [NeuralSpace Homepage](https://huggingface.co/neuralspace)
- **Repository:** [NSME-COM Dataset](https://huggingface.co/datasets/neuralspace/NSME-COM)
- **Point of Contact:** [Ankur Saxena](mailto:ankursaxena@neuralspace.ai)
- **Point of Contact:** [Ayushman Dash](mailto:ayushman@neuralspace.ai)
- **Size of downloaded dataset files:** 10.86 KB
### Dataset Summary
In the digital age, e-commerce has become a vital component of business strategy and development. NLP can create substantial value in this industry by streamlining and enhancing the customer experience.
One of the most popular NLP use cases is the chatbot. With a chatbot you can automate customer engagement, saving time and other resources, while offering website visitors an enhanced, simplified experience and personalized recommendations that can increase sales.
The NSME-COM dataset (NeuralSpace Massive E-Comm) is a manually curated dataset by data engineers at [NeuralSpace](https://www.neuralspace.ai/) for the insurance and retail domain. The dataset contains intents (the action users want to execute) and examples (anything that a user sends to the chatbot) that can be used to build a chatbot. The files in this dataset are available in JSON format.
### Supported Tasks
#### nsme-com
### Languages
The language data in NSME-COM is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 10.86 KB
An example of 'test' looks as follows.
```
{
    "text": "is it good to add roadside assistance?",
    "intent": "Add",
    "type": "Test"
}
```
An example of 'train' looks as follows.
```
{
    "text": "how can I add my spouse as a nominee?",
    "intent": "Add",
    "type": "Train"
}
```
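Since each record carries its split in the `type` field, partitioning the raw JSON into train and test sets is a one-pass filter. A minimal plain-Python sketch (the records below are illustrative, not taken from the actual files):

```python
# Partition NSME-COM-style records into train/test sets using the
# "type" field shown in the examples above.
records = [
    {"text": "how can I add my spouse as a nominee?", "intent": "Add", "type": "Train"},
    {"text": "is it good to add roadside assistance?", "intent": "Add", "type": "Test"},
]

splits = {"Train": [], "Test": []}
for record in records:
    splits[record["type"]].append(record)

print(len(splits["Train"]), len(splits["Test"]))  # 1 1
```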
### Data Fields
The data fields are the same among all splits.
#### nsme-com
- `text`: a `string` feature.
- `intent`: a `string` feature.
- `type`: a classification label, with possible values including `train` or `test`.
### Data Splits
#### nsme-com
| |train|test|
|----|----:|---:|
|nsme-com| 1725| 406|
### Contributions
Ankur Saxena (ankursaxena@neuralspace.ai) |
merkalo-ziri | null | null | null | false | 1 | false | merkalo-ziri/qa_main | 2022-08-24T08:54:01.000Z | null | false | dd1c4533dbd97987d313319b71fbf747478db511 | [] | [
"annotations_creators:found",
"language:rus",
"language_creators:found",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:question-answering"
] | https://huggingface.co/datasets/merkalo-ziri/qa_main/resolve/main/README.md | ---
annotations_creators:
- found
language:
- rus
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: qa_main
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
ssharma2020 | null | null | null | false | 5 | false | ssharma2020/Plant-Seedlings-Dataset | 2022-08-22T07:32:11.000Z | null | false | a39a46ee729d724ac67b4a66baab0e6e85a92484 | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/ssharma2020/Plant-Seedlings-Dataset/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
|
masakhane | null | @inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
} | MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are:
- Amharic
- Bambara
- Ghomala
- Ewe
- Fon
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Nigerian-Pidgin
- Chichewa
- Shona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
Train/validation/test sets are available for 16 languages; for amh, kin, nya, sna, and xho, only validation/test sets are available.
For more details see https://aclanthology.org/2022.naacl-main.223/ | false | 4 | false | masakhane/mafand | 2022-08-23T11:51:31.000Z | null | false | 7028115028b104388af7ec2eb7b7888fc736a106 | [] | [
"annotations_creators:expert-generated",
"language:en",
"language:fr",
"language:am",
"language:bm",
"language:bbj",
"language:ee",
"language:fon",
"language:ha",
"language:ig",
"language:lg",
"language:mos",
"language:ny",
"language:pcm",
"language:rw",
"language:sn",
"language:sw",... | https://huggingface.co/datasets/masakhane/mafand/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
- fr
- am
- bm
- bbj
- ee
- fon
- ha
- ig
- lg
- mos
- ny
- pcm
- rw
- sn
- sw
- tn
- tw
- wo
- xh
- yo
- zu
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- translation
- multilingual
pretty_name: mafand
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- news
- mafand
- masakhane
task_categories:
- translation
task_ids: []
---
# Dataset Card for MAFAND
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/masakhane-io/lafand-mt
- **Repository:** https://github.com/masakhane-io/lafand-mt
- **Paper:** https://aclanthology.org/2022.naacl-main.223/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [David Adelani](https://dadelani.github.io/)
### Dataset Summary
MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
The languages covered are:
- Amharic
- Bambara
- Ghomala
- Ewe
- Fon
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Nigerian-Pidgin
- Chichewa
- Shona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
## Dataset Structure
### Data Instances
```
>>> from datasets import load_dataset
>>> data = load_dataset('masakhane/mafand', 'en-yor')
{"translation": {"src": "President Buhari will determine when to lift lockdown – Minister", "tgt": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà"}}
{"translation": {"en": "President Buhari will determine when to lift lockdown – Minister", "yo": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà"}}
```
### Data Fields
- "translation": name of the task
- "src" : source language e.g en
- "tgt": target language e.g yo
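As a quick illustration of the fields above, each example nests the source/target pair under the `"translation"` key. A minimal sketch, using the `en-yor` record shown in the Data Instances section:

```python
# Unpack one MAFAND record into (source, target); keys follow the
# "src"/"tgt" convention listed in Data Fields.
record = {
    "translation": {
        "src": "President Buhari will determine when to lift lockdown – Minister",
        "tgt": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà",
    }
}

src, tgt = record["translation"]["src"], record["translation"]["tgt"]
print(src[:16])  # President Buhari
```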
### Data Splits
Train/dev/test split
language| Train| Dev |Test
-|-|-|-
amh |-|899|1037
bam |3302|1484|1600
bbj |2232|1133|1430
ewe |2026|1414|1563
fon |2637|1227|1579
hau |5865|1300|1500
ibo |6998|1500|1500
kin |-|460|1006
lug |4075|1500|1500
luo |4262|1500|1500
mos |2287|1478|1574
nya |-|483|1004
pcm |4790|1484|1574
sna |-|556|1005
swa |30782|1791|1835
tsn |2100|1340|1835
twi |3337|1284|1500
wol |3360|1506|1500
xho |-|486|1002
yor |6644|1544|1558
zul |3500|1239|998
## Dataset Creation
### Curation Rationale
MAFAND was created from the news domain, translated from English or French to an African language
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
- [Masakhane](https://github.com/masakhane-io/lafand-mt)
- [Igbo](https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_en_mt)
- [Swahili](https://opus.nlpl.eu/GlobalVoices.php)
- [Hausa](https://www.statmt.org/wmt21/translation-task.html)
- [Yoruba](https://github.com/uds-lsv/menyo-20k_MT)
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
Masakhane members
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
```
@inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}
``` |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-jnlpba-c103d433-1295449602 | 2022-08-22T10:58:29.000Z | null | false | 0e94741b4d3fedcef54dbc40fd4a5d0e2cc2ca4a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:jnlpba"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-jnlpba-c103d433-1295449602/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jnlpba
eval_info:
task: entity_extraction
model: siddharthtumre/biobert-ner
metrics: []
dataset_name: jnlpba
dataset_config: jnlpba
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: siddharthtumre/biobert-ner
* Dataset: jnlpba
* Config: jnlpba
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@siddharthtumre](https://huggingface.co/siddharthtumre) for evaluating this model. |
victor | null | null | null | false | 1 | false | victor/autotrain-data-image-classification-test-18 | 2022-08-22T12:11:50.000Z | null | false | ffd6fca23eefc71c119a52e3f7228a5576a9140a | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/victor/autotrain-data-image-classification-test-18/resolve/main/README.md | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: image-classification-test-18
## Dataset Description
This dataset has been automatically processed by AutoTrain for project image-classification-test-18.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<224x224 RGB PIL image>",
"target": 2
},
{
"image": "<224x224 RGB PIL image>",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=3, names=['ADONIS', 'AFRICAN GIANT SWALLOWTAIL', 'AMERICAN SNOOT'], id=None)"
}
```
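The integer `target` above is an index into the `ClassLabel` names list. A plain-Python sketch of decoding it back to a class name (using only the names copied from the field spec, without the `datasets` library):

```python
# Decode the integer "target" back to a class name. The names list is
# copied from the ClassLabel spec above; index 2 matches the samples shown.
names = ['ADONIS', 'AFRICAN GIANT SWALLOWTAIL', 'AMERICAN SNOOT']

def target_to_label(target: int) -> str:
    return names[target]

print(target_to_label(2))  # AMERICAN SNOOT
```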
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 269 |
| valid | 69 |
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-09cba8dc-757f-4f7a-8194-174e4439eb99-91 | 2022-08-22T12:28:26.000Z | null | false | 52ac109bd3961cbdca195d1a63d5623df925ae19 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-09cba8dc-757f-4f7a-8194-174e4439eb99-91/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-80c2643d-2334-4a14-9912-449e234f13a2-102 | 2022-08-22T12:34:51.000Z | null | false | 583895c958b37d26d265c28fe134c4bfd5320361 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-80c2643d-2334-4a14-9912-449e234f13a2-102/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: ['matthews_correlation']
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-66155224-f2a7-4c5e-94b3-a3683a04175e-2314 | 2022-08-22T13:04:47.000Z | null | false | 641d2fd9bacfcce2fdfa8c9c586e74fe843d7bef | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/squad-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-66155224-f2a7-4c5e-94b3-a3683a04175e-2314/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/squad-sample
eval_info:
task: extractive_question_answering
model: autoevaluate/distilbert-base-cased-distilled-squad
metrics: []
dataset_name: autoevaluate/squad-sample
dataset_config: autoevaluate--squad-sample
dataset_split: test
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/distilbert-base-cased-distilled-squad
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-2dc683ab-6695-42ab-9eff-11dad91952e1-2415 | 2022-08-22T13:07:28.000Z | null | false | d86659a36094de76171db53a8dda513ffa5a838d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/xsum-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-2dc683ab-6695-42ab-9eff-11dad91952e1-2415/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/xsum-sample
eval_info:
task: summarization
model: autoevaluate/summarization
metrics: []
dataset_name: autoevaluate/xsum-sample
dataset_config: autoevaluate--xsum-sample
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: autoevaluate/summarization
* Dataset: autoevaluate/xsum-sample
* Config: autoevaluate--xsum-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-8a305641-aedc-4d3a-9609-7f9f9c99c489-2616 | 2022-08-22T13:25:10.000Z | null | false | d2fa13f1968351b546a9a5a89610817d868e1120 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/wmt16-ro-en-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-8a305641-aedc-4d3a-9609-7f9f9c99c489-2616/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/wmt16-ro-en-sample
eval_info:
task: translation
model: autoevaluate/translation
metrics: []
dataset_name: autoevaluate/wmt16-ro-en-sample
dataset_config: autoevaluate--wmt16-ro-en-sample
dataset_split: test
col_mapping:
source: translation.ro
target: translation.en
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: autoevaluate/translation
* Dataset: autoevaluate/wmt16-ro-en-sample
* Config: autoevaluate--wmt16-ro-en-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-project-0c5b3473-b8bd-4084-ad01-6ee894dddf29-2917 | 2022-08-22T13:35:37.000Z | null | false | a51d02dac28333f43f90d7d07753ed6c3c47ede0 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-0c5b3473-b8bd-4084-ad01-6ee894dddf29-2917/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
pinecone | null | @InProceedings{huggingface:dataset,
title = {MovieLens Ratings},
author={Ismail Ashraq, James Briggs},
year={2022}
} | This dataset streams recent user ratings from the MovieLens 25M dataset and adds poster URLs. | false | 20 | false | pinecone/movielens-recent-ratings | 2022-08-23T10:00:17.000Z | null | false | 9000ce7fabbce934fc7637c7cd4736bf87a616b2 | [] | [
"annotations_creators:machine-generated",
"language:en",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"tags:movielens",
"tags:recommendation",
"tags:collaborative filtering"
] | https://huggingface.co/datasets/pinecone/movielens-recent-ratings/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license: []
multilinguality:
- monolingual
pretty_name: MovieLens User Ratings
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- movielens
- recommendation
- collaborative filtering
task_categories: []
task_ids: []
---
# MovieLens User Ratings
This dataset contains ~1M user ratings for ~10k of the most recent movies in the MovieLens 25M dataset, rated by over 30k unique users. It is built by streaming the MovieLens 25M dataset, filtering for the recent movies, and collecting the corresponding user ratings; after a few joins and checks, the URLs of the respective movie posters are added as well.
The dataset is part of an example on [building a movie recommendation engine](https://www.pinecone.io/docs/examples/movie-recommender-system/) with vector search. |
gradio | null | null | null | false | 1 | false | gradio/transformers-stats-space-data | 2022-08-22T20:20:24.000Z | null | false | 99c0a674b67ae0789547e6475a2f62bad451b09c | [] | [
"license:mit"
] | https://huggingface.co/datasets/gradio/transformers-stats-space-data/resolve/main/README.md | ---
license: mit
---
|
Yomyom52 | null | null | null | false | 1 | false | Yomyom52/sb1 | 2022-08-23T11:01:22.000Z | null | false | 9742ee01a91a4f9aa3a779cde65ee80e55b95423 | [] | [] | https://huggingface.co/datasets/Yomyom52/sb1/resolve/main/README.md | |
mehdidn | null | null | null | false | 1 | false | mehdidn/ner | 2022-08-24T00:22:38.000Z | null | false | 69a856480564b5ef3e19e201f1ead5882ee3a3b0 | [] | [
"license:other"
] | https://huggingface.co/datasets/mehdidn/ner/resolve/main/README.md | ---
license: other
---
|
UKPLab | null | @article{stangier2022texprax,
title={TexPrax: A Messaging Application for Ethical, Real-time Data Collection and Annotation},
author={Stangier, Lorenz and Lee, Ji-Ung and Wang, Yuxi and M{\"u}ller, Marvin and Frick, Nicholas and Metternich, Joachim and Gurevych, Iryna},
journal={arXiv preprint arXiv:2208.07846},
year={2022}
} | This dataset was collected in the [TexPrax](https://texprax.de/) project and contains named entities annotated by three researchers as well as annotated sentences (problem/P, cause/C, solution/S, and other/O). | false | 5 | false | UKPLab/TexPrax | 2022-10-18T19:06:10.000Z | null | false | 73fd20b203b1f7d73c082f716b5af1576be75ce4 | [] | [
"arxiv:2208.07846",
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/UKPLab/TexPrax/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
# Dataset Card for TexPrax
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://texprax.de/**
- **Repository: https://github.com/UKPLab/TexPrax**
- **Paper: https://arxiv.org/abs/2208.07846**
- **Leaderboard: n/a**
- **Point of Contact: Ji-Ung Lee (http://www.ukp.tu-darmstadt.de/)**
### Dataset Summary
This dataset contains dialogues collected from German factory workers at the _Center for industrial productivity_ ([CiP](https://www.prozesslernfabrik.de/)). The dialogues mostly concern issues workers encounter during their daily work, such as machines breaking down, material missing, etc. The dialogues are further expert-annotated on a sentence level (problem, cause, solution, other) for sentence classification and on a token level for named entity recognition using a BIO tagging scheme. Note that the dataset was collected in three rounds, each around one year apart. We provide the data split only into train and test sets, where the test data was collected in the last round (July 2022). Additionally, the data from the first round is split into two subdomains, industry 4.0 (industrie) and machining (zerspanung). The splits were made according to the respective groups of people working at different assembly lines in the factory.
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Sentence classification
* Named entity recognition (will be updated soon with the new indexing)
* Dialog generation (so far not evaluated)
### Languages
German
## Dataset Structure
### Data Instances
On sentence level, each instance consists of the dialog-id, turn-id, sentence-id, the sentence (raw), the label, the domain, and the subsplit.
```
{"185";"562";993";"wie kriege ich die Dichtung raus?";"P";"n/a";"3"}
```
On token level, each instance consists of a unique identifier, a list of tokens containing the whole dialog, the list of labels (bio-tagged entities), and the subsplit.
```
{"178_0";"['Hi', 'wie', 'kriege', 'ich', 'die', 'Dichtung', 'raus', '?', 'in', 'der', 'Schublade', 'gibt', 'es', 'einen', 'Dichtungszieher']";"['O', 'O', 'O', 'O', 'O', 'B-PRE', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'O', 'O', 'B-PE']";"Batch 3"}
```
### Data Fields
Sentence level:
* dialog-id: unique identifier for the dialog
* turn-id: unique identifier for the turn
* sentence-id: unique identifier for the dialog
* sentence: the respective sentence
* label: the label (_P_ for Problem, _C_ for Cause, _S_ for solution, and _O_ for Other)
* domain: the subdomains where the data was collected from. Domains are industry, machining, or n/a (for batch 2 and batch 3).
* subsplit: the respective subsplit of the data (see below)
Token level:
* id: the identifier
* tokens: a list of tokens (i.e., the tokenized dialogue)
* entities: the named entity in a BIO scheme (_B-X_, _I-X_, or O).
* subsplit: the respective subsplit of the data (see below)
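As a small sketch of how the sentence-level fields can be used (the rows below are hypothetical examples mirroring the instances shown above, not an excerpt of the released files):

```python
# Map the single-letter sentence labels to their full names (see the "label" field above)
LABEL_NAMES = {"P": "Problem", "C": "Cause", "S": "Solution", "O": "Other"}

# Hypothetical rows mirroring the sentence-level fields described above
rows = [
    {"sentence": "wie kriege ich die Dichtung raus?", "label": "P"},
    {"sentence": "in der Schublade gibt es einen Dichtungszieher", "label": "S"},
]

# Expand the short labels and collect all problem statements
for row in rows:
    row["label_name"] = LABEL_NAMES[row["label"]]

problems = [r["sentence"] for r in rows if r["label"] == "P"]
print(problems)  # ['wie kriege ich die Dichtung raus?']
```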
### Data Splits
The dataset is split into train and test splits, but contains further subsplits (subsplit column). Note that the splits were collected at different times, with some turnover in the workforce. Hence, later data (especially the data from batch 2) contains more turns (due to an increased search for the cause), as more inexperienced workers who had newly joined were employed in the factory.
Train:
* Batch 1 industrie: data collected in October 2020 from workers in the industry 4.0 assembly line
* Batch 1 zerspanung: data collected in October 2020 from workers in the machining assembly line
* Batch 2: data collected in-between October 2021-June 2022 from all workers
Test:
* Batch 3: data collected in July 2022 together with the system usability study run
Sentence level statistics:
| Batch | Dialogues | Turns | Sentences |
|---|---|---|---|
| 1 | 81 | 246 | 553 |
| 2 | 97 | 309 | 432 |
| 3 | 24 | 36 | 42 |
| Overall | 202 | 591 | 1,027 |
Token level statistics:
[Needs to be added]
## Dataset Creation
### Curation Rationale
This dataset provides task-oriented dialogues that solve a very domain-specific problem.
### Source Data
#### Initial Data Collection and Normalization
The data was generated by workers at the [CiP](https://www.prozesslernfabrik.de/). The data was collected in three rounds (October 2020, October 2021-June 2022, July 2022). As the dialogues occurred during the workers' daily work, one distinctive property of the dataset is that all dialogues are very informal, containing colloquial particles such as 'ne', abbreviations such as 'vll', and filler words such as 'ah'. For a detailed description please see the [paper](https://arxiv.org/abs/2208.07846).
#### Who are the source language producers?
German factory workers working at the [CiP](https://www.prozesslernfabrik.de/)
### Annotations
#### Annotation process
**Token level.** Token level annotation was done by researchers who are responsible for supervising and teaching workers at the CiP. The data was first split into three parts, each annotated by one researcher. Next, each researcher cross-examined the other researchers' annotations. If there were disagreements, all three researchers discussed the final label.
**Sentence level.** Sentence level annotations were collected from the factory workers who also generated the dialogues. For details about the data collection, please see the [TexPrax demo paper](https://arxiv.org/abs/2208.07846).
#### Who are the annotators?
**Token level.** Researchers working at the CiP.
**Sentence level.** The factory workers themselves.
### Personal and Sensitive Information
This dataset is fully anonymized. All occurrences of names have been manually checked during annotation and replaced with a random token.
## Considerations for Using the Data
### Social Impact of Dataset
Informal language is especially common in short messages, yet it is seldom considered in existing NLP datasets. This dataset could serve as an interesting evaluation task for transferring language models to low-resource but highly specific domains. Moreover, we note that despite all the abbreviations, typos, and local dialects used in the messages, all workers were able to understand the questions as well as the replies. This is a standard that future NLP models should be able to uphold.
### Discussion of Biases
The dialogues are very much on a professional level. The workers were informed (and gave their consent) in advance that their messages are being recorded and processed, which may have influenced them to hold only professional conversations, hence, all dialogues concern inanimate objects (i.e., machines).
### Other Known Limitations
[More Information Needed]
## Additional Information
You can download the data via:
```
from datasets import load_dataset
dataset = load_dataset("UKPLab/TexPrax") # default config is sentence classification
dataset = load_dataset("UKPLab/TexPrax", "ner") # use the ner tag for named entity recognition
```
Please find more information about the code and how the data was collected on [GitHub](https://github.com/UKPLab/TexPrax).
### Dataset Curators
Curation is managed by our [data manager](https://www.informatik.tu-darmstadt.de/ukp/research_ukp/ukp_research_data_and_software/ukp_data_and_software.en.jsp) at UKP.
### Licensing Information
[CC-by-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
Please cite this data using:
```
@article{stangier2022texprax,
title={TexPrax: A Messaging Application for Ethical, Real-time Data Collection and Annotation},
author={Stangier, Lorenz and Lee, Ji-Ung and Wang, Yuxi and M{\"u}ller, Marvin and Frick, Nicholas and Metternich, Joachim and Gurevych, Iryna},
journal={arXiv preprint arXiv:2208.07846},
year={2022}
}
```
### Contributions
Thanks to [@Wuhn](https://github.com/Wuhn) for adding this dataset.
## Tags
annotations_creators:
- expert-generated
language:
- de
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: TexPrax-Conversations
size_categories:
- n<1K
- 1K<n<10K
source_datasets:
- original
tags:
- dialog
- expert to expert conversations
- task-oriented
task_categories:
- token-classification
- text-classification
task_ids:
- named-entity-recognition
- multi-class-classification |
sil-ai | null | \
@InProceedings{huggingface:audio-keyword-spotting,
title = {audio-keyword-spotting},
author={Joshua Nemecek
},
year={2022}
} | null | false | 1 | false | sil-ai/audio-keyword-spotting | 2022-10-25T11:04:31.000Z | null | false | 5ed093783b2027664fb67bf53917aee0e79fb625 | [] | [
"annotations_creators:machine-generated",
"language_creators:other",
"language:eng",
"language:en",
"language:spa",
"language:es",
"language:ind",
"language:id",
"license:cc-by-4.0",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"source_datasets:MLCommons/ml_spoken_w... | https://huggingface.co/datasets/sil-ai/audio-keyword-spotting/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- eng
- en
- spa
- es
- ind
- id
license: cc-by-4.0
multilinguality:
- multilingual
source_datasets:
- extended|common_voice
- MLCommons/ml_spoken_words
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: Audio Keyword Spotting
tags:
- other-keyword-spotting
---
# Dataset Card for Audio Keyword Spotting
## Table of Contents
- [Table of Contents](#table-of-contents)
## Dataset Description
- **Homepage:** https://sil.ai.org
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [MLCommons/ml_spoken_words](https://huggingface.co/datasets/MLCommons/ml_spoken_words), [trabina GitHub](https://github.com/wswu/trabina)

## Dataset Summary
The initial version of this dataset is a subset of [MLCommons/ml_spoken_words](https://huggingface.co/datasets/MLCommons/ml_spoken_words), which is derived from Common Voice, designed for easier loading. Specifically, the subset consists of `ml_spoken_words` files filtered by the names and placenames transliterated in Bible translations, as found in [trabina](https://github.com/wswu/trabina). For our initial experiment, we have focused only on English, Spanish, and Indonesian, three languages whose name spellings are frequently used in other translations. We anticipate growing this dataset in the future to include additional keywords and other languages as the experiment progresses.
### Data Fields
* file: string, the relative audio path inside the archive
* is_valid: whether the sample is valid
* language: the language of the instance
* speaker_id: unique id of the speaker; can be "NA" if the instance is invalid
* gender: speaker gender; can be one of `["MALE", "FEMALE", "OTHER", "NAN"]`
* keyword: the word spoken in the current sample
* audio: a dictionary containing the relative path to the audio file,
the decoded audio array, and the sampling rate.
Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically
decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling
a large number of audio files might take a significant amount of time.
Thus, it is important to query the sample index before the "audio" column,
i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
### Data Splits
The data for each language is split into train / validation / test parts.
## Supported Tasks
Keyword spotting and spoken term search
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online.
You agree to not attempt to determine the identity of speakers.
### Licensing Information
The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic
research and commercial applications in keyword spotting and spoken term search.
|
and111 | null | null | null | false | 1 | false | and111/bert_pretrain_phase1 | 2022-08-23T17:14:31.000Z | null | false | 8359df330efa22f5f856aba4b0c307ecdaf691e3 | [] | [] | https://huggingface.co/datasets/and111/bert_pretrain_phase1/resolve/main/README.md | ### Dataset Summary
Input data for the **first** phase of BERT pretraining (sequence length 128). All text is tokenized with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) tokenizer.
Data is obtained by concatenating and shuffling the [wikipedia](https://huggingface.co/datasets/wikipedia) (split: `20220301.en`) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) datasets and running the [reference BERT data preprocessor](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) without masking or input duplication (`dupe_factor = 1`). Documents are split into sentences with the [NLTK](https://www.nltk.org/) sentence tokenizer (`nltk.tokenize.sent_tokenize`).
See the dataset for the **second** phase of pretraining: [bert_pretrain_phase2](https://huggingface.co/datasets/and111/bert_pretrain_phase2). |
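A minimal sketch of the document pipeline described above, using a naive regex splitter as a stand-in for NLTK's `sent_tokenize` and toy documents in place of the real corpora (both are illustrative assumptions, not the actual preprocessing code):

```python
import random
import re

def naive_sent_tokenize(text):
    # Stand-in for nltk.tokenize.sent_tokenize, which the actual pipeline uses
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

# Toy stand-ins for the wikipedia and bookcorpusopen documents
wiki_docs = ["First article. It has two sentences.", "Second article."]
book_docs = ["A book excerpt! With a question?"]

docs = wiki_docs + book_docs  # concatenate the two corpora
random.seed(0)
random.shuffle(docs)          # shuffle documents before preprocessing

# Split every document into sentences, as done before the reference BERT
# preprocessor packs them into fixed-length sequences
sentences = [naive_sent_tokenize(d) for d in docs]
```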
RCC-MSU | null | @inproceedings{mozharova-loukachevitch-2016-two-stage-russian-ner,
author={Mozharova, Valerie and Loukachevitch, Natalia},
booktitle={2016 International FRUCT Conference on Intelligence, Social Media and Web (ISMW FRUCT)},
title={Two-stage approach in Russian named entity recognition},
year={2016},
pages={1-6},
doi={10.1109/FRUCT.2016.7584769}} | Collection3 is a Russian dataset for named entity recognition annotated with LOC (location), PER (person), and ORG (organization) tags.
Dataset is based on collection Persons-1000 originally containing 1000 news documents labeled only with names of persons.
Additional labels were added by Valerie Mozharova and Natalia Loukachevitch.
Conversion to the IOB2 format and splitting into train, validation and test sets was done by DeepPavlov team.
For more details see https://ieeexplore.ieee.org/document/7584769 and http://labinform.ru/pub/named_entities/index.htm | false | 93 | false | RCC-MSU/collection3 | 2022-10-12T09:16:06.000Z | null | false | 1e482baf20cc56634335c1c519a852672100f870 | [] | [
"annotations_creators:other",
"language:ru",
"language_creators:found",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/RCC-MSU/collection3/resolve/main/README.md | ---
annotations_creators:
- other
language:
- ru
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: Collection3
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
splits:
- name: test
num_bytes: 935298
num_examples: 1922
- name: train
num_bytes: 4380588
num_examples: 9301
- name: validation
num_bytes: 1020711
num_examples: 2153
download_size: 878777
dataset_size: 6336597
---
# Dataset Card for Collection3
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Collection3 homepage](http://labinform.ru/pub/named_entities/index.htm)
- **Repository:** [Needs More Information]
- **Paper:** [Two-stage approach in Russian named entity recognition](https://ieeexplore.ieee.org/document/7584769)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Collection3 is a Russian dataset for named entity recognition annotated with LOC (location), PER (person), and ORG (organization) tags. The dataset is based on the [Persons-1000](http://ai-center.botik.ru/Airec/index.php/ru/collections/28-persons-1000) collection, which originally contained 1000 news documents labeled only with the names of persons.
Additional labels were obtained using guidelines similar to MUC-7 with web-based tool [Brat](http://brat.nlplab.org/) for collaborative text annotation.
Currently the dataset contains 26K annotated named entities (11K Persons, 7K Locations and 8K Organizations).
Conversion to the IOB2 format and splitting into train, validation and test sets was done by [DeepPavlov team](http://files.deeppavlov.ai/deeppavlov_data/collection3_v2.tar.gz).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Russian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"id": "851",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 1, 2, 0, 0, 0],
"tokens": ['Главный', 'архитектор', 'программного', 'обеспечения', '(', 'ПО', ')', 'американского', 'высокотехнологичного', 'гиганта', 'Microsoft', 'Рэй', 'Оззи', 'покидает', 'компанию', '.']
}
```
### Data Fields
- id: a string feature.
- tokens: a list of string features.
- ner_tags: a list of classification labels (int). Full tagset with indices:
```
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
```
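As a sketch (the helper below is illustrative and not part of any dataset loader), the integer tags can be mapped back to labels and grouped into entity spans, using the 'train' instance shown earlier:

```python
# Map integer tags back to labels and group BIO-tagged tokens into entity spans
ID2LABEL = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG", 4: "I-ORG", 5: "B-LOC", 6: "I-LOC"}

def decode_bio(tokens, tag_ids):
    """Return (entity_type, entity_text) spans from BIO-tagged tokens."""
    spans = []
    current_type, current_tokens = None, []
    for token, tag_id in zip(tokens, tag_ids):
        label = ID2LABEL[tag_id]
        if label.startswith("B-"):
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            current_tokens.append(token)
        else:  # an "O" tag, or an I- tag that does not continue the open span
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

# The 'train' instance shown above
tokens = ['Главный', 'архитектор', 'программного', 'обеспечения', '(', 'ПО', ')',
          'американского', 'высокотехнологичного', 'гиганта', 'Microsoft',
          'Рэй', 'Оззи', 'покидает', 'компанию', '.']
ner_tags = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 1, 2, 0, 0, 0]
print(decode_bio(tokens, ner_tags))  # [('ORG', 'Microsoft'), ('PER', 'Рэй Оззи')]
```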
### Data Splits
|name|train|validation|test|
|---------|----:|---------:|---:|
|Collection3|9301|2153|1922|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{mozharova-loukachevitch-2016-two-stage-russian-ner,
author={Mozharova, Valerie and Loukachevitch, Natalia},
booktitle={2016 International FRUCT Conference on Intelligence, Social Media and Web (ISMW FRUCT)},
title={Two-stage approach in Russian named entity recognition},
year={2016},
pages={1-6},
doi={10.1109/FRUCT.2016.7584769}}
``` |
and111 | null | null | null | false | 593 | false | and111/bert_pretrain_phase2 | 2022-08-24T14:01:12.000Z | null | false | 1a5c9e376174dae432c38636a90aafb600204ecd | [] | [] | https://huggingface.co/datasets/and111/bert_pretrain_phase2/resolve/main/README.md | ### Dataset Summary
Input data for the **second** phase of BERT pretraining (sequence length 512). All text is tokenized with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) tokenizer.
Data is obtained by concatenating and shuffling the [wikipedia](https://huggingface.co/datasets/wikipedia) (split: `20220301.en`) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) datasets and running the [reference BERT data preprocessor](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) without masking or input duplication (`dupe_factor = 1`). Documents are split into sentences with the [NLTK](https://www.nltk.org/) sentence tokenizer (`nltk.tokenize.sent_tokenize`).
See the dataset for the **first** phase of pretraining: [bert_pretrain_phase1](https://huggingface.co/datasets/and111/bert_pretrain_phase1). |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-project-squad-3b1fb479-1302649847 | 2022-08-23T14:38:28.000Z | null | false | e97515e0046d6edb35a7e3e236e7f898bf0b3222 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad-3b1fb479-1302649847/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: Graphcore/deberta-base-squad
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Graphcore/deberta-base-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-sepidmnorozy__Urdu_sentiment-559fc5f8-1302749848 | 2022-08-23T14:58:02.000Z | null | false | b1aa7d48bd28bf611cb1e24ebdacd4943790a24f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:sepidmnorozy/Urdu_sentiment"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-sepidmnorozy__Urdu_sentiment-559fc5f8-1302749848/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- sepidmnorozy/Urdu_sentiment
eval_info:
task: summarization
model: yuvraj/summarizer-cnndm
metrics: ['accuracy']
dataset_name: sepidmnorozy/Urdu_sentiment
dataset_config: sepidmnorozy--Urdu_sentiment
dataset_split: train
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yuvraj/summarizer-cnndm
* Dataset: sepidmnorozy/Urdu_sentiment
* Config: sepidmnorozy--Urdu_sentiment
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mwz](https://huggingface.co/mwz) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-squad_v2-7b0e814c-1303349869 | 2022-08-23T16:38:54.000Z | null | false | 8024ae5e1f3ba083cbfca1e9b4499f4b38ff7b11 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad_v2-7b0e814c-1303349869/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-superqa2
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-adversarial_qa-92a1abad-1303449870 | 2022-08-23T16:39:03.000Z | null | false | ddd3894523954e4a2487931093cccd4a6ea182f4 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-adversarial_qa-92a1abad-1303449870/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-superqa2
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: test
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-adversarial_qa-0243fffc-1303549871 | 2022-08-23T16:50:06.000Z | null | false | 4bb6b28f832a1118230451a2e98dfaab9409235f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-adversarial_qa-0243fffc-1303549871/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-superqa2
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-squad-1eddc82e-1303649872 | 2022-08-23T16:56:08.000Z | null | false | 86181b5c13aff9667b5513999aaf83d2747e49f8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad-1eddc82e-1303649872/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-superqa2
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
cakiki | null | null | null | false | 2 | false | cakiki/abc | 2022-08-23T21:08:54.000Z | null | false | 8f5518a06e4ace72e5a8e25399e30cd2c21dae81 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/cakiki/abc/resolve/main/README.md | ---
license: cc-by-4.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-squad_v2-4a3c5c8d-1305249893 | 2022-08-23T21:07:54.000Z | null | false | 0ae49250e4884b552f29252e529d01c77029581f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad_v2-4a3c5c8d-1305249893/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-gc1
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-gc1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-squad_v2-4a3c5c8d-1305249894 | 2022-08-23T21:08:47.000Z | null | false | 9d29ec3eb036547043efdbef5aeafa474f678f0e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad_v2-4a3c5c8d-1305249894/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: nbroad/deb-base-gc2
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/deb-base-gc2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-adversarial_qa-7ab9b963-1305349895 | 2022-08-23T21:06:32.000Z | null | false | 32c7f6b18f236793540e2161d62b9a722e0bf5d5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-adversarial_qa-7ab9b963-1305349895/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-gc1
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-gc1
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-adversarial_qa-7ab9b963-1305349896 | 2022-08-23T21:06:52.000Z | null | false | c0672e0447fc2813a905c6d33718bea35650baa2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-adversarial_qa-7ab9b963-1305349896/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: nbroad/deb-base-gc2
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/deb-base-gc2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-quoref-bbfe943f-1305449897 | 2022-08-23T21:08:05.000Z | null | false | 2c955c42d1e82b3e62b2f42b8639aa1d17be323a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:quoref"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-quoref-bbfe943f-1305449897/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- quoref
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-gc1
metrics: []
dataset_name: quoref
dataset_config: default
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-gc1
* Dataset: quoref
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-project-quoref-bbfe943f-1305449898 | 2022-08-23T21:08:26.000Z | null | false | adbb98bfc272bb274f22f4c978a4bce3607b3597 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:quoref"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-quoref-bbfe943f-1305449898/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- quoref
eval_info:
task: extractive_question_answering
model: nbroad/deb-base-gc2
metrics: []
dataset_name: quoref
dataset_config: default
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/deb-base-gc2
* Dataset: quoref
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-squad_v2-1e2c143e-1305549899 | 2022-08-23T21:20:07.000Z | null | false | 75eff2931ed9963c2996d7744a83db02453b4e54 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad_v2-1e2c143e-1305549899/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-superqa1
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-adversarial_qa-b21f20c3-1305649900 | 2022-08-23T21:18:46.000Z | null | false | c66053954b69c9ab189d13ae97c0106e6d162ebe | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-adversarial_qa-b21f20c3-1305649900/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-superqa1
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa1
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-project-quoref-9c01ff03-1305849901 | 2022-08-23T21:42:05.000Z | null | false | 335a5dd4efdc8cc6250a3c6f4a72c336f039f91e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:quoref"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-quoref-9c01ff03-1305849901/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- quoref
eval_info:
task: extractive_question_answering
model: nbroad/rob-base-superqa1
metrics: []
dataset_name: quoref
dataset_config: default
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa1
* Dataset: quoref
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
djaym7 | null | @inproceedings{dai2022dialoginpainting,
title={Dialog Inpainting: Turning Documents to Dialogs},
author={Dai, Zhuyun and Chaganty, Arun Tejasvi and Zhao, Vincent and Amini, Aida and Green, Mike and Rashid, Qazi and Guu, Kelvin},
booktitle={International Conference on Machine Learning (ICML)},
year={2022},
organization={PMLR}
} | WikiDialog is a large dataset of synthetically generated information-seeking
conversations. Each conversation in the dataset contains two speakers grounded
in a passage from English Wikipedia: one speaker’s utterances consist of exact
sentences from the passage; the other speaker is generated by a large language
model. | false | 1 | false | djaym7/wiki_dialog_mlm | 2022-08-23T22:23:32.000Z | null | false | 59b17e6ed36b643b608da2d1e2fe8827278c2459 | [] | [
"arxiv:2205.09073",
"license:apache-2.0"
] | https://huggingface.co/datasets/djaym7/wiki_dialog_mlm/resolve/main/README.md | ---
license: apache-2.0
---
Wiki_dialog dataset with dialog inpainting (MLM) applied to the dialogs; see Section 2.1 of the paper: https://arxiv.org/abs/2205.09073
Source dataset: https://huggingface.co/datasets/djaym7/wiki_dialog
Access using:
```python
import datasets

dataset = datasets.load_dataset('djaym7/wiki_dialog_mlm', 'OQ', beam_runner='DirectRunner')
```
Sidd2899 | null | null | null | false | 1 | false | Sidd2899/MyspeechASR | 2022-09-01T12:36:24.000Z | librispeech-1 | false | 07d3d059cbdce2156e917dfbc63d43f068f9efdb | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:automatic-speech-recognition",
"task_categor... | https://huggingface.co/datasets/Sidd2899/MyspeechASR/resolve/main/README.md | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com)
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/siddhant/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/siddhant/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
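The indexing advice above (`dataset[0]["audio"]` over `dataset["audio"][0]`) can be illustrated with a toy stand-in for a decoded-on-access audio column. This is a hypothetical sketch, not the actual `datasets` internals: a row access decodes a single file, while a column access decodes every file before you index into the result.

```python
class LazyAudioColumn:
    """Hypothetical stand-in for an audio column that decodes files on access."""

    def __init__(self, paths):
        self.paths = paths
        self.decode_calls = 0  # count how many files get decoded

    def decode(self, path):
        self.decode_calls += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16000}

    def row(self, i):
        # dataset[i]["audio"]: decode only the one requested file
        return self.decode(self.paths[i])

    def column(self):
        # dataset["audio"]: decode *all* files, then the caller indexes
        return [self.decode(p) for p in self.paths]

col = LazyAudioColumn([f"clip_{i}.flac" for i in range(1000)])
col.row(0)
assert col.decode_calls == 1      # row-first: one decode
col.decode_calls = 0
col.column()[0]
assert col.decode_calls == 1000   # column-first: all 1000 files decoded
```

The asymmetry is why the card recommends querying the sample index before the `"audio"` column.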
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
TeDriCS | null | @misc{,
title={ },
author={},
year={2022}
} | null | false | 29 | false | TeDriCS/tedrics-data | 2022-09-07T14:57:46.000Z | null | false | c5d0fec0471ea24513d7f5f7de12d1d4daf8c70a | [] | [] | https://huggingface.co/datasets/TeDriCS/tedrics-data/resolve/main/README.md | |
thepurpleowl | null | @article{codequeries2022,
title={Learning to Answer Semantic Queries over Code},
author={A, B, C, D, E, F},
journal={arXiv preprint arXiv:<.>},
year={2022}
} | CodeQueries Ideal setup. | false | 1 | false | thepurpleowl/codequeries | 2022-09-24T04:04:30.000Z | null | false | 6e4338ec1ba4ab7dc7d87c8893b0509da004145a | [] | [
"arxiv:2209.08372",
"annotations_creators:expert-generated",
"language:code",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"tags:neural modeling of code",
"tags:code question answering",
"tags:code semantic understanding",
"ta... | https://huggingface.co/datasets/thepurpleowl/codequeries/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- code
language_creators:
- found
multilinguality:
- monolingual
pretty_name: codequeries
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- neural modeling of code
- code question answering
- code semantic understanding
task_categories:
- question-answering
task_ids:
- extractive-qa
license:
- apache-2.0
---
# Dataset Card for CodeQueries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [How to use](#how-to-use)
- [Data Splits and Data Fields](#data-splits-and-data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Data](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code](https://github.com/thepurpleowl/codequeries-benchmark)
- **Paper:** [Learning to Answer Semantic Queries over Code](https://arxiv.org/abs/2209.08372)
### Dataset Summary
CodeQueries is a dataset to evaluate the ability of neural networks to answer semantic queries over code. Given a query and code, a model is expected to identify answer and supporting-fact spans in the code for the query. This is extractive question-answering over code, for questions with a large scope (entire files) and complexity including both single- and multi-hop reasoning. See the [paper](https://arxiv.org/abs/2209.08372) for more details.
### Supported Tasks and Leaderboards
Extractive question answering for code, semantic understanding of code.
### Languages
The dataset contains code context from `python` files.
## Dataset Structure
### How to Use
The dataset can be directly used with the huggingface datasets package. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:
```python
import datasets
# in addition to `twostep`, the other supported settings are <ideal/file_ideal/prefix>.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))
#OUTPUT:
{'query_name': 'Unused import',
'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py',
'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...',
'metadata': 'root',
'header': "['module', '___EOS___']",
'index': 0},
'answer_spans': [{'span': 'from glance.common import context',
'start_line': 19,
'start_column': 0,
'end_line': 19,
'end_column': 33}
],
'supporting_fact_spans': [],
'example_type': 1,
'single_hop': False,
'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...],
'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...],
'relevance_label': 1
}
```
### Data Splits and Data Fields
Detailed information on the data splits for proposed settings can be found in the paper.
In general, data splits in all the proposed settings have examples with the following fields -
```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- context_blocks (code blocks as context with metadata) [`prefix` setting doesn't have this field and `twostep` has `context_block`]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
- example_type (example type: 1 (positive) or 0 (negative))
- single_hop (True or False - for query type)
- subtokenized_input_sequence (example subtokens) [`prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (0 (not relevant) or 1 (relevant) - relevance label of a block) [only `twostep` setting has this field]
```
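The span metadata above (`start_line`/`start_column`/`end_line`/`end_column`) indexes into the file content. A minimal sketch of recovering the span text from that metadata follows; note that treating lines and columns as 0-indexed is an assumption of this sketch (verify against the actual data), and the toy `source` string is fabricated to mirror the earlier `Unused import` example.

```python
def extract_span(source: str, span_meta: dict) -> str:
    """Recover span text from line/column metadata (assumed 0-indexed)."""
    lines = source.splitlines()
    if span_meta["start_line"] == span_meta["end_line"]:
        line = lines[span_meta["start_line"]]
        return line[span_meta["start_column"]:span_meta["end_column"]]
    # Multi-line span: tail of first line, full middle lines, head of last line.
    parts = [lines[span_meta["start_line"]][span_meta["start_column"]:]]
    parts += lines[span_meta["start_line"] + 1:span_meta["end_line"]]
    parts.append(lines[span_meta["end_line"]][:span_meta["end_column"]])
    return "\n".join(parts)

# Toy file in which (0-indexed) line 19 holds the unused import.
source = "\n".join([f"# line {i}" for i in range(19)]
                   + ["from glance.common import context"])
span = {"span": "from glance.common import context",
        "start_line": 19, "start_column": 0,
        "end_line": 19, "end_column": 33}
assert extract_span(source, span) == span["span"]
```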
## Dataset Creation
The dataset is created using [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) as source for code contexts. To get semantic queries and corresponding answer/supporting-fact spans in ETH Py150 Open corpus files, CodeQL was used.
## Additional Information
### Licensing Information
The source code repositories used for preparing CodeQueries are based on the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) and are redistributable under the respective licenses. A Huggingface dataset for ETH Py150 Open is available [here](https://huggingface.co/datasets/eth_py150_open). The labeling prepared and provided by us as part of CodeQueries is released under the Apache-2.0 license.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2209.08372,
doi = {10.48550/ARXIV.2209.08372},
url = {https://arxiv.org/abs/2209.08372},
author = {Sahu, Surya Prakash and Mandal, Madhurima and Bharadwaj, Shikhar and Kanade, Aditya and Maniatis, Petros and Shevade, Shirish},
keywords = {Software Engineering (cs.SE), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Learning to Answer Semantic Queries over Code},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
albertvillanova | null | null | null | false | 1 | false | albertvillanova/tmp-10 | 2022-08-24T15:41:27.000Z | null | false | 6420a7628eb1cf05f5e24dd36501e47edc999a0a | [] | [
"language:ase",
"language:en"
] | https://huggingface.co/datasets/albertvillanova/tmp-10/resolve/main/README.md | ---
language:
- ase
- en
--- |
Jaren | null | null | null | false | 1 | false | Jaren/T5-dialogue-pretrain-data | 2022-08-30T15:01:24.000Z | null | false | 738036ce5d904fdf2509ce44cd1d5d63b25582fa | [] | [] | https://huggingface.co/datasets/Jaren/T5-dialogue-pretrain-data/resolve/main/README.md | This dataset is converted from duconv, durecdial, ecm, naturalconv, persona, tencent, kdconv, crosswoz,risawoz,diamante,restoration and LCCC-base 12 high quality datasets and is used for continue pretrain task for T5-pegasus in mengzi version.
|
kdwm | null | null | null | false | 1 | false | kdwm/weather-sentences | 2022-08-24T12:10:55.000Z | null | false | 73715a71e2f1d5eb20949bcadc921e7e32d97072 | [] | [
"license:mit"
] | https://huggingface.co/datasets/kdwm/weather-sentences/resolve/main/README.md | ---
license: mit
---
|
dyhsup | null | null | null | false | 1 | false | dyhsup/CPR | 2022-08-24T13:05:19.000Z | null | false | 969692674a1c5bbb1469682eda42d81fe5c8d64d | [] | [
"license:unknown"
] | https://huggingface.co/datasets/dyhsup/CPR/resolve/main/README.md | ---
license: unknown
---
|