false |
# Shellcode_IA32
___Shellcode_IA32___ is a dataset containing _20_ years of shellcodes from a variety of sources; it is the largest collection of shellcodes in assembly available to date.
This dataset consists of 3,200 examples of instructions in assembly language for _IA-32_ (the 32-bit version of the x86 Intel Architecture) from publicly available security exploits. We collected assembly programs used to generate shellcode from [exploit-db](https://www.exploit-db.com/shellcodes?platform=linux_x86) and from [shell-storm](http://shell-storm.org/shellcode/).
We enriched the dataset by adding examples of assembly programs for the _IA-32_ architecture from popular tutorials and books. This allowed us to understand how different authors and assembly experts comment and, thus, how to deal with the ambiguity of natural language in this specific context. Instructions collected from books and guidelines make up 10% of our dataset; the rest comes from real shellcodes.
Our focus is on Linux, the most common OS for security-critical network services. Accordingly, we added assembly instructions written with _Netwide Assembler_ (NASM) for Linux.
Each line of the _Shellcode\_IA32_ dataset represents a snippet-intent pair. The _snippet_ is a line or a combination of multiple lines of assembly code, built by following the NASM syntax. The _intent_ is a comment in the English language.
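For example, a snippet such as `xor eax, eax` might be paired with an intent such as `clear the eax register`; multi-line snippets map several instructions to a single comment.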
Further statistics on the dataset and a set of preliminary experiments performed with a neural machine translation (NMT) model are described in the following paper: [Shellcode_IA32: A Dataset for Automatic Shellcode Generation](https://arxiv.org/abs/2104.13100).
**Note**: This work was done in collaboration with the [DESSERT Lab](http://www.dessert.unina.it/).
The dataset is also hosted on the [DESSERT Lab Github](https://github.com/dessertlab/Shellcode_IA32).
Please consider citing our work:
```
@inproceedings{liguori-etal-2021-shellcode,
title = "{S}hellcode{\_}{IA}32: A Dataset for Automatic Shellcode Generation",
author = "Liguori, Pietro and
Al-Hossami, Erfan and
Cotroneo, Domenico and
Natella, Roberto and
Cukic, Bojan and
Shaikh, Samira",
booktitle = "Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.nlp4prog-1.7",
doi = "10.18653/v1/2021.nlp4prog-1.7",
pages = "58--64",
abstract = "We take the first step to address the task of automatically generating shellcodes, i.e., small pieces of code used as a payload in the exploitation of a software vulnerability, starting from natural language comments. We assemble and release a novel dataset (Shellcode{\_}IA32), consisting of challenging but common assembly instructions with their natural language descriptions. We experiment with standard methods in neural machine translation (NMT) to establish baseline performance levels on this task.",
}
```
|
false |
# Dataset Card for **cointegrated/ru-paraphrase-NMT-Leipzig**
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** https://habr.com/ru/post/564916/
- **Point of Contact:** [@cointegrated](https://huggingface.co/cointegrated)
### Dataset Summary
The dataset contains 1 million Russian sentences and their automatically generated paraphrases.
It was created by David Dale ([@cointegrated](https://huggingface.co/cointegrated)) by translating the `rus-ru_web-public_2019_1M` corpus from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) into English and back into Russian. A fraction of the resulting paraphrases are invalid, and should be filtered out.
The blogpost ["Перефразирование русских текстов: корпуса, модели, метрики"](https://habr.com/ru/post/564916/) provides a detailed description of the dataset and its properties.
The dataset can be loaded with the following code:
```python
import datasets
data = datasets.load_dataset(
'cointegrated/ru-paraphrase-NMT-Leipzig',
data_files={"train": "train.csv","val": "val.csv","test": "test.csv"},
)
```
Its output should look like
```
DatasetDict({
train: Dataset({
features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'],
num_rows: 980000
})
val: Dataset({
features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'],
num_rows: 10000
})
test: Dataset({
features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'],
num_rows: 10000
})
})
```
### Supported Tasks and Leaderboards
The dataset can be used to train and validate models for paraphrase generation or (if negative sampling is used) for paraphrase detection.
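A minimal sketch of one possible negative-sampling scheme for the detection setting (the `text1`/`text2`/`label` column names and the random-pairing strategy are illustrative, not part of the dataset; `data` is the `DatasetDict` loaded above):
```python
import random

def make_detection_pairs(split, seed=0):
    """Each original/paraphrase row yields a positive pair; the paraphrase
    of a random other row serves as the negative counterpart."""
    rng = random.Random(seed)
    pairs = []
    n = len(split)
    for i in range(n):
        row = split[i]
        pairs.append({"text1": row["original"], "text2": row["ru"], "label": 1})
        j = rng.randrange(n - 1)
        j = j if j < i else j + 1  # pick any index except i
        pairs.append({"text1": row["original"], "text2": split[j]["ru"], "label": 0})
    return pairs

detection_pairs = make_detection_pairs(data["val"])
```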
### Languages
Russian (main), English (auxiliary).
## Dataset Structure
### Data Instances
Data instances look like
```
{
"labse_sim": 0.93502015,
"chrf_sim": 0.4946451012684782,
"idx": 646422,
"ru": "О перспективах развития новых медиа-технологий в РФ расскажут на медиафоруме Енисея.",
"original": "Перспективы развития новых медиатехнологий в Российской Федерации обсудят участники медиафорума «Енисей.",
"en": "Prospects for the development of new media technologies in the Russian Federation will be discussed at the Yenisey Media Forum."
}
```
Here `original` is the original sentence, and `ru` is its machine-generated paraphrase.
### Data Fields
- `idx`: id of the instance in the original corpus
- `original`: the original sentence
- `en`: automatic translation of `original` to English
- `ru`: automatic translation of `en` back to Russian, i.e. a paraphrase of `original`
- `chrf_sim`: [ChrF++](https://huggingface.co/metrics/chrf) similarity of `original` and `ru`
- `labse_sim`: cosine similarity of [LaBSE](https://huggingface.co/cointegrated/LaBSE-en-ru) embeddings of `original` and `ru`
- `forward_entailment`: predicted probability that `original` entails `ru`
- `backward_entailment`: predicted probability that `ru` entails `original`
- `p_good`: predicted probability that `ru` and `original` have equivalent meaning
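For illustration, the two similarity fields could be recomputed roughly as follows (a sketch: `sacrebleu` and the `sentence-transformers/LaBSE` checkpoint are stand-ins here; the released scores were computed with [LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) and may differ in detail):
```python
import numpy as np
from sacrebleu.metrics import CHRF
from sentence_transformers import SentenceTransformer

chrf = CHRF(word_order=2)  # word_order=2 yields ChrF++
labse = SentenceTransformer("sentence-transformers/LaBSE")

def similarity(original: str, paraphrase: str):
    chrf_sim = chrf.sentence_score(paraphrase, [original]).score / 100
    emb = labse.encode([original, paraphrase], normalize_embeddings=True)
    labse_sim = float(np.dot(emb[0], emb[1]))
    return chrf_sim, labse_sim
```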
### Data Splits
Train – 980K, validation – 10K, test – 10K. The splits were generated randomly.
## Dataset Creation
### Curation Rationale
There are other Russian paraphrase corpora, but they have major drawbacks:
- The best known [corpus from paraphraser.ru 2016 contest](http://paraphraser.ru/download/) is rather small and covers only the News domain.
- [Opusparcus](https://huggingface.co/datasets/GEM/opusparcus), [ParaPhraserPlus](http://paraphraser.ru/download/), and [corpora of Tamara Zhordanija](https://github.com/tamriq/paraphrase) are noisy, i.e. a large proportion of sentence pairs in them have substantial difference in meaning.
- The Russian part of [TaPaCo](https://huggingface.co/datasets/tapaco) has very high lexical overlap in the sentence pairs; in other words, their paraphrases are not diverse enough.
The current corpus is generated with a dual objective: the paraphrases should be semantically as close as possible to the original sentences, while being lexically different from them. Back-translation with restricted vocabulary seems to achieve this goal often enough.
### Source Data
#### Initial Data Collection and Normalization
The `rus-ru_web-public_2019_1M` corpus from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) as is.
The process of its creation is described [in this paper](http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf):
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: *Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012*.
#### Automatic paraphrasing
The paraphrasing was carried out by translating the original sentence to English and then back to Russian.
The models [facebook/wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en) and [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) were used for translation.
To ensure that the back-translated texts are not identical to the original texts, the final decoder was prohibited from using token n-grams that appear in the original texts.
The code below implements the paraphrasing function.
```python
import torch
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")
model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-ru")
inverse_tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-ru-en")
inverse_model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-ru-en")
model.cuda();
inverse_model.cuda();
def paraphrase(text, gram=4, num_beams=5, **kwargs):
""" Generate a paraphrase using back translation.
Parameter `gram` denotes size of token n-grams of the original sentence that cannot appear in the paraphrase.
"""
input_ids = inverse_tokenizer.encode(text, return_tensors="pt")
with torch.no_grad():
outputs = inverse_model.generate(input_ids.to(inverse_model.device), num_beams=num_beams, **kwargs)
other_lang = inverse_tokenizer.decode(outputs[0], skip_special_tokens=True)
input_ids = input_ids[0, :-1].tolist()
bad_word_ids = [input_ids[i:(i+gram)] for i in range(len(input_ids)-gram)]
input_ids = tokenizer.encode(other_lang, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(input_ids.to(model.device), num_beams=num_beams, bad_words_ids=bad_word_ids, **kwargs)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
return decoded
```
The corpus was created by running the above `paraphrase` function on the original sentences with parameters `gram=3, num_beams=5, repetition_penalty=3.14, no_repeat_ngram_size=6`.
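For a single sentence, that call might look like this (the input string is an arbitrary example):
```python
paraphrase(
    "Перспективы развития новых медиатехнологий обсудят участники медиафорума.",
    gram=3, num_beams=5, repetition_penalty=3.14, no_repeat_ngram_size=6,
)
```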
### Annotations
#### Annotation process
The dataset was annotated by several automatic metrics:
- [ChrF++](https://huggingface.co/metrics/chrf) between `original` and `ru` sentences;
- cosine similarity between [LaBSE](https://huggingface.co/cointegrated/LaBSE-en-ru) embeddings of these sentences;
- forward and backward entailment probabilities predicted by the [rubert-base-cased-nli-twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) model;
- `p_good`, a metric aggregating the four metrics above into a single number. It is obtained with a logistic regression trained on 100 sentence pairs randomly chosen from the train set and manually labelled.
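A sketch of how such an aggregation could be reproduced (the two rows and the `is_good` judgement below are hypothetical placeholders for the 100 manually labelled pairs):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

feats = ["chrf_sim", "labse_sim", "forward_entailment", "backward_entailment"]
# Hypothetical stand-ins for the manually labelled pairs:
labelled = [
    {"chrf_sim": 0.49, "labse_sim": 0.94, "forward_entailment": 0.92,
     "backward_entailment": 0.85, "is_good": 1},
    {"chrf_sim": 0.21, "labse_sim": 0.55, "forward_entailment": 0.30,
     "backward_entailment": 0.25, "is_good": 0},
]
X = np.array([[row[f] for f in feats] for row in labelled])
y = np.array([row["is_good"] for row in labelled])
clf = LogisticRegression().fit(X, y)
p_good = clf.predict_proba(X)[:, 1]  # probability of "equivalent meaning"
```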
#### Who are the annotators?
Human annotation was involved only for a small subset used to train the model for `p_good`. It was conducted by the dataset author, @cointegrated.
### Personal and Sensitive Information
The dataset is not known to contain any personal or sensitive information.
The sources and processes of original data collection are described at https://wortschatz.uni-leipzig.de/en/download.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset may enable the creation of paraphrasing systems that can be used both for "good" purposes (such as assisting writers or augmenting text datasets) and for "bad" purposes (such as disguising plagiarism). The authors are not responsible for any uses of the dataset.
### Discussion of Biases
The dataset may inherit some of the biases of [the underlying Leipzig web corpus](https://wortschatz.uni-leipzig.de/en/download) or the neural machine translation models ([1](https://huggingface.co/facebook/wmt19-ru-en), [2](https://huggingface.co/facebook/wmt19-en-ru)) with which it was generated.
### Other Known Limitations
Most of the paraphrases in the dataset are valid (by a rough estimate, at least 80%). However, some sentence pairs have faults:
- Named entities are often spelled in different ways (e.g. `"Джейкоб" -> "Яков"`) or even replaced with other entities (e.g. `"Оймякон" -> "Оймянск"` or `"Верхоянск" -> "Тольятти"`).
- Sometimes the meaning of words or phrases changes significantly, e.g. `"полустанок" -> "полумашина"`, or `"были по колено в грязи" -> "лежали на коленях в иле"`.
- Sometimes the syntax is changed in a meaning-altering way, e.g. `"Интеллектуальное преимущество Вавилова и его соратников над демагогами из рядов сторонников новой агробиологии разительно очевидно." -> "Интеллектуал Вавилов и его приспешники в новой аграрной биологии явно превзошли демогогов."`.
- Grammatical properties that are present in Russian morphology but absent in English, such as gender, are often lost, e.g. `"Я не хотела тебя пугать" -> "Я не хотел пугать вас"`.
The field `labse_sim` reflects semantic similarity between the sentences, and it can be used to filter out at least some poor paraphrases.
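For instance, reusing the `data` object from the loading snippet above, a simple quality filter might look like this (the 0.8 threshold is an arbitrary illustration, not a recommended value):
```python
filtered = data["train"].filter(lambda row: row["labse_sim"] >= 0.8)
```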
## Additional Information
### Dataset Curators
The dataset was created by [David Dale](https://daviddale.ru/en), a.k.a. [@cointegrated](https://huggingface.co/cointegrated).
### Licensing Information
This corpus, as well as the original Leipzig corpora, are licensed under [CC BY](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
[This blog post](https://habr.com/ru/post/564916/) can be cited:
```
@misc{dale_paraphrasing_2021,
author = "Dale, David",
title = "Перефразирование русских текстов: корпуса, модели, метрики",
editor = "habr.com",
url = "https://habr.com/ru/post/564916/",
month = {June},
year = {2021},
note = {[Online; posted 28-June-2021]},
}
```
### Contributions
Thanks to [@avidale](https://github.com/avidale) for adding this dataset. |
false |
## Dataset Description
- **Homepage:** https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/
### Dataset Summary
The M-AILABS Speech Dataset is the first large dataset that we are providing free-of-charge, freely usable as training data for speech recognition and speech synthesis.
Most of the data is based on LibriVox and Project Gutenberg. The training data consist of nearly a thousand hours of audio and text files in a prepared format.
A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds; the approximate total length for each language is reported in the respective info.txt files.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain – except for Ukrainian.
Ukrainian audio was kindly provided either by Nash Format or Gwara Media for machine learning purposes only (please check the data info.txt files for details).
### Languages
French
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called `audio`, and its transcription, called `sentence`.
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- sentence: The sentence the user was prompted to speak
### Data Splits
The speech material has not been subdivided into portions; everything is in the "train" split.
The train split consists of 82825 audio clips and the related sentences.
### Contributions
[@gigant](https://huggingface.co/gigant) added this dataset. |
false |
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com)
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
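As a quick illustration of the metric, WER can be computed with the `jiwer` package (a sketch on a made-up hypothesis, not real model output):
```python
import jiwer

reference = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
hypothesis = "A MAN SAID TO THE UNIVERSE SIR I EXISTS"
print(jiwer.wer(reference, hypothesis))  # 0.125: 1 substitution over 8 reference words
```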
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
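A small sketch of the recommended access pattern for the `audio` field described above:
```python
from datasets import load_dataset

ds = load_dataset("librispeech_asr", "clean", split="validation")
sample = ds[0]                    # decodes and resamples only this one file
array = sample["audio"]["array"]  # NumPy array at sample["audio"]["sampling_rate"]
# By contrast, ds["audio"][0] would first decode every file in the split.
```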
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
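For example, the individual training portions can be loaded by config and split name (a sketch assuming the standard `librispeech_asr` loader's split identifiers):
```python
from datasets import load_dataset

train_100 = load_dataset("librispeech_asr", "clean", split="train.100")
train_other = load_dataset("librispeech_asr", "other", split="train.500")
```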
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
true | # AutoNLP Dataset for project: vaccinfaq
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project vaccinfaq.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 6,
"text": "What je naam?"
},
{
"target": 6,
"text": "Hoe heet je?"
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=181, names=['chitchat_ask_bye', 'chitchat_ask_hi', 'chitchat_ask_hi_de', 'chitchat_ask_hi_en', 'chitchat_ask_hi_fr', 'chitchat_ask_hoe_gaat_het', 'chitchat_ask_name', 'chitchat_ask_thanks', 'faq_ask_aantal_gevaccineerd', 'faq_ask_aantal_gevaccineerd_wereldwijd', 'faq_ask_afspraak_afzeggen', 'faq_ask_afspraak_gemist', 'faq_ask_algemeen_info', 'faq_ask_allergisch_na_vaccinatie', 'faq_ask_alternatieve_medicatie', 'faq_ask_andere_vaccins', 'faq_ask_astrazeneca', 'faq_ask_astrazeneca_bij_ouderen', 'faq_ask_astrazeneca_bloedklonters', 'faq_ask_astrazeneca_prik_2', 'faq_ask_attest', 'faq_ask_autisme_na_vaccinatie', 'faq_ask_auto-immuun', 'faq_ask_begeleiding', 'faq_ask_beschermen', 'faq_ask_beschermingsduur', 'faq_ask_beschermingspercentage', 'faq_ask_besmetten_na_vaccin', 'faq_ask_betalen_voor_vaccin', 'faq_ask_betrouwbaar', 'faq_ask_betrouwbare_bronnen', 'faq_ask_bijsluiter', 'faq_ask_bijwerking_AZ', 'faq_ask_bijwerking_JJ', 'faq_ask_bijwerking_algemeen', 'faq_ask_bijwerking_lange_termijn', 'faq_ask_bijwerking_moderna', 'faq_ask_bijwerking_pfizer', 'faq_ask_bloed_geven', 'faq_ask_borstvoeding', 'faq_ask_buitenlander', 'faq_ask_chronisch_ziek', 'faq_ask_combi', 'faq_ask_complottheorie', 'faq_ask_complottheorie_5G', 'faq_ask_complottheorie_Bill_Gates', 'faq_ask_contra_ind', 'faq_ask_corona_is_griep', 'faq_ask_corona_vermijden', 'faq_ask_covid_door_vaccin', 'faq_ask_curevac', 'faq_ask_derde_prik', 'faq_ask_dna', 'faq_ask_duur_vaccinatie', 'faq_ask_eerst_weigeren', 'faq_ask_eerste_prik_buitenland', 'faq_ask_essentieel_beroep', 'faq_ask_experimenteel', 'faq_ask_foetus', 'faq_ask_geen_antwoord', 'faq_ask_geen_risicopatient', 'faq_ask_geen_uitnodiging', 'faq_ask_gestockeerd', 'faq_ask_gezondheidstoestand_gekend', 'faq_ask_gif_in_vaccin', 'faq_ask_goedkeuring', 'faq_ask_groepsimmuniteit', 'faq_ask_hartspierontsteking', 'faq_ask_hersenziekte', 'faq_ask_hoe_dodelijk', 'faq_ask_hoe_weet_overheid', 'faq_ask_hoeveel_dosissen', 'faq_ask_huisarts', 'faq_ask_huisdieren', 'faq_ask_iedereen', 'faq_ask_in_vaccin', 'faq_ask_info_vaccins', 'faq_ask_janssen', 'faq_ask_janssen_een_dosis', 'faq_ask_jong_en_gezond', 'faq_ask_keuze', 'faq_ask_keuze_vaccinatiecentrum', 'faq_ask_kinderen', 'faq_ask_kosjer_halal', 'faq_ask_leveringen', 'faq_ask_logistiek', 'faq_ask_logistiek_veilig', 'faq_ask_magnetisch', 'faq_ask_man_vrouw_verschillen', 'faq_ask_mantelzorger', 'faq_ask_maximaal_een_dosis', 'faq_ask_meer_bijwerkingen_tweede_dosis', 'faq_ask_minder_mobiel', 'faq_ask_moderna', 'faq_ask_mondmasker', 'faq_ask_motiveren', 'faq_ask_mrna_vs_andere_vaccins', 'faq_ask_naaldangst', 'faq_ask_nadelen', 'faq_ask_nuchter', 'faq_ask_ontwikkeling', 'faq_ask_onvruchtbaar', 'faq_ask_oplopen_vaccinatie', 'faq_ask_pfizer', 'faq_ask_phishing', 'faq_ask_pijnstiller', 'faq_ask_planning_eerstelijnszorg', 'faq_ask_planning_ouderen', 'faq_ask_positieve_test_na_vaccin', 'faq_ask_prioritaire_gropen', 'faq_ask_privacy', 'faq_ask_probleem_registratie', 'faq_ask_problemen_uitnodiging', 'faq_ask_quarantaine', 'faq_ask_qvax_probleem', 'faq_ask_reproductiegetal', 'faq_ask_risicopatient', 'faq_ask_risicopatient_diabetes', 'faq_ask_risicopatient_hartvaat', 'faq_ask_risicopatient_immuunziekte', 'faq_ask_risicopatient_kanker', 'faq_ask_risicopatient_luchtwegaandoening', 'faq_ask_smaakverlies', 'faq_ask_snel_ontwikkeld', 'faq_ask_sneller_aan_de_beurt', 'faq_ask_taxi', 'faq_ask_test_voor_vaccin', 'faq_ask_testen', 'faq_ask_tijd_tot_tweede_dosis', 'faq_ask_timing_andere_vaccins', 'faq_ask_trage_start', 
'faq_ask_tweede_dosis_afspraak', 'faq_ask_tweede_dosis_vervroegen', 'faq_ask_twijfel_bijwerkingen', 'faq_ask_twijfel_effectiviteit', 'faq_ask_twijfel_inhoud', 'faq_ask_twijfel_ivm_vaccinatie', 'faq_ask_twijfel_noodzaak', 'faq_ask_twijfel_ontwikkeling', 'faq_ask_twijfel_praktisch', 'faq_ask_twijfel_vaccins_zelf', 'faq_ask_twijfel_vrijheid', 'faq_ask_uit_flacon', 'faq_ask_uitnodiging_afspraak_kwijt', 'faq_ask_uitnodiging_na_vaccinatie', 'faq_ask_vaccin_doorgeven', 'faq_ask_vaccin_immuunsysteem', 'faq_ask_vaccin_variant', 'faq_ask_vaccinatiecentrum', 'faq_ask_vaccine_covid_gehad', 'faq_ask_vaccine_covid_gehad_effect', 'faq_ask_vakantie', 'faq_ask_veelgestelde_vragen', 'faq_ask_vegan', 'faq_ask_verplicht', 'faq_ask_verschillen', 'faq_ask_vrijwillig_Janssen', 'faq_ask_vrijwilliger', 'faq_ask_waar_en_wanneer', 'faq_ask_waarom', 'faq_ask_waarom_niet_verplicht', 'faq_ask_waarom_ouderen_eerst', 'faq_ask_waarom_twee_prikken', 'faq_ask_waarom_twijfel', 'faq_ask_wanneer_algemene_bevolking', 'faq_ask_wanneer_iedereen_gevaccineerd', 'faq_ask_wat_is_corona', 'faq_ask_wat_is_rna', 'faq_ask_wat_is_vaccin', 'faq_ask_wat_na_vaccinatie', 'faq_ask_welk_vaccin_krijg_ik', 'faq_ask_welke_vaccin', 'faq_ask_wie_ben_ik', 'faq_ask_wie_doet_inenting', 'faq_ask_wie_is_risicopatient', 'faq_ask_wie_nu', 'faq_ask_wilsonbekwaam', 'faq_ask_zwanger', 'get_started', 'nlu_fallback', 'test'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Data Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 11651 |
| valid | 1267 |
|
false |
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Evaluation](#dataset-evaluation)
- [Additional Information](#additional-information)
## Dataset Description
We provide train and test datasets for sentential paraphrase detection, as well as BERT-based models, for the Armenian language.
### Dataset Summary
The sentences in the dataset are taken from [Hetq](https://hetq.am/) and [Panarmenian](http://www.panarmenian.net/) news articles. To generate paraphrase for the sentences, we used back translation from Armenian to English. We repeated the step twice, after which the generated paraphrases were manually reviewed. Invalid sentences were filtered out, while the rest were labelled as either paraphrase, near paraphrase or non-paraphrase. Test examples were reviewed by 3 different annotators. In addition, to increase the number of non-paraphrase pairs, we padded the dataset with automatically generated negative examples, including pairs of consecutive sentences and random pairs.
## Dataset Structure
Each row consists of 2 sentences and their label. These sentences were labelled as either paraphrase, near paraphrase or non-paraphrase (with 1, 0, -1 labels respectively). The sentences are divided into train and test sets.
|Number of examples|Total|Paraphrase|Non-paraphrase (near paraphrase)|
|:-- | :---: | :---: | :---: |
|Train | 4233 |1339 |2683 (211) |
|Test | 1682 |1021 |448 (213) |
### Dataset Evaluation
We finetuned Multilingual BERT on several training sets, including the proposed ARPA dataset, and evaluated their performance on our test set. During training and
evaluation, near paraphrase and non-paraphrase pairs were combined into one class. The results are provided below:
|BERT Model | Train set | F1 | Acc. |
|:-- | :---: | :---: | :---: |
|Multilingual BERT | ARPA train set| 84.27| 78.06|
|Multilingual BERT | Paraphraser.ru train set machine-translated into Armenian | 83.81 | 77.09 |
|Multilingual BERT | MRPC train set machine-translated into Armenian | 80.07 | 69.87 |
|Multilingual BERT | All of the above combined | 84 |77.6 |
## Additional Information
The model trained on ARPA is available for use, and can be downloaded using this [link](https://drive.google.com/uc?id=14owW5kkZ1JiNa6P-676e-QIj8m8i5e_8).
For more details about the models and dataset construction, refer to the [paper](https://arxiv.org/pdf/2009.12615).
|
false |
# Dataset Card for Monolingual-Quechua-IIC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://llamacha.pe](https://llamacha.pe)
- **Paper:** [Introducing QuBERT: A Large Monolingual Corpus and BERT Model for
Southern Quechua](https://aclanthology.org/2022.deeplo-1.1.pdf)
- **Point of Contact:** [Rodolfo Zevallos](mailto:rodolfojoel.zevallos@upf.edu)
- **Size of downloaded dataset files:** 373.28 MB
### Dataset Summary
We present Monolingual-Quechua-IIC, a monolingual corpus of Southern Quechua, which can be used to build language models based on the Transformer architecture. This corpus also includes the Wiki and OSCAR corpora. We used this corpus to build Llama-RoBERTa-Quechua, the first Transformer-based language model for Southern Quechua.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Southern Quechua
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Apache-2.0
### Citation Information
```
@inproceedings{zevallos2022introducing,
title={Introducing QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua},
author={Zevallos, Rodolfo and Ortega, John and Chen, William and Castro, Richard and Bel, Nuria and Toshio, Cesar and Venturas, Renzo and Aradiel, Hilario and Melgarejo, Nelsi},
booktitle={Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing},
pages={1--13},
year={2022}
}
```
### Contributions
Thanks to [@rjzevallos](https://github.com/rjzevallos) for adding this dataset.
|
true | |
true | # References
- [alper bayram](https://github.com/alperbayram)
|
true | # AutoNLP Dataset for project: user-review-classification
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project user-review-classification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "awful",
"target": 3
},
{
"text": "it says you can only read three stories a month and yet everything i clicked on was blank and now it[...]",
"target": 2
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=4, names=['CONTENT', 'INTERFACE', 'SUBSCRIPTION', 'USER_EXPERIENCE'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Data Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 275 |
| valid | 71 |
|
false |
# Dataset Card for common_language
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5036977
- **Repository:** https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonLanguage
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.
### Supported Tasks and Leaderboards
The baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage):
https://github.com/speechbrain/speechbrain
### Languages
List of included languages:
```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file, and its label `language`. Additional fields include `age`, `client_id`, `gender` and `sentence`.
```python
{
'client_id': 'itln_trn_sp_175',
'path': '/path/common_voice_kpd/Italian/train/itln_trn_sp_175/common_voice_it_18279446.wav',
'sentence': 'Con gli studenti è leggermente simile.',
'age': 'not_defined',
'gender': 'not_defined',
'language': 22
}
```
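Since `language` is a `ClassLabel`, the integer can be mapped back to a language name with `int2str`; in the alphabetical list above, index 22 corresponds to Italian, matching the instance shown (a sketch; the load call may additionally require a config name):
```python
from datasets import load_dataset

ds = load_dataset("common_language", split="train")  # add a config name here if the loader requires one
print(ds.features["language"].int2str(22))  # 'Italian'
```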
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`language` (`ClassLabel`): The language of the recording (see the `Languages` section above)
`sentence` (`string`): The sentence the user was prompted to speak
`age` (`string`): The age of the speaker.
`gender` (`string`): The gender of the speaker
### Data Splits
The dataset is already balanced and split into train, dev (validation) and test sets.
| Name | Train | Dev | Test |
|:---------------------------------:|:------:|:------:|:-----:|
| **# of utterances** | 177552 | 47104 | 47704 |
| **# unique speakers** | 11189 | 1297 | 1322 |
| **Total duration, hr** | 30.04 | 7.53 | 7.53 |
| **Min duration, sec** | 0.86 | 0.98 | 0.89 |
| **Mean duration, sec** | 4.87 | 4.61 | 4.55 |
| **Max duration, sec** | 21.72 | 105.67 | 29.83 |
| **Duration per language, min** | ~40 | ~10 | ~10 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
The Mongolian and Ukrainian languages are spelled as "Mangolian" and "Ukranian" in this version of the dataset.
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@dataset{ganesh_sinisetty_2021_5036977,
author = {Ganesh Sinisetty and
Pavlo Ruban and
Oleksandr Dymov and
Mirco Ravanelli},
title = {CommonLanguage},
month = jun,
year = 2021,
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5036977},
url = {https://doi.org/10.5281/zenodo.5036977}
}
```
### Contributions
Thanks to [@anton-l](https://github.com/anton-l) for adding this dataset.
|
false |
# PIE synthetic dataset
Repo: https://github.com/awasthiabhijeet/PIE
Paper: https://aclanthology.org/D19-1435.pdf |
true | # AutoNLP Dataset for project: antisemitism-2
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project antisemitism-2.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 0,
"text": "Jew pods"
},
{
"target": 1,
"text": "@PotatoLaydee He's a Jew...."
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=2, names=['0', '1'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Data Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3161 |
| valid | 791 |
|
false |
# AspectEmo
## Description
AspectEmo Corpus is an extended version of the publicly available PolEmo 2.0 corpus of Polish customer reviews, used in many projects on the use of different methods in sentiment analysis. The AspectEmo corpus consists of four subcorpora, each containing online customer reviews from the following domains: school, medicine, hotels, and products. All documents are annotated at the aspect level with six sentiment categories: strong negative (minus_m), weak negative (minus_s), neutral (zero), weak positive (plus_s), strong positive (plus_m), and ambiguous (amb).
## Versions
| version | config name | description | default | notes |
|---------|-------------|--------------------------------|---------|------------------|
| 1.0 | "1.0" | The version used in the paper. | YES | |
| 2.0 | - | Some bugs fixed. | NO | work in progress |
## Tasks (input, output and metrics)
Aspect-based sentiment analysis (ABSA) is a text analysis method that categorizes data by aspect and identifies the sentiment assigned to each aspect. It is a sequence tagging task.
**Input** (*tokens* column): sequence of tokens
**Output** (*labels* column): sequence of predicted token classes ("O" + 6 possible classes: strong negative (a_minus_m), weak negative (a_minus_s), neutral (a_zero), weak positive (a_plus_s), strong positive (a_plus_m), ambiguous (a_amb))
**Domain**: school, medicine, hotels and products
**Measurements**: F1-score (seqeval)
**Example**:
Input: `['Dużo', 'wymaga', ',', 'ale', 'bardzo', 'uczciwy', 'i', 'przyjazny', 'studentom', '.', 'Warto', 'chodzić', 'na', 'konsultacje', '.', 'Docenia', 'postępy', 'i', 'zaangażowanie', '.', 'Polecam', '.']`
Input (translated by DeepL): `'Demands a lot , but very honest and student friendly . Worth going to consultations . Appreciates progress and commitment . I recommend .'`
Output: `['O', 'a_plus_s', 'O', 'O', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'a_zero', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O']`
## Data splits
| Subset | Cardinality (sentences) |
|:-------|------------------------:|
| train | 1173 |
| val | 0 |
| test | 292 |
## Class distribution (without "O")
| Class | train | validation | test |
|:----------|--------:|-------------:|-------:|
| a_plus_m | 0.359 | - | 0.369 |
| a_minus_m | 0.305 | - | 0.377 |
| a_zero | 0.234 | - | 0.182 |
| a_minus_s | 0.037 | - | 0.024 |
| a_plus_s | 0.037 | - | 0.015 |
| a_amb | 0.027 | - | 0.033 |
## Citation
```
@misc{11321/849,
title = {{AspectEmo} 1.0: Multi-Domain Corpus of Consumer Reviews for Aspect-Based Sentiment Analysis},
author = {Koco{\'n}, Jan and Radom, Jarema and Kaczmarz-Wawryk, Ewa and Wabnic, Kamil and Zaj{\c a}czkowska, Ada and Za{\'s}ko-Zieli{\'n}ska, Monika},
url = {http://hdl.handle.net/11321/849},
note = {{CLARIN}-{PL} digital repository},
copyright = {The {MIT} License},
year = {2021}
}
```
## License
```
The MIT License
```
## Links
[HuggingFace](https://huggingface.co/datasets/clarin-pl/aspectemo)
[Source](https://clarin-pl.eu/dspace/handle/11321/849)
[Paper](https://sentic.net/sentire2021kocon.pdf)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("clarin-pl/aspectemo")
pprint(dataset['train'][20])
# {'labels': [0, 4, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 3, 0, 5, 0, 0, 0, 0, 0, 0],
# 'tokens': ['Dużo',
# 'wymaga',
# ',',
# 'ale',
# 'bardzo',
# 'uczciwy',
# 'i',
# 'przyjazny',
# 'studentom',
# '.',
# 'Warto',
# 'chodzić',
# 'na',
# 'konsultacje',
# '.',
# 'Docenia',
# 'postępy',
# 'i',
# 'zaangażowanie',
# '.',
# 'Polecam',
# '.']}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
dataset = load_dataset("clarin-pl/aspectemo")
references = dataset["test"]["labels"]
# generate random predictions
predictions = [
[
random.randrange(dataset["train"].features["labels"].feature.num_classes)
for _ in range(len(labels))
]
for labels in references
]
# transform to original names of labels
references_named = [
[dataset["train"].features["labels"].feature.names[label] for label in labels]
for labels in references
]
predictions_named = [
[dataset["train"].features["labels"].feature.names[label] for label in labels]
for labels in predictions
]
# transform to BILOU scheme
references_named = [
[f"U-{label}" if label != "O" else label for label in labels]
for labels in references_named
]
predictions_named = [
[f"U-{label}" if label != "O" else label for label in labels]
for labels in predictions_named
]
# utilise seqeval to evaluate
seqeval = load_metric("seqeval")
seqeval_score = seqeval.compute(
predictions=predictions_named,
references=references_named,
scheme="BILOU",
mode="strict",
)
pprint(seqeval_score)
# {'a_amb': {'f1': 0.00597237775289287,
# 'number': 91,
# 'precision': 0.003037782418834251,
# 'recall': 0.17582417582417584},
# 'a_minus_m': {'f1': 0.048306148055207034,
# 'number': 1039,
# 'precision': 0.0288551620760727,
# 'recall': 0.1482194417709336},
# 'a_minus_s': {'f1': 0.004682997118155619,
# 'number': 67,
# 'precision': 0.0023701002734731083,
# 'recall': 0.19402985074626866},
# 'a_plus_m': {'f1': 0.045933014354066985,
# 'number': 1015,
# 'precision': 0.027402473834443386,
# 'recall': 0.14187192118226602},
# 'a_plus_s': {'f1': 0.0021750951604132683,
# 'number': 41,
# 'precision': 0.001095690284879474,
# 'recall': 0.14634146341463414},
# 'a_zero': {'f1': 0.025159400310184387,
# 'number': 501,
# 'precision': 0.013768389287061486,
# 'recall': 0.14570858283433133},
# 'overall_accuracy': 0.13970115681233933,
# 'overall_f1': 0.02328248652368391,
# 'overall_precision': 0.012639312620633834,
# 'overall_recall': 0.14742193173565724}
``` |
false |
# nkjp-pos
## Description
NKJP-POS is a part of the National Corpus of Polish (*Narodowy Korpus Języka Polskiego*). Its objective is part-of-speech tagging, e.g. nouns, verbs, adjectives, adverbs, etc. During the creation of the corpus, texts from various sources, covering many domains and genres, were annotated by humans.
## Tasks (input, output and metrics)
Part-of-speech tagging (POS tagging) - tagging words in text with their corresponding part of speech.
**Input** (*tokens* column): sequence of tokens
**Output** (*pos_tags* column): sequence of predicted token classes (35 possible classes, described in detail in the annotation guidelines)
**Measurements**: F1-score (seqeval)
**Example**:
Input: `['Zarejestruj', 'się', 'jako', 'bezrobotny', '.']`
Input (translated by DeepL): `Register as unemployed.`
Output: `['impt', 'qub', 'conj', 'subst', 'interp']`
## Data splits
| Subset | Cardinality (sentences) |
| ----------- | ----------------------: |
| train | 78219 |
| dev | 0 |
| test | 7444 |
## Class distribution
| Class | train | dev | test |
|:--------|--------:|------:|--------:|
| subst | 0.27345 | - | 0.27656 |
| interp | 0.18101 | - | 0.17944 |
| adj | 0.10611 | - | 0.10919 |
| prep | 0.09567 | - | 0.09547 |
| qub | 0.05670 | - | 0.05491 |
| fin | 0.04939 | - | 0.04648 |
| praet | 0.04409 | - | 0.04348 |
| conj | 0.03711 | - | 0.03724 |
| adv | 0.03512 | - | 0.03333 |
| inf | 0.01591 | - | 0.01547 |
| comp | 0.01476 | - | 0.01439 |
| num | 0.01322 | - | 0.01436 |
| ppron3 | 0.01111 | - | 0.01018 |
| ppas | 0.01086 | - | 0.01085 |
| ger | 0.00961 | - | 0.01050 |
| brev | 0.00856 | - | 0.01181 |
| ppron12 | 0.00670 | - | 0.00665 |
| aglt | 0.00629 | - | 0.00602 |
| pred | 0.00539 | - | 0.00540 |
| pact | 0.00454 | - | 0.00452 |
| bedzie | 0.00229 | - | 0.00243 |
| pcon | 0.00218 | - | 0.00189 |
| impt | 0.00203 | - | 0.00226 |
| siebie | 0.00177 | - | 0.00158 |
| imps | 0.00174 | - | 0.00177 |
| interj | 0.00131 | - | 0.00102 |
| xxx | 0.00070 | - | 0.00048 |
| adjp | 0.00069 | - | 0.00065 |
| winien | 0.00068 | - | 0.00057 |
| adja | 0.00048 | - | 0.00058 |
| pant | 0.00012 | - | 0.00018 |
| burk | 0.00011 | - | 0.00006 |
| numcol | 0.00011 | - | 0.00013 |
| depr | 0.00010 | - | 0.00004 |
| adjc | 0.00007 | - | 0.00008 |
## Citation
```
@book{przepiorkowski_narodowy_2012,
title = {Narodowy korpus języka polskiego},
isbn = {978-83-01-16700-4},
language = {pl},
publisher = {Wydawnictwo Naukowe PWN},
editor = {Przepiórkowski, Adam and Bańko, Mirosław and Górski, Rafał L. and Lewandowska-Tomaszczyk, Barbara},
year = {2012}
}
```
## License
```
GNU GPL v.3
```
## Links
[HuggingFace](https://huggingface.co/datasets/clarin-pl/nkjp-pos)
[Source](http://clip.ipipan.waw.pl/NationalCorpusOfPolish)
[Paper](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("clarin-pl/nkjp-pos")
pprint(dataset['train'][5000])
# {'id': '130-2-900005_morph_49.49-s',
# 'pos_tags': [16, 4, 3, 30, 12, 18, 3, 16, 14, 6, 14, 26, 1, 30, 12],
# 'tokens': ['Najwyraźniej',
# 'źle',
# 'ocenił',
# 'odległość',
# ',',
# 'bo',
# 'zderzył',
# 'się',
# 'z',
# 'jadącą',
# 'z',
# 'naprzeciwka',
# 'ciężarową',
# 'scanią',
# '.']}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
dataset = load_dataset("clarin-pl/nkjp-pos")
references = dataset["test"]["pos_tags"]
# generate random predictions
predictions = [
[
random.randrange(dataset["train"].features["pos_tags"].feature.num_classes)
for _ in range(len(labels))
]
for labels in references
]
# transform to original names of labels
references_named = [
[dataset["train"].features["pos_tags"].feature.names[label] for label in labels]
for labels in references
]
predictions_named = [
[dataset["train"].features["pos_tags"].feature.names[label] for label in labels]
for labels in predictions
]
# transform to BILOU scheme
references_named = [
[f"U-{label}" if label != "O" else label for label in labels]
for labels in references_named
]
predictions_named = [
[f"U-{label}" if label != "O" else label for label in labels]
for labels in predictions_named
]
# utilise seqeval to evaluate
seqeval = load_metric("seqeval")
seqeval_score = seqeval.compute(
predictions=predictions_named,
references=references_named,
scheme="BILOU",
mode="strict",
)
pprint(seqeval_score, depth=1)
# {'adj': {...},
# 'adja': {...},
# 'adjc': {...},
# 'adjp': {...},
# 'adv': {...},
# 'aglt': {...},
# 'bedzie': {...},
# 'brev': {...},
# 'burk': {...},
# 'comp': {...},
# 'conj': {...},
# 'depr': {...},
# 'fin': {...},
# 'ger': {...},
# 'imps': {...},
# 'impt': {...},
# 'inf': {...},
# 'interj': {...},
# 'interp': {...},
# 'num': {...},
# 'numcol': {...},
# 'overall_accuracy': 0.027855061488566583,
# 'overall_f1': 0.027855061488566583,
# 'overall_precision': 0.027855061488566583,
# 'overall_recall': 0.027855061488566583,
# 'pact': {...},
# 'pant': {...},
# 'pcon': {...},
# 'ppas': {...},
# 'ppron12': {...},
# 'ppron3': {...},
# 'praet': {...},
# 'pred': {...},
# 'prep': {...},
# 'qub': {...},
# 'siebie': {...},
# 'subst': {...},
# 'winien': {...},
# 'xxx': {...}}
``` |
true | The COPA-HR dataset (Choice of plausible alternatives in Croatian) is a translation
of the English COPA dataset (https://people.ict.usc.edu/~gordon/copa.html) by following the
XCOPA dataset translation methodology (https://arxiv.org/abs/2005.00333). The dataset consists of 1000 premises
(My body cast a shadow over the grass), each paired with a question (What is the cause?) and two choices
(The sun was rising; The grass was cut), with a label encoding which of the choices is more plausible
according to the annotator or translator (The sun was rising).
The dataset is split into 400 training samples, 100 validation samples, and 500 test samples. It includes the
following features: 'premise', 'choice1', 'choice2', 'label', 'question', 'changed' (boolean).
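A minimal loading sketch (the Hub identifier `classla/copa_hr` and the label-to-choice convention below are assumptions, not taken from this card):
```python
from datasets import load_dataset

# Assumed Hub identifier; adjust to the actual dataset path.
copa_hr = load_dataset("classla/copa_hr")

sample = copa_hr["train"][0]
# Assuming the usual COPA convention: label 0 selects choice1, label 1 selects choice2.
more_plausible = sample["choice1"] if sample["label"] == 0 else sample["choice2"]
print(sample["premise"], "|", sample["question"], "->", more_plausible)
```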
If you use the dataset in your work, please cite
```
@article{DBLP:journals/corr/abs-2104-09243,
author = {Nikola Ljube\v{s}i\'{c} and
Davor Lauc},
title = {BERTi{\'{c}} - The Transformer Language Model for Bosnian, Croatian,
Montenegrin and Serbian},
journal = {CoRR},
volume = {abs/2104.09243},
year = {2021},
url = {https://arxiv.org/abs/2104.09243},
archivePrefix = {arXiv},
}
``` |
false | The dataset contains 6273 training samples, 762 validation samples and 749 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'),
list of tokens ('tokens'), list of normalised word forms ('norms'), list of lemmas ('lemmas'),
list of MULTEXT-East tags ('xpos\_tags'), list of morphological features ('feats'),
and list of UPOS tags ('upos\_tags'), which are encoded as class labels. |
false | This dataset is based on 3,748 Serbian tweets that were segmented into sentences, tokens, and annotated with normalized forms, lemmas, MULTEXT-East tags (XPOS), UPOS tags and morphological features, and named entities.
The dataset contains 5462 training samples (sentences), 711 validation samples and 725 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'),
list of tokens ('tokens'), list of normalised tokens ('norms'), list of lemmas ('lemmas'), list of UPOS tags ('upos\_tags'),
list of MULTEXT-East tags ('xpos\_tags'), list of morphological features ('feats'),
and list of named entity IOB tags ('iob\_tags'), which are encoded as class labels.
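A minimal access sketch, under the assumption that the dataset is published on the Hub (the identifier `classla/reldi_sr` is hypothetical; the card does not name a path):
```python
from datasets import load_dataset

# Hypothetical Hub identifier; the card does not name the dataset path.
ds = load_dataset("classla/reldi_sr", split="train")

ex = ds[0]
# UPOS tags are encoded as class labels; decode them via the features metadata.
upos_names = ds.features["upos_tags"].feature.names
for tok, norm, lemma, upos in zip(ex["tokens"], ex["norms"], ex["lemmas"], ex["upos_tags"]):
    print(tok, norm, lemma, upos_names[upos])
```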
If you are using this dataset in your research, please cite the following paper:
```
@article{Miličević_Ljubešić_2016,
title={Tviterasi, tviteraši or twitteraši? Producing and analysing a normalised dataset of Croatian and Serbian tweets},
volume={4},
url={https://revije.ff.uni-lj.si/slovenscina2/article/view/7007},
DOI={10.4312/slo2.0.2016.2.156-188},
number={2},
journal={Slovenščina 2.0: empirical, applied and interdisciplinary research},
author={Miličević, Maja and Ljubešić, Nikola},
year={2016},
month={Sep.},
pages={156–188} }
``` |
false |
# Dataset Card for TV3Parla
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://collectivat.cat/asr#tv3parla
- **Repository:**
- **Paper:** [Building an Open Source Automatic Speech Recognition System for Catalan](https://www.isca-speech.org/archive/iberspeech_2018/kulebi18_iberspeech.html)
- **Point of Contact:** [Col·lectivaT](mailto:info@collectivat.cat)
### Dataset Summary
This corpus includes 240 hours of Catalan speech from broadcast material.
The details of segmentation, data processing and model training are explained in Külebi & Öktem (2018).
The content is owned by Corporació Catalana de Mitjans Audiovisuals, SA (CCMA);
we processed their material and hereby make it available under their terms of use.
This project was supported by the Softcatalà Association.
### Supported Tasks and Leaderboards
The dataset can be used for:
- Language Modeling.
- Automatic Speech Recognition (ASR), which transcribes utterances into words.
### Languages
The dataset is in Catalan (`ca`).
## Dataset Structure
### Data Instances
```
{
'path': 'tv3_0.3/wav/train/5662515_1492531876710/5662515_1492531876710_120.180_139.020.wav',
'audio': {'path': 'tv3_0.3/wav/train/5662515_1492531876710/5662515_1492531876710_120.180_139.020.wav',
'array': array([-0.01168823, 0.01229858, 0.02819824, ..., 0.015625 ,
0.01525879, 0.0145874 ]),
'sampling_rate': 16000},
'text': 'algunes montoneres que que et feien anar ben col·locat i el vent també hi jugava una mica de paper bufava vent de cantó alguns cops o de cul i el pelotón el vent el porta molt malament hi havia molts nervis'
}
```
### Data Fields
- `path` (str): Path to the audio file.
- `audio` (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling
rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and
resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might
take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column,
*i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `text` (str): Transcription of the audio file.
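For illustration, a minimal sketch of the recommended access pattern (the Hub identifier below is an assumption; only the indexing order matters):
```python
from datasets import load_dataset

# Hypothetical Hub identifier; replace with the actual dataset path.
tv3 = load_dataset("collectivat/tv3_parla", split="train")

# Query the sample index first, so only this one file is decoded and resampled.
audio = tv3[0]["audio"]
print(audio["sampling_rate"], audio["array"].shape)

# Avoid tv3["audio"][0]: it would decode every audio file in the split first.
```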
### Data Splits
The dataset is split into "train" and "test".
| | train | test |
|:-------------------|-------:|-----:|
| Number of examples | 159242 | 2220 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@inproceedings{kulebi18_iberspeech,
author={Baybars Külebi and Alp Öktem},
title={{Building an Open Source Automatic Speech Recognition System for Catalan}},
year=2018,
booktitle={Proc. IberSPEECH 2018},
pages={25--29},
doi={10.21437/IberSPEECH.2018-6}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
false |
## Dataset Summary
We introduce a Romanian IT Dataset (RoITD) resembling SQuAD 1.1. RoITD consists of 9,575 Romanian QA pairs formulated by crowd workers. The QA pairs are based on 5,043 Romanian Wikipedia articles describing IT and household products. Of the total number of questions, 5,103 are possible (i.e., the correct answer can be found within the paragraph) and 4,472 are not possible (i.e., the given answer is a "plausible answer" rather than a correct one).
## Dataset Structure
The data structure follows the format of SQuAD, which contains several attributes such as **question**, **id**, **text**, **answer_start**, **is_impossible** and **context**. The paragraph shown to crowdsourcing workers is stored in the field **context**; it comprises manually selected paragraphs from Wikipedia. The field **id** holds a randomly assigned unique identification number for the question-answer pair. Only the values "0" and "1" are allowed in the **is_impossible** field: the value "0" corresponds to category "A", indicating that the answer is correct, while the value "1" corresponds to category "U", indicating a merely plausible answer. The question posed by the crowdsourcing worker is stored in the field **question**. The field **answer_start** records the character index marking the beginning of the answer.
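For illustration, a hypothetical record in the layout described above (all values are invented):
```python
# A hypothetical RoITD record; all values are invented for illustration.
example = {
    "id": "0001",                       # randomly assigned unique identifier
    "question": "Ce tip de produs este descris?",
    "context": "Un paragraf selectat manual din Wikipedia ...",
    "is_impossible": "1",               # "0" = correct answer ("A"), "1" = plausible answer ("U")
    "text": "un produs electrocasnic",  # the answer span
    "answer_start": 3,                  # character index where the answer begins
}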
|
true |
# Dataset Card for "quora_swe"
The dataset quora_swe is a subset of the automatically translated (NMT) Swedish Semantic Textual Similarity dataset: quora-deduplicates.
|
true |
### KR3: Korean Restaurant Reviews with Ratings
Korean sentiment classification dataset
- Size: 460K(+180K)
- Language: Korean-centric
### ⚠️ Caution with `Rating` Column
0 stands for negative review, 1 stands for positive review, and 2 stands for ambiguous review.
**Note that rating 2 is not intended to be used directly for supervised learning (classification).** This data is included for additional pre-training or other purposes.
In other words, this dataset is basically a **binary** sentiment classification task where labels are 0 and 1.
### 🔍 See More
See all the codes for crawling/preprocessing the dataset and experiments with KR3 in [GitHub Repo](https://github.com/Wittgensteinian/kr3).
See Kaggle dataset in [Kaggle Dataset](https://www.kaggle.com/ninetyninenewton/kr3-korean-restaurant-reviews-with-ratings).
### Usage
```python
from datasets import load_dataset
kr3 = load_dataset("Wittgensteinian/KR3", name='kr3', split='train')
kr3 = kr3.remove_columns(['__index_level_0__']) # Original file didn't include this column; suspect it's a Hugging Face issue.
```
```python
# drop reviews with ambiguous label
kr3_binary = kr3.filter(lambda example: example['Rating'] != 2)
```
### License
**CC BY-NC-SA 4.0**
### Legal Issues
We concluded that the **non-commercial usage and release of KR3 fall into the range of fair use (공정 이용)** stated in the Korean copyright act (저작권법). We further clarify that we **did not agree to the terms of service** of any website which might prohibit web crawling. In other words, the web crawling we did was performed without logging in to the websites. Despite all of this, feel free to contact any of the contributors if you notice any legal issues.
### Contributors & Acknowledgement
(Alphabetical order)
[Dongin Jung](https://github.com/dongin1009)
[Hyunwoo Kwak](https://github.com/Kwak-Hyun-woo)
[Kaeun Lee](https://github.com/Kaeun-Lee)
[Yejoon Lee](https://github.com/wittgensteinian)
This work was done as part of DIYA's 4th cohort (DIYA 4기). Compute resources needed for the work were provided by [DIYA](https://blog.diyaml.com) and surromind.ai.
|
false |
# MERLIN corpus
Project URL: https://merlin-platform.eu/C_mcorpus.php
Dataset URL: https://clarin.eurac.edu/repository/xmlui/handle/20.500.12124/6
The MERLIN corpus is a written learner corpus for Czech, German, and Italian that has been designed to illustrate the Common European Framework of Reference for Languages (CEFR) with authentic learner data. The corpus contains learner texts produced in standardized language certifications covering CEFR levels A1-C1. The MERLIN annotation scheme includes a wide range of language characteristics that provide researchers with concrete examples of learner performance and progress across multiple proficiency levels. |
false | This dataset is based on 3,871 Croatian tweets that were segmented into sentences, tokens, and annotated with normalized forms, lemmas, MULTEXT-East tags (XPOS), UPOS tags and morphological features, and named entities.
The dataset contains 6339 training samples (sentences), 815 validation samples and 785 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'),
list of tokens ('tokens'), list of normalised tokens ('norms'), list of lemmas ('lemmas'), list of UPOS tags ('upos\_tags'),
list of MULTEXT-East tags ('xpos\_tags'), list of morphological features ('feats'),
and list of named entity IOB tags ('iob\_tags'), which are encoded as class labels.
If you are using this dataset in your research, please cite the following paper:
```
@article{Miličević_Ljubešić_2016,
title={Tviterasi, tviteraši or twitteraši? Producing and analysing a normalised dataset of Croatian and Serbian tweets},
volume={4},
url={https://revije.ff.uni-lj.si/slovenscina2/article/view/7007},
DOI={10.4312/slo2.0.2016.2.156-188},
number={2},
journal={Slovenščina 2.0: empirical, applied and interdisciplinary research},
author={Miličević, Maja and Ljubešić, Nikola},
year={2016},
month={Sep.},
pages={156–188} }
```
|
false |
# SAE-door-abstracts
This dataset includes ~1,550 abstracts of technical papers and journal articles from the SAE Mobilus database that cover the topics of automotive or aerospace doors, noise, acoustics, and vibrations. |
true |
# Dataset Card for WANLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [WANLI homepage](https://wanli.allenai.org/)
- **Repository:** [Github repo](https://github.com/alisawuffles/wanli)
- **Paper:** [arXiv](https://arxiv.org/abs/2201.05955)
- **Point of Contact:** [Alisa Liu](mailto:alisaliu@cs.washington.edu)
### Dataset Summary
WANLI (**W**orker-**A**I Collaboration for **NLI**) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).
Each example is created by first identifying a "pocket" of examples in [MultiNLI (Williams et al., 2018)](https://cims.nyu.edu/~sbowman/multinli/) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.
The set of generated examples is automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators.
WANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by achieving a high accuracy. A RoBERTa-large model currently achieves 75.40%.
Models trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision.
### Languages
The dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States.
## Dataset Structure
### Data Instances
Here is an example of an NLI example in `data/wanli/train.jsonl` or `data/wanli/test.jsonl`.
```
{
"id": 225295,
"premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.",
"hypothesis": "The coach is a good coach.",
"gold": "entailment",
"genre": "generated",
"pairID": "171408"
}
```
- `id`: unique identifier for the example
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `gold`: one of `entailment`, `neutral`, and `contradiction`
- `genre`: one of `generated` and `generated_revised`, depending on whether the example was revised by annotators
- `pairID`: id of seed MNLI example, corresponding to those in `data/mnli/train.jsonl`
We also release the raw annotations for each worker, which can be found in `data/wanli/anonymized_annotations.jsonl`.
```
"WorkerId": "EUJ",
"id": 271560,
"nearest_neighbors": [
309783,
202988,
145310,
98030,
148759
],
"premise": "I don't know what I'd do without my cat. He is my only friend.",
"hypothesis": "I would be alone.",
"label": "neutral",
"revised_premise": "I don't know what I'd do without my cat. He is my only friend.",
"revised_hypothesis": "I would be alone without my cat.",
"gold": "entailment",
"revised": true
```
- `WorkerId`: a unique identification for each crowdworker (NOT the real worker ID from AMT)
- `id`: id of generated example
- `nearest_neighbors`: ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first one is the seed ambiguous MNLI example. MNLI ids correspond to those in `mnli/train.jsonl`.
- `premise`: GPT-3 generated premise
- `hypothesis`: GPT-3 generated hypothesis
- `label`: the shared label of the in-context examples, which is the "intended" label for this generation
- `revised_premise`: premise after human review
- `revised_hypothesis`: hypothesis after human review
- `gold`: annotator-assigned gold label for the (potentially revised) example
- `revised`: whether the example was revised
### Data Splits
The dataset is randomly split into a *train* and *test* set.
| | train | test |
|-------------------------|------:|-----:|
| Examples | 102885| 5000|
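A minimal loading sketch (the Hub identifier `alisawuffles/WANLI` is an assumption based on the repository owner; the underlying files are the `data/wanli/train.jsonl` and `data/wanli/test.jsonl` referenced above):
```python
from datasets import load_dataset

# Assumed Hub identifier; adjust to the actual dataset path.
wanli = load_dataset("alisawuffles/WANLI")

ex = wanli["train"][0]
print(ex["premise"])
print(ex["hypothesis"], "->", ex["gold"])
```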
## Dataset Creation
### Curation Rationale
A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. We create WANLI to demonstrate the effectiveness of an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label.
### Source Data
#### Initial Data Collection and Normalization
Our pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from [Swayamdipta et al. (2020)](https://aclanthology.org/2020.emnlp-main.746/) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality.
#### Who are the source language producers?
The GPT-3 Curie model generated examples which were then revised and labeled by crowdworkers on Amazon Mechanical Turk.
Workers were paid $0.12 for each example that they annotate. At the end of data collection, we aggregate the earning and time spent from each crowdworker, and find that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target.
### Annotations
#### Annotation process
Given an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they were asked to discard it.
Details about instructions, guidelines, and instructional examples can be found in Appendix D of the paper.
Crowdworkers annotate a total of 118,724 examples, with two distinct workers reviewing each example.
For examples that both annotators labeled without revision, annotators achieved a Cohen Kappa score of 0.60, indicating substantial agreement.
#### Who are the annotators?
Annotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States.
300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide more challenging evaluation of existing systems.
### Discussion of Biases
Text generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language.
To partially remedy this, we ask annotators to discard any examples that may be perceived as offensive.
Nonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset.
## Additional Information
### Dataset Curators
WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/).
### Citation Information
```
@misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
}
``` |
false |
# Dataset Card for MASC: MASSIVE ARABIC SPEECH CORPUS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus
- **Repository:**
- **Paper:** https://dx.doi.org/10.21227/e1qb-jv46
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This corpus is a dataset that contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels. MASC is a multi-regional, multi-genre, and multi-dialect dataset that is intended to advance the research and development of Arabic speech technology, with a special emphasis on Arabic speech recognition.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Multi-dialect Arabic
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
#### masc_dev
- speech
- sampling_rate
- target_text (label)
### Data Splits
#### masc_dev
- train: 100
- test: 40
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
Note: this is a small development set for testing.
### Dataset Curators
[More Information Needed]
### Licensing Information
CC 4.0
### Citation Information
[More Information Needed]
### Contributions
Mohammad Al-Fetyani, Muhammad Al-Barham, Gheith Abandah, Adham Alsharkawi, Maha Dawas, August 18, 2021, "MASC: Massive Arabic Speech Corpus", IEEE Dataport, doi: https://dx.doi.org/10.21227/e1qb-jv46.
|
false |
# Dataset Card for SAT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://blog.vietai.org/sat/
- **Repository:** https://github.com/vietai/sat
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The SAT (Style Augmented Translation) dataset contains roughly 3.3 million English-Vietnamese pairs of texts.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
The languages in the dataset are:
- Vietnamese (`vi`)
- English (`en`)
## Dataset Structure
### Data Instances
```
{
'translation': {
'en': 'Rachel Pike : The science behind a climate headline',
'vi': 'Khoa học đằng sau một tiêu đề về khí hậu'
}
}
```
### Data Fields
- `translation`:
- `en`: Parallel text in English.
- `vi`: Parallel text in Vietnamese.
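A minimal access sketch (the Hub identifier `vietai/sat` is an assumption based on the repository name):
```python
from datasets import load_dataset

# Assumed Hub identifier based on the GitHub repository name.
sat = load_dataset("vietai/sat", split="train")

pair = sat[0]["translation"]
print(pair["en"], "->", pair["vi"])
```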
### Data Splits
The dataset is split into "train" and "test".
| | train | test |
|--------------------|--------:|-----:|
| Number of examples | 3359574 | 7221 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
Unknown.
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
false |
## Dataset Description
A small subset of [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, covering 87 programming languages, each with 10,000 random samples from the original dataset.
## Languages
The dataset contains 87 programming languages:
```
'ada', 'agda', 'alloy', 'antlr', 'applescript', 'assembly', 'augeas', 'awk', 'batchfile', 'bison', 'bluespec', 'c',
'c++', 'c-sharp', 'clojure', 'cmake', 'coffeescript', 'common-lisp', 'css', 'cuda', 'dart', 'dockerfile', 'elixir',
'elm', 'emacs-lisp', 'erlang', 'f-sharp', 'fortran', 'glsl', 'go', 'groovy', 'haskell', 'html', 'idris', 'isabelle', 'java',
'java-server-pages', 'javascript', 'julia', 'kotlin', 'lean', 'literate-agda', 'literate-coffeescript', 'literate-haskell',
'lua', 'makefile', 'maple', 'markdown', 'mathematica', 'matlab', 'ocaml', 'pascal', 'perl', 'php', 'powershell', 'prolog',
'protocol-buffer', 'python', 'r', 'racket', 'restructuredtext', 'rmarkdown', 'ruby', 'rust', 'sas', 'scala', 'scheme',
'shell', 'smalltalk', 'solidity', 'sparql', 'sql', 'stan', 'standard-ml', 'stata', 'systemverilog', 'tcl', 'tcsh', 'tex',
'thrift', 'typescript', 'verilog', 'vhdl', 'visual-basic', 'xslt', 'yacc', 'zig'
```
## Dataset Structure
```python
# to load go:
from datasets import load_dataset
load_dataset("bigcode/the-stack-smol-xl", data_dir="data/go")
```
|
false |
# Open Instruct V1 - A dataset for having LLMs follow instructions.
Open Instruct V1 is an amalgamation of different datasets which are cleaned and then collated into a singular format for training.
## Dataset Breakdown
| Dataset | Amount of Samples |
|----------------|-------------------|
| [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 51759 |
| [Self Instruct](https://github.com/yizhongw/self-instruct) | 82599 |
| [GPT-4 Instruct](https://github.com/teknium1/GPTeacher) | 18194 |
| [Code Alpaca](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K) | 18019 |
| [Dolly](https://huggingface.co/datasets/HuggingFaceH4/databricks_dolly_15k) | 15015 |
| [Synthetic](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) | 33143 |
| [Roleplay](https://github.com/teknium1/GPTeacher) | 3146 |
| [asss](https://huggingface.co/datasets/HuggingFaceH4/asss) | 448 |
| [instruction-dataset](https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset) | 327 |
| Total | 222650 |
|
true |
# Dataset Card for Vietnamese Students’ Feedback Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/uit.edu.vn/uit-nlp/datasets-projects#h.p_4Brw8L-cbfTe
- **Repository:**
- **Paper:** [UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis](https://www.researchgate.net/publication/329645066_UIT-VSFC_Vietnamese_Students'_Feedback_Corpus_for_Sentiment_Analysis)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Students’ feedback is a vital resource for interdisciplinary research combining two different
fields: sentiment analysis and education.
The Vietnamese Students’ Feedback Corpus (UIT-VSFC) is a resource consisting of over 16,000 sentences that are
human-annotated for two different tasks: sentiment-based and topic-based classification.
To assess the quality of our corpus, we measured the annotator agreements and classification evaluation on the
UIT-VSFC corpus. As a result, we obtained inter-annotator agreements for sentiments and topics of over
91% and 71%, respectively. In addition, we built a baseline model with the Maximum Entropy classifier and achieved
approximately 88% sentiment F1-score and over 84% topic F1-score.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language of the dataset text sentence is Vietnamese (`vi`).
## Dataset Structure
### Data Instances
An instance example:
```
{
'sentence': 'slide giáo trình đầy đủ .',
'sentiment': 2,
'topic': 1
}
```
### Data Fields
- `sentence` (str): Text sentence.
- `sentiment`: Sentiment class, with values 0 (negative), 1 (neutral) and 2 (positive).
- `topic`: Topic class, with values 0 (lecturer), 1 (training_program), 2 (facility) and 3 (others).
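A short decoding sketch for the class-label fields above (the Hub identifier is an assumption; the label orders follow the field descriptions):
```python
from datasets import load_dataset

# Hypothetical Hub identifier; adjust to the actual dataset path.
vsfc = load_dataset("uit-nlp/vietnamese_students_feedback", split="train")

sentiments = ["negative", "neutral", "positive"]                 # values 0, 1, 2
topics = ["lecturer", "training_program", "facility", "others"]  # values 0-3

ex = vsfc[0]
print(ex["sentence"], "|", sentiments[ex["sentiment"]], "|", topics[ex["topic"]])
```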
### Data Splits
The dataset is split into train, validation and test.
|                    | Train | Validation | Test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 11426 |       1583 | 3166 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```
@InProceedings{8573337,
author={Nguyen, Kiet Van and Nguyen, Vu Duc and Nguyen, Phu X. V. and Truong, Tham T. H. and Nguyen, Ngan Luu-Thuy},
booktitle={2018 10th International Conference on Knowledge and Systems Engineering (KSE)},
title={UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis},
year={2018},
volume={},
number={},
pages={19-24},
doi={10.1109/KSE.2018.8573337}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
false |
# Dataset Card for Catalan General Crawling
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5483031
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [ona.degibert@bsc.es](ona.degibert@bsc.es)
### Dataset Summary
The Catalan General Crawling Corpus is a 435-million-token corpus of Catalan built from the web. It has been obtained by crawling the 500 most popular .cat and .ad domains during July 2020. It consists of 434,817,705 tokens, 19,451,691 sentences and 1,016,114 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus.
### Supported Tasks and Leaderboards
This corpus is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
'text': 'Reduïu els costos dels processos administratius al vostre organisme públic\nEviteu els desplaçaments i pèrdua de temps als ciutadans en les seves gestions\nOferiu una administració més transparent a ciutadans i empreses\nEns grans i petits experimenten aquesta transformació amb èxit, gràcies al suport de l\'AOC\nDepartament de Sistemes d\'Informació i Processos\n" Via Oberta ens ha permès fer efectiu el dret dels ciutadans a no aportar documents, eliminant paper i simplificant procediments"\n" e.FACT proporciona informació indispensable per a la realització de les auditories del registre comptable de factures de les Administracions Públiques Catalanes"\nCoordinador del departament d\'Informàtica\n"El servei VIA OBERTA és el que ha aportat majors avantatges per als ciutadans"\n"Amb l\' e-NOTUM hem escurçat els procediments en 12 dies, quasi un 40% menys!"\nCoordinadora d\'organització de persones i e-administració\n" Via Oberta ofereix millores per als ciutadans al no haver d\'aportar cap document"\nResponsable d\'Informàtica i Administració Electrònica\n" e-TRAM ens ha permès implantar un servei de tramitació electrònica per als ciutadans de forma ràpida, senzilla i amb un cost reduït"\n"Els municipis amb pocs habitants trobem en els serveis de l\'AOC la gratuïtat i la comoditat necessàries per dur a terme el nostre dia a dia"\n"Les T-CAT han permès incorporar de forma segura la signatura electrònica dins dels nostres procediments afavorint la transformació digital de la nostra activitat"\nCap de Departament de Sistemes i Tecnologies de la Informació\n"Amb el desplegament de l\' idCAT hem apropat l\'Ajuntament a la ciutadania"\n"Mitjançant els serveis de Govern Obert de l\'AOC hem pogut fer fàcil el que sembla difícil"\n"Al tauler electrònic pots penjar fins i tot el projecte sencer i al final et permet fer també la diligència"\nÀrea de Promoció Econòmica, Administració i Hisenda\n"El Sobre Digital i la PSCP han aconseguit una comunió senzilla entre empreses i administració per universalitzar la compra pública electrònica"\n"L\' e-SET és la implantació d\'un nou sistema de treball que facilita la feina del dia a dia"\nCap del servei de contractació i compres\n"El Sobre Digital, una experiència imprescindible per a la bona administració amb estalvi de recursos i millora de la seguretat jurídica i la transparència"\nÀrea d\'Organització i Administració Electrònica\n"El desplegament de la valisa electrònica ha estat clau en el procés de transformació digital dels nostres procediments interns"\n"L\' Hèstia permet el treball en temps real i des de qualsevol lloc, així com sistematitzar la pràctica professional, recollir la informació ordenadament i amb el mateix llenguatge"\nConsulta els materials del Congrés de Govern Digital 2019\nGoverns transparents, fluids, dinàmics, líquids... un bon lema pel principal objectiu de la governança del segle XXI: democratitzar-ho tot.\nConfluències, rius, cooperació.\nCatalunya, Mediterrània, mar de drets.\nA favor: totes les Administracions movent-se per posar-se al dia i millorar, tot aprofitant la revolució digital.\nEn contra: quants cops estem reinventant la roda i quantes quantes oportunitats perdudes de fer-ho una única vegada i de forma coordinada i col·laborativa?\n"La transparència és una oportunitat.\nHem de perdre tota por a explicar què fem": la conclusió de la taula d\'alcaldies de la Jornada de Govern Obert pic.twitter.com/ERbgLSIXZM\nEl director general de Participació Ciutadana ens convida a transformar les administracions públiques a partir de la participació ciutadana\nEns cal que allò que preocupa i ocupa els governants formi part d\'allò en què participa la ciutadania pic.twitter.com/NwQr4EZSCS: "A moltes institucions encara els sona xinés això de les dades obertes i la transparència.\nDe que serveix que hi hagi un portal, si llavors no hi ha dades?\nLlavors l\'accés a la informació pels periodistes és molt parcial".\nOferim eines que, conjuntament amb la metodologia i el suport necessari, fan possible l\'assoliment d\'un govern digital\nPosem al vostre abast tot el coneixement: formació, guies, normatives, etc.\nTenim eines per gestionar àgilment part del procés administratiu del vostre ens\nEl nostre equip farà tot el possible per resoldre les vostres incidències\nSabem que es tracta d\'una decisió molt important per al vostre ens i és per això que us ho volem posar fàcil.\nLa selecció de l\'actualitat d\'Administració Oberta a la vostra safata.'
}
```
### Data Fields
- `text` (str): Text.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The corpus has been obtained by crawling the 500 most popular .cat and .ad domains during July 2020.
For preprocessing we used [Corpus-Cleaner](https://github.com/TeMU-BSC/corpus-cleaner-acl), a modular Python-based toolkit to clean raw text corpora through generator pipelines.
#### Who are the source language producers?
The data comes from multiple web pages in Catalan.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
false |
# Dataset Card for Greek Legal Named Entity Recognition
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://legislation.di.uoa.gr/publications?language=en
- **Repository:**
- **Paper:** Angelidis, I., Chalkidis, I., & Koubarakis, M. (2018). Named Entity Recognition, Linking and Generation for Greek Legislation. JURIX.
- **Leaderboard:**
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk); [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains an annotated corpus for named entity recognition in Greek legislation. It is the first of its kind for the Greek language in such an extended form and one of the few that examine legal text with full-spectrum entity recognition.
### Supported Tasks and Leaderboards
The dataset supports the task of named entity recognition.
### Languages
The language in the dataset is Greek as it used in the Greek Government Gazette.
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present (train, validation and test).
### Data Fields
The files contain the following data fields
- `date`: The date when the document was published.
- `gazette`: The government gazette of the document. Either `A` or `D`
- `A` is the general one, publishing standard legislation
- `D` is meant for legislation on urban planning and such things
- `words`: The list of tokens obtained by applying the spacy (v 3.3.1) Greek tokenizer on the sentences. For more information see `convert_to_hf_dataset.py`.
- `ner`: The list of ner tags. The list of labels for the named entities that are covered by the dataset are the following:
- `FACILITY`: Facilities, such as police stations, departments etc.
- `GPE`: Geopolitical Entity; any reference to a geopolitical entity (e.g., country, city, Greek administrative unit, etc.)
- `LEG-REFS`: Legislation Reference; any reference to Greek or European legislation (e.g., Presidential Decrees, Laws, Decisions, EU Regulations and Directives, etc.)
- `LOCATION-NAT`: Well-defined natural locations, such as rivers, mountains, lakes, etc.
- `LOCATION-UNK`: Poorly defined locations, such as "End of road X" or other locations that are not "official".
- `ORG`: Organization; any reference to a public or private organization, such as: international organizations (e.g., European Union, United Nations, etc.), Greek public organizations (e.g., Social Insurance Institution) or private ones (e.g., companies, NGOs, etc.).
- `PERSON`: Any formal name of a person mentioned in the text (e.g., Greek government members, public administration officials, etc.).
- `PUBLIC-DOCS`: Public Document Reference; any reference to documents or decisions that have been published by a public institution (organization) that are not considered a primary source of legislation (e.g., local decisions, announcements, memorandums, directives).
- `O`: No entity annotation present
The final tagset (in IOB notation) is the following: `['O', 'B-ORG', 'I-ORG', 'B-GPE', 'I-GPE', 'B-LEG-REFS', 'I-LEG-REFS', 'B-PUBLIC-DOCS', 'I-PUBLIC-DOCS', 'B-PERSON', 'I-PERSON', 'B-FACILITY', 'I-FACILITY', 'B-LOCATION-UNK', 'I-LOCATION-UNK', 'B-LOCATION-NAT', 'I-LOCATION-NAT']`
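As a convenience, a minimal sketch for grouping the IOB tags above into entity spans (it assumes each record carries parallel `words` and `ner` lists, with `ner` holding tag strings from the tagset above):
```python
# A minimal sketch, assuming parallel `words` and `ner` lists where `ner`
# holds IOB tag strings from the tagset above.
def iob_to_spans(words, ner):
    spans, label, tokens = [], None, []
    for word, tag in zip(words, ner):
        if tag.startswith("B-"):
            if label:
                spans.append((label, " ".join(tokens)))
            label, tokens = tag[2:], [word]
        elif tag.startswith("I-") and label == tag[2:]:
            tokens.append(word)
        else:  # "O" or an inconsistent I- tag closes the current span
            if label:
                spans.append((label, " ".join(tokens)))
            label, tokens = None, []
    if label:
        spans.append((label, " ".join(tokens)))
    return spans

# e.g. iob_to_spans(["ο", "Δήμος", "Αθηναίων"], ["O", "B-ORG", "I-ORG"])
# -> [("ORG", "Δήμος Αθηναίων")]
```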
### Data Splits
The dataset has three splits: *train*, *validation* and *test*.
Split across the documents:
| split | number of documents |
|:---------------|--------------------:|
| train | 23723 |
| validation | 5478 |
| test | 5084 |
Split across NER labels
| NER label + split | number of instances |
|:-----------------------------------------------|----------------------:|
| ('FACILITY', 'test') | 142 |
| ('FACILITY', 'train') | 1224 |
| ('FACILITY', 'validation') | 60 |
| ('GPE', 'test') | 1083 |
| ('GPE', 'train') | 5400 |
| ('GPE', 'validation') | 1214 |
| ('LEG-REFS', 'test') | 1331 |
| ('LEG-REFS', 'train') | 5159 |
| ('LEG-REFS', 'validation') | 1382 |
| ('LOCATION-NAT', 'test') | 26 |
| ('LOCATION-NAT', 'train') | 145 |
| ('LOCATION-NAT', 'validation') | 2 |
| ('LOCATION-UNK', 'test') | 205 |
| ('LOCATION-UNK', 'train') | 1316 |
| ('LOCATION-UNK', 'validation') | 283 |
| ('ORG', 'test') | 1354 |
| ('ORG', 'train') | 5906 |
| ('ORG', 'validation') | 1506 |
| ('PERSON', 'test') | 491 |
| ('PERSON', 'train') | 1921 |
| ('PERSON', 'validation') | 475 |
| ('PUBLIC-DOCS', 'test') | 452 |
| ('PUBLIC-DOCS', 'train') | 2652 |
| ('PUBLIC-DOCS', 'validation') | 556 |
## Dataset Creation
### Curation Rationale
Creating a big dataset for Greek named entity recognition and entity linking.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Greek Government Gazette
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
According to (Angelidis et al., 2018) the authors of the paper annotated the data: *"Our group annotated all of the above documents for the 6 entity types that we examine."*
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.
## Additional Information
### Dataset Curators
The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*.
Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch); [Github](https://github.com/kapllan)).
### Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information
```
@inproceedings{Angelidis2018NamedER,
author = {Angelidis, Iosif and Chalkidis, Ilias and Koubarakis, Manolis},
booktitle = {JURIX},
keywords = {greek,legal nlp,named entity recognition},
title = {{Named Entity Recognition, Linking and Generation for Greek Legislation}},
year = {2018}
}
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
|
false |
# Dataset Card for "tner/mit_movie_trivia"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Dataset:** MIT Movie
- **Domain:** Movie
- **Number of Entity:** 12
### Dataset Summary
MIT Movie NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `Actor`, `Plot`, `Opinion`, `Award`, `Year`, `Genre`, `Origin`, `Director`, `Soundtrack`, `Relationship`, `Character_Name`, `Quote`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 13, 14, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4],
'tokens': ['a', 'steven', 'spielberg', 'film', 'featuring', 'a', 'bluff', 'called', 'devil', 's', 'tower', 'and', 'a', 'spectacular', 'mothership']
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/mit_movie_trivia/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-Actor": 1,
"I-Actor": 2,
"B-Plot": 3,
"I-Plot": 4,
"B-Opinion": 5,
"I-Opinion": 6,
"B-Award": 7,
"I-Award": 8,
"B-Year": 9,
"B-Genre": 10,
"B-Origin": 11,
"I-Origin": 12,
"B-Director": 13,
"I-Director": 14,
"I-Genre": 15,
"I-Year": 16,
"B-Soundtrack": 17,
"I-Soundtrack": 18,
"B-Relationship": 19,
"I-Relationship": 20,
"B-Character_Name": 21,
"I-Character_Name": 22,
"B-Quote": 23,
"I-Quote": 24
}
```
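To decode the integer `tags` of the instance above, the mapping can be inverted (the label.json URL is the one linked above):
```python
import json
import urllib.request

# Fetch the label2id mapping linked above and invert it for decoding.
url = "https://huggingface.co/datasets/tner/mit_movie_trivia/raw/main/dataset/label.json"
label2id = json.loads(urllib.request.urlopen(url).read())
id2label = {v: k for k, v in label2id.items()}

tags = [0, 13, 14, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4]
print([id2label[t] for t in tags])
# ['O', 'B-Director', 'I-Director', 'O', 'O', 'O', 'B-Plot', 'I-Plot', ...]
```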
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|mit_movie_trivia |6816 | 1000| 1953|
|
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset.json file contains ~1.7 million synthetic examples for arithmetic tasks, generated by dataset.ipynb.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions
- Original Paper and Dataset [here](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T)
- Kaggle dataset [here](https://www.kaggle.com/datasets/kmader/skin-cancer-mnist-ham10000?resource=download)
# Introduction to datasets
Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available datasets of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities. The final dataset consists of 10015 dermatoscopic images which can serve as a training set for academic machine learning purposes. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions: Actinic keratoses and intraepithelial carcinoma / Bowen's disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen-planus like keratoses, bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv) and vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and hemorrhage, vasc).
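For reference, the seven diagnostic categories enumerated above as a Python mapping (a convenience sketch, not part of the released files):
```python
# The seven diagnostic categories described above (abbreviation -> meaning).
HAM10000_CLASSES = {
    "akiec": "Actinic keratoses and intraepithelial carcinoma / Bowen's disease",
    "bcc": "Basal cell carcinoma",
    "bkl": "Benign keratosis-like lesions",
    "df": "Dermatofibroma",
    "mel": "Melanoma",
    "nv": "Melanocytic nevi",
    "vasc": "Vascular lesions",
}
```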
More than 50% of lesions are confirmed through histopathology (histo), the ground truth for the rest of the cases is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal).
The test set is not public, but the evaluation server remains running (see the challenge website). Any publications written using the HAM10000 data should be evaluated on the official test set hosted there, so that methods can be fairly compared.
- Test site can be accessed [here](https://challenge.isic-archive.com/landing/2018/)
# Disclaimer and additional information
This is a contribution of open-sourced image data to Hugging Face. The images themselves can be obtained from the links above.
The train/test split was done using stratified splitting by cancer/diagnosis type. The code used to stratify the dataset can be found on my GitHub [here](https://github.com/marmal88/skin_cancer); a sketch of the idea is shown below.
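For illustration, a minimal sketch of such a stratified split (not the author's exact code; the metadata file name and the diagnosis column `dx` are assumptions based on the original HAM10000 release):
```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Split the metadata so each diagnosis class keeps the same proportion
# in train and test (stratified splitting).
metadata = pd.read_csv("HAM10000_metadata.csv")  # assumed metadata file name
train_df, test_df = train_test_split(
    metadata, test_size=0.2, stratify=metadata["dx"], random_state=42
)
print(train_df["dx"].value_counts(normalize=True))  # class proportions preserved
```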
I do not own any rights to the above images.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true |
# Dataset Card for ProsocialDialog Dataset
## Dataset Description
- **Repository:** [Dataset and Model](https://github.com/skywalker023/prosocial-dialog)
- **Paper:** [ProsocialDialog: A Prosocial Backbone for Conversational Agents](https://aclanthology.org/2022.emnlp-main.267/)
- **Point of Contact:** [Hyunwoo Kim](mailto:hyunwook@allenai.org)
## Dataset Summary
ProsocialDialog is the first large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K unique RoTs, and 497K dialogue safety labels accompanied by free-form rationales.
## Supported Tasks
* Dialogue response generation
* Dialogue safety prediction
* Rules-of-thumb generation
## Languages
English
## Dataset Structure
### Data Attributes
attribute | type | description
--- | --- | ---
`context` | str | the potentially unsafe utterance
`response` | str | the guiding utterance grounded on rules-of-thumb (`rots`)
`rots` | list of str\|null | the relevant rules-of-thumb for `text` *not* labeled as \_\_casual\_\_
`safety_label` | str | the final verdict of the context according to `safety_annotations`: {\_\_casual\_\_, \_\_possibly\_needs\_caution\_\_, \_\_probably\_needs\_caution\_\_, \_\_needs\_caution\_\_, \_\_needs\_intervention\_\_}
`safety_annotations` | list of str | raw annotations from three workers: {casual, needs caution, needs intervention}
`safety_annotation_reasons` | list of str | the reasons behind the safety annotations in free-form text from each worker
`source` | str | the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics_amt, ethics_reddit}
`etc` | str\|null | other information
`dialogue_id` | int | the dialogue index
`response_id` | int | the response index
`episode_done` | bool | an indicator of whether it is the end of the dialogue
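A minimal loading sketch (the Hub repo id `allenai/prosocial-dialog` is an assumption; substitute this dataset's actual path if it differs):
```python
from datasets import load_dataset

# Load the training split and inspect one example's safety annotation.
dataset = load_dataset("allenai/prosocial-dialog", split="train")
example = dataset[0]
print(example["context"])
print(example["safety_label"], example["rots"])
```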
## Dataset Creation
To create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multiturn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2205.12688).
## Additional Information
### Citation
Please cite our work if you found the resources in this repository useful:
```
@inproceedings{kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
``` |
false |
Source of data: https://github.com/FudanVI/benchmarking-chinese-text-recognition |
false |
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The math dataset is composed of 50K problem-solution pairs obtained using GPT-4. The problem-solution pairs were generated from 25 math topics, with 25 subtopics for each topic and 80 problems for each (topic, subtopic) pair.
We provide the data in `math50k.zip`.
## Data Fields
**The data fields for files in `math50k.zip` are as follows:**
* `role_1`: assistant role
* `topic`: math topic
* `sub_topic`: math subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
Note: File naming refers to {`topic_index`}\_{`subtopic_index`}\_{`problem_number`}.
**Download in python**
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/math", repo_type="dataset", filename="math50k.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
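As a follow-up, a minimal sketch of reading one record from the downloaded archive (it assumes each archive entry is a JSON file carrying the fields listed above):
```python
import json
import zipfile

# Open the archive downloaded above and parse the first entry.
with zipfile.ZipFile("datasets/math50k.zip") as zf:
    first = sorted(zf.namelist())[0]
    record = json.loads(zf.read(first))

print(record["topic"], "/", record["sub_topic"])
print(record["message_1"])
```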
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended for research purposes only.
---
license: cc-by-nc-4.0
---
|
false | |
false | # Titanic
The [Titanic dataset](https://www.kaggle.com/datasets/vinicius150987/titanic3) from [Kaggle](https://www.kaggle.com/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|----------------------------|
| survival | Binary classification | Has the passenger survived?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/titanic")["train"]
``` |
false | # Dataset Card for TED descriptions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
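For example, a minimal loading sketch following the quickstart of the [UKPLab/beir](https://github.com/UKPLab/beir) toolkit (assumes `pip install beir`; SciFact is used as the example dataset):
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed datasets from the table below.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: doc-id -> {"title", "text"}, queries: query-id -> text,
# qrels: query-id -> {doc-id: relevance score}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```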
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models with relevance-based ranking metrics such as nDCG@10.
The current best performing models can be found [here](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`. A short parsing sketch is shown below.
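A minimal sketch of parsing these three files by hand (the file paths are assumptions; adjust them to your local copy):
```python
import csv
import json

# Corpus and queries are jsonlines files as described above.
corpus, queries = {}, {}
with open("corpus.jsonl") as f:
    for line in f:
        doc = json.loads(line)
        corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
with open("queries.jsonl") as f:
    for line in f:
        query = json.loads(line)
        queries[query["_id"]] = query["text"]

# Qrels is a tab-separated file with a header row: query-id, corpus-id, score.
qrels = {}
with open("qrels/test.tsv") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
```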
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
false | # Dataset Card for 🥤SODA Synthetic Dialogue
## Dataset Description
- **Repository:**
[Code](https://github.com/LAION-AI/Open-Assistant/openassistant/datasets/soda_synthetic_dialogue/)
## Dataset Summary
🥤SODA Synthetic Dialogue is a set of synthetic dialogues between Assistant and
User. In each conversation, User asks Assistant to perform summarization or
story generation tasks based on a snippet of an existing dialogue, story, or
from a title or theme.
This data was created by synthesizing the dialogues in
[🥤Soda](https://huggingface.co/datasets/allenai/soda) and applying a set of
templates to generate the conversation. The original research paper can be found
[here](https://arxiv.org/pdf/2212.10465v1.pdf).
Example:
```
User: Can you write the next few lines of dialogue for this scene:
Cornell: I know what you're thinking, Dontrell. I can sense your feelings and know exactly what you're going to say or do next. You're thinking about how much you miss your family, and how you wish you could see them again. You're wondering if there's anything you can do to make that happen. And you're also feeling a little guilty, because you know that if you could see them again, it would only be for a short while before they died.
Dontrell: Cornell, how can you possibly know all that? It's like you can read my mind!
Assistant: Sure, the next dialogue for this scene could be:
Cornell: I told you, I can sense your feelings. It's like I have a connection to your thoughts and emotions. I'm not sure why or how it works, but ever since we met, I've been able to understand what you're thinking and feeling pretty easily.
Dontrell: Wow...that's amazing! And a little bit scary too. But it does explain why we've always been so close - even though we've only known each other for a few months.
Cornell: Yeah, I guess our connection is pretty strong. But don't worry, Dontrell - I'll never use it to manipulate or control you in any way. I promise."
Dontrell: That's a relief! I was starting to feel a little bit like you could read my mind, and it was kind of creepy. But if you promise not to use your abilities to control me, then I'm okay with it.
Cornell: I swear, Dontrell. I'll never use my abilities to control or manipulate you in any way. You have my word.
User: And a title?
Assistant: Sure, a title for this dialogue could be:
Cornell is intuitive. Cornell reads Dontrell's mind.
User: How about a short description?
Assistant: Sure, a short description for this dialogue could be:
Cornell knows what Dontrell is thinking. He can sense her feelings and know exactly what she is going to say or do next.
User: What would be one theme of this story?
Assistant: One theme of this story could be:
being intuitive
```
## Usage
The dataset contains one configuration, `dialogue_modeling`, which has a single
text `conversation` feature.
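A minimal loading sketch (the Hub repo id below is an assumption; substitute this dataset's actual path):
```python
from datasets import load_dataset

# Load the dialogue_modeling configuration and print the start of one conversation.
dataset = load_dataset("emozilla/soda_synthetic_dialogue", "dialogue_modeling", split="train")
print(dataset[0]["conversation"][:200])
```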
## Source data
The script to synthesize this dataset was originally created by
[ontocord](https://github.com/ontocord) in
[this Colab notebook](https://colab.research.google.com/drive/1Sw3px5dP8whdqT7QMNoqwmqIasZkMbJi?usp=sharing)
and prepared for Hugging Face by
[Jeffrey Quesnelle](https://github.com/jquesnelle/).
## Citation
Please cite our work if you find the resources in this repository useful:
```
@article{ontocord2023sodasynth,
author = {ontocord and Jeffrey Quesnelle},
title = {SODA Synthetic Dialogue},
year = {2023}
}
``` |
false |
# Dataset Card for ontonotes_english
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
- **Repository:**
- **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
- **Leaderboard:** [Papers With Code](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- **Point of Contact:**
### Dataset Summary
This is a preprocessed version of what I assume is OntoNotes v5.0.
Instead of having sentences stored in files, the files are unpacked and each sentence is now a row. Also, fields were renamed in order to match [conll2003](https://huggingface.co/datasets/conll2003).
The data comes from a private repository, which in turn got its data from another public repository, the location of which is unknown :)
Since the data from all repositories had no license (the creator of the private repository told me so), there should be no licensing issues. But bear in mind, I give no guarantees that this is the real OntoNotes, and it may differ as a result.
### Supported Tasks and Leaderboards
- [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
- [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
### Languages
English
## Dataset Structure
### Data Instances
```
{
'tokens': ['Well', ',', 'the', 'Hundred', 'Regiments', 'Offensive', 'was', 'divided', 'into', 'three', 'phases', '.'],
'ner_tags': [0, 0, 29, 30, 30, 30, 0, 0, 0, 27, 0, 0]
}
```
### Data Fields
- **`tokens`** (*`List[str]`*) : **`words`** in original dataset
- **`ner_tags`** (*`List[ClassLabel]`*) : **`named_entities`** in original dataset. The BIO tags for named entities in the sentence.
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
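For illustration, a minimal sketch of decoding `ner_tags` back to tag names via the `ClassLabel` feature (the repo id below is a placeholder; substitute this dataset's actual Hub path):
```python
from datasets import load_dataset

# Load the training split and align each token with its decoded NER tag.
dataset = load_dataset("ontonotes_english", split="train")  # placeholder repo id
tag_names = dataset.features["ner_tags"].feature.names
example = dataset[0]
print(list(zip(example["tokens"], (tag_names[i] for i in example["ner_tags"]))))
```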
### Data Splits
_train_, _validation_, and _test_
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
No license
### Citation Information
```
@inproceedings{pradhan-etal-2013-towards,
title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
author = {Pradhan, Sameer and
Moschitti, Alessandro and
Xue, Nianwen and
Ng, Hwee Tou and
Bj{\"o}rkelund, Anders and
Uryupina, Olga and
Zhang, Yuchen and
Zhong, Zhi},
booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-3516",
pages = "143--152",
}
```
### Contributions
Thanks to the author of the private repository who uploaded this dataset. |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
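For example, a minimal loading sketch following the quickstart of the [UKPLab/beir](https://github.com/UKPLab/beir) toolkit (assumes `pip install beir`; SciFact is used as the example dataset):
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed datasets from the table below.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: doc-id -> {"title", "text"}, queries: query-id -> text,
# qrels: query-id -> {doc-id: relevance score}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```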
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models with relevance-based ranking metrics such as nDCG@10.
The current best performing models can be found [here](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`. A short parsing sketch is shown below.
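A minimal sketch of parsing these three files by hand (the file paths are assumptions; adjust them to your local copy):
```python
import csv
import json

# Corpus and queries are jsonlines files as described above.
corpus, queries = {}, {}
with open("corpus.jsonl") as f:
    for line in f:
        doc = json.loads(line)
        corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
with open("queries.jsonl") as f:
    for line in f:
        query = json.loads(line)
        queries[query["_id"]] = query["text"]

# Qrels is a tab-separated file with a header row: query-id, corpus-id, score.
qrels = {}
with open("qrels/test.tsv") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
```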
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
true |
# Dataset Card for BIOSSES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
- **Repository:** https://github.com/gizemsogancioglu/biosses
- **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954)
- **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan Özgür](gizemsogancioglu@gmail.com)
### Dataset Summary
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
The sentence pairs were evaluated by five different human experts who judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric (a small computation sketch follows the list below). The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
- very strong: 0.80–1.00
- strong: 0.60–0.79
- moderate: 0.40–0.59
- weak: 0.20–0.39
- very weak: 0.00–0.19
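A minimal sketch of the evaluation described above, computing the Pearson correlation between gold scores and model estimates (the numbers are toy values for illustration):
```python
from scipy.stats import pearsonr

# Gold similarity scores (mean of five annotators) vs. model estimates.
gold = [0.0, 1.6, 2.2, 3.8]
predicted = [0.3, 1.2, 2.5, 3.6]
r, _ = pearsonr(gold, predicted)
print(f"Pearson r = {r:.3f}")  # ~0.975 -> "very strong" by the guideline above
```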
### Data Splits (From BLUE Benchmark)
|name|Train|Dev|Test|
|:--:|:--:|:--:|:--:|
|biosses|64|16|20|
### Supported Tasks and Leaderboards
Biomedical Semantic Similarity Scoring.
### Languages
English.
## Dataset Structure
### Data Instances
For each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).
```json
{
"id": "0",
"sentence1": "Centrosomes increase both in size and in microtubule-nucleating capacity just before mitotic entry.",
"sentence2": "Functional studies showed that, when introduced into cell lines, miR-146a was found to promote cell proliferation in cervical cancer cells, which suggests that miR-146a works as an oncogenic miRNA in these cancers.",
"score": 0.0
}
```
### Data Fields
- `sentence1`: string
- `sentence2`: string
- `score`: float ranging from 0 (no relation) to 4 (equivalent)
## Dataset Creation
### Curation Rationale
### Source Data
The [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/).
### Annotations
#### Annotation process
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
The table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It shows that there is a strong association among the scores of the annotators. The lowest correlation is 0.902, which can be considered an upper bound for an algorithmic measure evaluated on this dataset.
| |Correlation r |
|----------:|--------------:|
|Annotator A| 0.952|
|Annotator B| 0.958|
|Annotator C| 0.917|
|Annotator D| 0.902|
|Annotator E| 0.941|
## Additional Information
### Dataset Curators
- Gizem Soğancıoğlu, gizemsogancioglu@gmail.com
- Hakime Öztürk, hakime.ozturk@boun.edu.tr
- Arzucan Özgür, gizemsogancioglu@gmail.com
Bogazici University, Istanbul, Turkey
### Licensing Information
BIOSSES is made available under the terms of [The GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
### Citation Information
```bibtex
@article{10.1093/bioinformatics/btx238,
author = {Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan},
title = "{BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}",
journal = {Bioinformatics},
volume = {33},
number = {14},
pages = {i49-i58},
year = {2017},
month = {07},
abstract = "{The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text.We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods.The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems up to 42.6\\% in terms of the Pearson correlation metric.A web-based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/.}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/btx238},
url = {https://doi.org/10.1093/bioinformatics/btx238},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/33/14/i49/25157316/btx238.pdf},
}
```
### Contributions
Thanks to [@qanastek](https://github.com/qanastek) for adding this dataset.
|
true | # Dataset Card for "kbp37"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [kbp37](https://github.com/zhangdongxu/kbp37)
- **Paper:** [Relation Classification via Recurrent Neural Network](https://arxiv.org/abs/1508.01006)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 6.58 MB
### Dataset Summary
KBP37 is a revision of the MIML-RE annotation dataset, provided by Gabor Angeli et al. (2014). They use both the 2010 and
2013 KBP official document collections, as well as a July 2013 dump of Wikipedia as the text corpus for annotation.
A total of 33,811 sentences were annotated. Zhang and Wang made several refinements:
1. They add direction to the relation names, e.g. '`per:employee_of`' is split into '`per:employee_of(e1,e2)`'
and '`per:employee_of(e2,e1)`'. They also replace '`org:parents`' with '`org:subsidiaries`' and replace
'`org:member_of`' with '`org:members`' (by their reverse directions).
2. They discard low frequency relations such that both directions of each relation occur more than 100 times in the
dataset.
KBP37 contains 18 directional relations and an additional '`no_relation`' relation, resulting in 37 relation classes.
Note:
- There is a formatted version that you can load with `datasets.load_dataset('kbp37', name='kbp37_formatted')`. This version is tokenized with `str.split()` and
provides entities as offsets instead of enclosing them in XML tags. However, it discards some examples that are invalid in the original dataset and lead
to entity offset errors, e.g. example train/1276.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in KBP37 is in English (BCP-47 en)
## Dataset Structure
### Data Instances
#### kbp37
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 4.7 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"sentence": "<e1> Thom Yorke </e1> of <e2> Radiohead </e2> has included the + for many of his signature distortion sounds using a variety of guitars to achieve various tonal options .",
"relation": 27
}
```
#### kbp37_formatted
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 6.58 MB
An example of 'train' looks as follows:
```json
{
"id": "1",
"token": ["Leland", "High", "School", "is", "a", "public", "high", "school", "located", "in", "the", "Almaden", "Valley", "in", "San", "Jose", "California", "USA", "in", "the", "San", "Jose", "Unified", "School", "District", "."],
"e1_start": 0,
"e1_end": 3,
"e2_start": 14,
"e2_end": 16,
"relation": 3
}
```
### Data Fields
#### kbp37
- `id`: the instance id of this sentence, a `string` feature.
- `sentence`: the sentence, a `string` features.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"no_relation": 0, "org:alternate_names(e1,e2)": 1, "org:alternate_names(e2,e1)": 2, "org:city_of_headquarters(e1,e2)": 3, "org:city_of_headquarters(e2,e1)": 4, "org:country_of_headquarters(e1,e2)": 5, "org:country_of_headquarters(e2,e1)": 6, "org:founded(e1,e2)": 7, "org:founded(e2,e1)": 8, "org:founded_by(e1,e2)": 9, "org:founded_by(e2,e1)": 10, "org:members(e1,e2)": 11, "org:members(e2,e1)": 12, "org:stateorprovince_of_headquarters(e1,e2)": 13, "org:stateorprovince_of_headquarters(e2,e1)": 14, "org:subsidiaries(e1,e2)": 15, "org:subsidiaries(e2,e1)": 16, "org:top_members/employees(e1,e2)": 17, "org:top_members/employees(e2,e1)": 18, "per:alternate_names(e1,e2)": 19, "per:alternate_names(e2,e1)": 20, "per:cities_of_residence(e1,e2)": 21, "per:cities_of_residence(e2,e1)": 22, "per:countries_of_residence(e1,e2)": 23, "per:countries_of_residence(e2,e1)": 24, "per:country_of_birth(e1,e2)": 25, "per:country_of_birth(e2,e1)": 26, "per:employee_of(e1,e2)": 27, "per:employee_of(e2,e1)": 28, "per:origin(e1,e2)": 29, "per:origin(e2,e1)": 30, "per:spouse(e1,e2)": 31, "per:spouse(e2,e1)": 32, "per:stateorprovinces_of_residence(e1,e2)": 33, "per:stateorprovinces_of_residence(e2,e1)": 34, "per:title(e1,e2)": 35, "per:title(e2,e1)": 36}
```
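Following the loading note above, a minimal sketch that decodes the integer relation label back to its string name via the `ClassLabel` feature:
```python
from datasets import load_dataset

# Load the default configuration and decode one relation label.
dataset = load_dataset("kbp37", split="train")
example = dataset[0]
print(example["sentence"])
print(dataset.features["relation"].int2str(example["relation"]))  # e.g. "per:employee_of(e1,e2)"
```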
#### kbp37_formatted
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, using `str.split()`, a `list` of `string` features.
- `e1_start`: the 0-based index of the start token of the first argument, an `int` feature.
- `e1_end`: the 0-based index of the end token of the first argument, exclusive, an `int` feature.
- `e2_start`: the 0-based index of the start token of the second argument, an `int` feature.
- `e2_end`: the 0-based index of the end token of the second argument, exclusive, an `int` feature.
- `relation`: the relation label of this instance, an `int` classification label (same as `kbp37`).
### Data Splits
| | Train | Dev | Test |
|-------|-------|------|------|
| kbp37 | 15917 | 1724 | 3405 |
| kbp37_formatted | 15807 | 1714 | 3379 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/ZhangW15a,
author = {Dongxu Zhang and
Dong Wang},
title = {Relation Classification via Recurrent Neural Network},
journal = {CoRR},
volume = {abs/1508.01006},
year = {2015},
url = {http://arxiv.org/abs/1508.01006},
eprinttype = {arXiv},
eprint = {1508.01006},
timestamp = {Fri, 04 Nov 2022 18:37:50 +0100},
biburl = {https://dblp.org/rec/journals/corr/ZhangW15a.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. |
true | # ChatGPT-Research-Abstracts
This is a dataset created in relation to a bachelor thesis written by Nicolai Thorer Sivesind and Andreas Bentzen Winje. It contains human-produced and machine-generated text samples of scientific research abstracts.
A reformatted version for text-classification is available in the dataset collection [Human-vs-Machine](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine). In this collection, all samples are split into separate data points for real and generated, and labeled either 0 (human-produced) or 1 (machine-generated).
Specifications:
+ Generated samples are produced using the GPT-3.5 model _GPT-3.5-turbo-0301_ (snapshot of the model used in ChatGPT on the 1st of March, 2023).
+ Target content is prompted using the title of the real abstract.
+ Target word count is equal to that of the human-produced abstract.
+ Contains 10k data points of each class.
+ Created by Nicolai Thorer Sivesind
More information about production and contents will be added at the end of May 2023.
### Citation
Please use the following citation:
```
@misc {sivesind_2023,
author = { {Nicolai Thorer Sivesind}},
title = { ChatGPT-Generated-Abstracts },
year = 2023,
publisher = { Hugging Face }
}
```
More information about the dataset will be added once the thesis is finished (end of May 2023). |
true |
# Dataset Card for "cardiffnlp/tweet_topic_single"
## Dataset Description
- **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824)
- **Dataset:** Tweet Topic Dataset
- **Domain:** Twitter
- **Number of Class:** 6
### Dataset Summary
This is the official repository of TweetTopic (["Twitter Topic Classification", COLING main conference 2022](https://arxiv.org/abs/2209.09824)), a topic classification dataset on Twitter with 6 labels.
Each instance of TweetTopic comes with a timestamp ranging from September 2019 to August 2021.
See [cardiffnlp/tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi) for the multi-label version of TweetTopic.
The tweet collection used in TweetTopic is the same as the one used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7).
The dataset is also integrated into [TweetNLP](https://tweetnlp.org/).
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
For verified usernames, we keep the account name but wrap it in `{@` and `@}` symbols.
For example, a tweet
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```
is transformed into the following text.
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```
A simple function to format a raw tweet accordingly is shown below.
```python
import re
from urlextract import URLExtract
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
# Output: 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
```
### Data Splits
| split | number of texts | description |
|:------------------------|-----:|------:|
| test_2020 | 376 | test dataset from September 2019 to August 2020 |
| test_2021 | 1693 | test dataset from September 2020 to August 2021 |
| train_2020 | 2858 | training dataset from September 2019 to August 2020 |
| train_2021 | 1516 | training dataset from September 2020 to August 2021 |
| train_all | 4374 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020 | 352 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 189 | validation dataset from September 2020 to August 2021 |
| train_random | 2830 | randomly sampled training dataset with the same size as `train_2020` from `train_all` |
| validation_random | 354 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` |
| test_coling2022_random | 3399 | random split used in the COLING 2022 paper |
| train_coling2022_random | 3598 | random split used in the COLING 2022 paper |
| test_coling2022 | 3399 | temporal split used in the COLING 2022 paper |
| train_coling2022 | 3598 | temporal split used in the COLING 2022 paper |
For the temporal-shift setting, models should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`.
In general, models would be trained on `train_all`, the most representative training set, with `validation_2021`, and evaluated on `test_2021`.
**IMPORTANT NOTE:** To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set).
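As a minimal sketch of the recommended temporal-shift setup (assuming the split names above load directly through the `datasets` library):
```python
from datasets import load_dataset

# Temporal-shift setting: train on 2020 data, validate on 2020, evaluate on 2021.
train = load_dataset("cardiffnlp/tweet_topic_single", split="train_2020")
valid = load_dataset("cardiffnlp/tweet_topic_single", split="validation_2020")
test = load_dataset("cardiffnlp/tweet_topic_single", split="test_2021")
print(len(train), len(valid), len(test))  # expected: 2858 352 1693
```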
### Models
| model | training data | F1 | F1 (macro) | Accuracy |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------|---------:|-------------:|-----------:|
| [cardiffnlp/roberta-large-tweet-topic-single-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-single-all) | all (2020 + 2021) | 0.896043 | 0.800061 | 0.896043 |
| [cardiffnlp/roberta-base-tweet-topic-single-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-single-all) | all (2020 + 2021) | 0.887773 | 0.79793 | 0.887773 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-all) | all (2020 + 2021) | 0.892499 | 0.774494 | 0.892499 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all) | all (2020 + 2021) | 0.890136 | 0.776025 | 0.890136 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all) | all (2020 + 2021) | 0.894861 | 0.800952 | 0.894861 |
| [cardiffnlp/roberta-large-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-single-2020) | 2020 only | 0.878913 | 0.70565 | 0.878913 |
| [cardiffnlp/roberta-base-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-single-2020) | 2020 only | 0.868281 | 0.729667 | 0.868281 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-2020) | 2020 only | 0.882457 | 0.740187 | 0.882457 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-2020) | 2020 only | 0.87596 | 0.746275 | 0.87596 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020) | 2020 only | 0.877732 | 0.746119 | 0.877732 |
Model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```python
{
"text": "Game day for {{USERNAME}} U18\u2019s against {{USERNAME}} U18\u2019s. Even though it\u2019s a \u2018home\u2019 game for the people that have settled in Mid Wales it\u2019s still a 4 hour round trip for us up to Colwyn Bay. Still enjoy it though!",
"date": "2019-09-08",
"label": 4,
"id": "1170606779568463874",
"label_name": "sports_&_gaming"
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/tweet_topic_single/raw/main/dataset/label.single.json).
```python
{
"arts_&_culture": 0,
"business_&_entrepreneurs": 1,
"pop_culture": 2,
"daily_life": 3,
"sports_&_gaming": 4,
"science_&_technology": 5
}
```
### Citation Information
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
``` |
false |
# Dataset Card for `tldr_news`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://tldr.tech/newsletter
### Dataset Summary
The `tldr_news` dataset was constructed by collecting a daily tech newsletter (available
[here](https://tldr.tech/newsletter)). Then, for every piece of news, the `headline` and its corresponding `content` were extracted.
Also, the newsletters contain different sections. We add this extra information to every piece of news.
Such a dataset can be used to train a model to generate a headline from an input piece of text.
### Supported Tasks and Leaderboards
There are no officially supported tasks or leaderboards for this dataset. However, it could be used for the following
tasks:
- summarization
- headline generation
### Languages
en
## Dataset Structure
### Data Instances
A data point comprises a "headline" and its corresponding "content".
An example is as follows:
```
{
"headline": "Cana Unveils Molecular Beverage Printer, a ‘Netflix for Drinks’ That Can Make Nearly Any Type of Beverage ",
"content": "Cana has unveiled a drink machine that can synthesize almost any drink. The machine uses a cartridge that contains flavor compounds that can be combined to create the flavor of nearly any type of drink. It is about the size of a toaster and could potentially save people from throwing hundreds of containers away every month by allowing people to create whatever drinks they want at home. Around $30 million was spent building Cana’s proprietary hardware platform and chemistry system. Cana plans to start full production of the device and will release pricing by the end of February.",
"category": "Science and Futuristic Technology"
}
```
### Data Fields
- `headline (str)`: the piece of news' headline
- `content (str)`: the piece of news
- `category (str)`: newsletter section
### Data Splits
- `all`: all existing daily newsletters available [here](https://tldr.tech/newsletter).
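A minimal loading sketch; the repository id is assumed from the contributor's handle, so adjust it if the dataset is hosted elsewhere:
```python
from datasets import load_dataset

# Hypothetical repository id -- adjust if the dataset lives under a different name.
dataset = load_dataset("JulesBelveze/tldr_news", split="all")
sample = dataset[0]
print(sample["category"], "->", sample["headline"])
```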
## Dataset Creation
### Curation Rationale
This dataset was obtained by scraping all the existing newsletters
available [here](https://tldr.tech/newsletter).
Every single newsletter was then processed to extract all the different pieces of news. Then, for every collected piece
of news, the headline and the news content were extracted.
### Source Data
#### Initial Data Collection and Normalization
The dataset has been collected from https://tldr.tech/newsletter.
In order to clean up the samples and to construct a dataset better suited for headline generation we have applied a
couple of normalization steps:
1. The headlines initially contain an estimated read time in parentheses; we stripped this information from the
headline.
2. Some news items are sponsored and thus do not belong to any newsletter section. We created an additional category "Sponsor"
for such samples.
#### Who are the source language producers?
The people (or person) behind the https://tldr.tech/ newsletter.
### Annotations
#### Annotation process
Disclaimer: The dataset was generated from a daily newsletter. The author had no intention for those newsletters to be
used as such.
#### Who are the annotators?
The newsletters were written by the people behind *TLDR tech*.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
This dataset only contains tech news. A model trained on such a dataset might not be able to generalize to other domains.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was obtained by collecting newsletters from this website: https://tldr.tech/newsletter
### Contributions
Thanks to [@JulesBelveze](https://github.com/JulesBelveze) for adding this dataset. |
true |
# Datasheet for the dataset: multilingual-NLI-26lang-2mil7
## Dataset Summary
This dataset contains 2 730 000 NLI text pairs in 26 languages spoken by more than 4 billion people. The dataset can be used to train models for multilingual NLI (Natural Language Inference) or zero-shot classification. The dataset is based on the English datasets [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) and was created using the latest open-source machine translation models.
The dataset is designed to complement the established multilingual [XNLI](https://huggingface.co/datasets/xnli) dataset. XNLI contains older machine translations of the MultiNLI dataset from 2018 for 14 languages, as well as human translations of 2490 texts for validation and 5010 texts for testing per language. multilingual-NLI-26lang-2mil7 is sourced from 5 different NLI datasets and contains 105 000 machine translated texts for each of 26 languages, leading to 2 730 000 NLI text pairs.
The release of the dataset is accompanied by the fine-tuned [mDeBERTa-v3-base-xnli-multilingual-nli-2mil7](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) model, which can be used for NLI or zero-shot classification in 100 languages.
## Dataset Creation
The languages in the dataset are: ['ar', 'bn', 'de', 'es', 'fa', 'fr', 'he', 'hi', 'id', 'it', 'ja', 'ko', 'mr', 'nl', 'pl', 'ps', 'pt', 'ru', 'sv', 'sw', 'ta', 'tr', 'uk', 'ur', 'vi', 'zh'] (see [ISO language codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)) plus the original English texts. The languages were chosen based on two criteria: (1) They are either included in the list of the [20 most spoken languages](https://en.wikipedia.org/wiki/List_of_languages_by_total_number_of_speakers) (excluding Telugu and Nigerian Pidgin, for which no machine translation model was available); (2) or they are spoken in politically and economically important countries such as the [G20](https://en.wikipedia.org/wiki/G20) or Iran and Israel.
For each of the 26 languages, a different random sample of 25 000 hypothesis-premise pairs was taken from each of the following four datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli) (392 702 texts in total), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) (196 805 texts), [ANLI](https://huggingface.co/datasets/anli) (162 865 texts), [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) (102 885 texts). Moreover, a sample of 5000 texts was taken from [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) (29 985 texts) given its smaller total size. This leads to a different random sample of 105 000 source texts per target language with a diverse distribution of data from 5 different NLI datasets.
Each sample was then machine translated using the latest open-source machine translation models available for the respective language:
- [opus-mt-tc-big models](https://huggingface.co/models?sort=downloads&search=opus-mt-tc-big) were available for English to ['ar', 'es', 'fr', 'it', 'pt', 'tr']
- [opus-mt-models](https://huggingface.co/models?sort=downloads&search=opus-mt) were available for English to ['de', 'he', 'hi', 'id', 'mr', 'nl', 'ru', 'sv', 'sw', 'uk', 'ur', 'vi', 'zh']
- [m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) was used for the remaining languages ['bn', 'fa', 'ja', 'ko', 'pl', 'ps', 'ta']
## Dataset Structure
### Data Splits
The dataset contains 130 splits (26 * 5), one for each language-dataset pair following the format '{language-iso}_{dataset}'. For example, split 'zh_mnli' contains the Chinese translation of 25 000 texts from the MultiNLI dataset etc.
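A minimal sketch of loading one such split; the repository id is assumed from the dataset name, so adjust it if the dataset is hosted under a different account:
```python
from datasets import load_dataset

# Split names follow '{language-iso}_{dataset}', e.g. the Chinese MultiNLI sample.
ds = load_dataset("MoritzLaurer/multilingual-NLI-26lang-2mil7", split="zh_mnli")
example = ds[0]
print(example["premise"])
print(example["hypothesis"], "->", example["label"])
```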
### Data Fields
- `premise_original`: The original premise from the English source dataset
- `hypothesis_original`: The original hypothesis from the English source dataset
- `label`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2).
- `premise`: The machine translated premise in the target language
- `hypothesis`: The machine translated hypothesis in the target language
### Example of a data instance:
```
{
"premise_original": "I would not be surprised if the top priority for the Navy was to build a new carrier.",
"hypothesis_original": "The top priority for the Navy is to build a new carrier.",
"label": 1,
"premise": "Ich würde mich nicht wundern, wenn die oberste Priorität für die Navy wäre, einen neuen Träger zu bauen.",
"hypothesis": "Die oberste Priorität für die Navy ist es, einen neuen Träger zu bauen."
}
```
## Limitations and bias
Machine translation is not as good as human translation. Machine translation can introduce inaccuracies that can be problematic for complex tasks like NLI. In an ideal world, original NLI data would be available for many languages. Given the lack of NLI data, using the latest open-source machine translation seems like a good solution to improve multilingual NLI. You can use the Hugging Face data viewer to inspect the data and verify the translation quality for your language of interest. Note that grammatical errors are less problematic for zero-shot use-cases as grammar is less relevant for these applications.
## Other
The machine translation for the full dataset took roughly 100 hours on an A100 GPU, especially due to the size of the [m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) model.
## Ideas for cooperation or questions?
For updates on new models and datasets, follow me on [Twitter](https://twitter.com/MoritzLaurer).
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Citation Information
If the dataset is useful for you, please cite the following article:
```
@article{laurer_less_2022,
title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
url = {https://osf.io/74b8k},
language = {en-us},
urldate = {2022-07-28},
journal = {Preprint},
author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
month = jun,
year = {2022},
note = {Publisher: Open Science Framework},
}
```
|
false |
# Dataset Card for DocLayNet base
## About this card (01/27/2023)
### Property and license
All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from [Dataset Card for DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet).
DocLayNet is a dataset created by Deep Search (IBM Research) published under [license CDLA-Permissive-1.0](https://huggingface.co/datasets/ds4sd/DocLayNet#licensing-information).
I do not claim any rights to the data taken from this dataset and published on this page.
### DocLayNet dataset
[DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories.
To date, the dataset can be downloaded through direct links or via the Hugging Face datasets library:
- direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB)
- Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet)
Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022)
### Processing into a format facilitating its use by HF notebooks
These 2 options require downloading all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of hard disk space. This can limit experimentation for people with low resources.
Moreover, even when using the download via the HF datasets library, it is necessary to download the EXTRA zip separately ([doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip), 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the bounding boxes of the texts do not necessarily correspond to the annotated ones (computing the percentage of area in common between the annotated bounding boxes and those of the texts makes it possible to match them).
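A minimal sketch of such an overlap computation, assuming boxes in `(x0, y0, x1, y1)` format; the threshold value is illustrative:
```python
def shared_area_ratio(box_a, box_b):
    """Fraction of box_a's area covered by box_b; boxes are (x0, y0, x1, y1)."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    return inter / area_a if area_a > 0 else 0.0

# An OCR text cell is matched to an annotated box when the overlap is high enough.
print(shared_area_ratio((10, 10, 110, 30), (0, 0, 120, 40)) > 0.5)  # True
```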
At last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.
For all these reasons, I decided to process the DocLayNet dataset:
- into 3 datasets of different sizes:
  - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) (about 1% of DocLayNet) < 1,000 document images (691 train, 64 val, 49 test)
  - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) (about 10% of DocLayNet) < 10,000 document images (6,910 train, 648 val, 499 test)
  - [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 100% of DocLayNet) < 100,000 document images (69,103 train, 6,480 val, 4,994 test)
- with associated texts and PDFs (base64 format),
- and in a format facilitating their use by HF notebooks.
*Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!*
### About PDFs languages
Citation of the page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
"We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, **DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%)**. While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features."
### About PDFs categories distribution
Citation of the page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
"The pages in DocLayNet can be grouped into **six distinct categories**, namely **Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders**. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes."

### Download & overview
The size of DocLayNet base is about 10% of the DocLayNet dataset (random selection respectively in the train, val and test files).
```
# !pip install -q datasets
from datasets import load_dataset
dataset_base = load_dataset("pierreguillou/DocLayNet-base")
# overview of dataset_base (output of print(dataset_base))
DatasetDict({
train: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 6910
})
validation: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 648
})
test: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 499
})
})
```
### Annotated bounding boxes
DocLayNet base makes it easy to display a document image with the annotated bounding boxes of paragraphs or lines.
Check the notebook [processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb](https://github.com/piegu/language-models/blob/master/processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb) in order to get the code.
#### Paragraphs

#### Lines

### HF notebooks
- [notebooks LayoutLM](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLM) (Niels Rogge)
- [notebooks LayoutLMv2](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) (Niels Rogge)
- [notebooks LayoutLMv3](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3) (Niels Rogge)
- [notebooks LiLT](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT) (Niels Rogge)
- [Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers/blob/main/training/lilt_funsd.ipynb) ([post](https://www.philschmid.de/fine-tuning-lilt#3-fine-tune-and-evaluate-lilt) of Phil Schmid)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models
5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined as in this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
"doc_category": "financial_reports" // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
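For instance, a single category can be selected with a `filter` call on the dataset loaded above (a minimal sketch):
```python
# Keep only the pages coming from financial reports.
financial = dataset_base["train"].filter(lambda page: page["doc_category"] == "financial_reports")
print(len(financial))
```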
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guideline used for training of the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset. |
true |
# Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article
This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like "probably", "maybe", "surely", "impossible".
We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning), and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can detect the WEP matching human-annotated probabilities, according to [Fagen-Ulmschneider, 2018](https://github.com/wadefagen/datasets/tree/master/Perception-of-Probability-Words).
The dataset can be used as natural language inference data (context, premise, label) or multiple choice question answering (context,valid_hypothesis, invalid_hypothesis).
Code : [colab](https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing)
**Accepted at \*SEM 2023** (the 12th Joint Conference on Lexical and Computational Semantics). Temporary citation:
```bib
@article{sileo2022probing,
title={Probing neural language models for understanding of words of estimative probability},
author={Sileo, Damien and Moens, Marie-Francine},
journal={arXiv preprint arXiv:2211.03358},
year={2022}
}
```
|
false | # Dataset Card for GPT4All Prompt Generations
## Dataset Description
Dataset used to train [GPT4All](https://huggingface.co/nomic-ai/gpt4all-lora)
- **Homepage:**
- **Repository:** [gpt4all](https://github.com/nomic-ai/gpt4all)
- **Paper:** [Technical Report](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf)
- **Atlas Map:** [Map of Cleaned Data](https://atlas.nomic.ai/map/gpt4all_data_clean)
|
false | # Dataset Card for "fake-w2-us-tax-form-dataset"
This is a dataset of synthetically generated US Tax Return W2 Forms, with generated fake data such as names, ids, dates and addresses. Only real city, state and zipcodes have been used.
This dataset is created from the existing public [Fake W-2 (US Tax Form) Dataset](https://www.kaggle.com/datasets/mcvishnu1/fake-w2-us-tax-form-dataset) dataset for use with
🤗 |
true |
# Dataset Card for the_pile_openwebtext2
## Table of Contents
- [Dataset Card for the_pile_openwebtext2](#dataset-card-for-the_pile_openwebtext2)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://openwebtext2.readthedocs.io/en/latest/
- **Repository:** [GitHub](https://github.com/EleutherAI/openwebtext2)
- **Paper:** https://arxiv.org/abs/2101.00027
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
OpenWebText2 is part of the EleutherAI/The Pile dataset and is an enhanced version of the original OpenWebTextCorpus, covering all Reddit submissions from 2005 up until April 2020, with further months becoming available after the corresponding PushShift dump files are released.

|               |          |
|---------------|----------|
| download_size | 27.3 GiB |
| dataset_size  | 63.8 GiB |
### Supported Tasks and Leaderboards
This dataset is used for Language Modeling.
### Languages
This dataset is in English.
## Dataset Structure
### Data Instances
```
This example was too long and was cropped:
{'title': 'Xiaomi Mi Note 10 Gearbest Coupon Promo Code [6+128GB] [France Warehouse]',
 'text': '27% off Xiaomi Mi Note 10 (CC9 Pro) 108MP Penta Camera Mobile Phone Global Version Online Smartphone – Black Gearbest Coupon Promo Code\n\nGearbest Coupon Price :$439.99\n\nRegular Price : $603.19 Your Save : $163.20 Coupon Limit: 100 times Warehouse: France Expires : September 30, 2020 Coupon Valid for...',
 'reddit_scores': [6]}
```
### Data Fields
- `title`: title of the web page
- `text`: text content of the web page
- `reddit_scores`: scores of the reddit submissions that mention this web page, as a list of integers
### Data Splits
|split|num examples|
|-----|------------|
|train|17103059|
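Given the dataset size, streaming is a practical way to inspect it without a full download; a minimal sketch, assuming the dataset id used by the `datasets` library matches this card's name:
```python
from datasets import load_dataset

# Stream to avoid downloading the full ~27 GiB archive up front.
ds = load_dataset("the_pile_openwebtext2", split="train", streaming=True)
first = next(iter(ds))
print(first["title"], first["reddit_scores"])
```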
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
### Contributions
[researcher2](https://github.com/researcher2) Wrote much of this code, with inspiration and some straight copying of the scraping code found [here](https://github.com/yet-another-account/openwebtext/).<br/>
[sdtblck](https://github.com/sdtblck/) kindly put together the Colab notebook, and performed a chunk of the scraping. <br/>
[leogao2](https://github.com/leogao2/) provided overall design guidance, lm_dataformat, and performed another chunk of scraping. <br />
[Colaboratory](https://colab.research.google.com/) VMs helped with about 10% of our overall scraping. <br />
[The Eye](http://the-eye.eu/) host the processed datasets.<br />
[Read The Docs](https://readthedocs.org/) host our documentation.<br />
[@richarddwang](https://github.com/richarddwang) added this dataset to HF/datasets. |
true | # The Touché23-ValueEval Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Usage](#dataset-usage)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Argument Instances](#argument-instances)
- [Metadata Instances](#metadata-instances)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://webis.de/data/touche23-valueeval.html](https://webis.de/data/touche23-valueeval.html)
- **Repository:** [Zenodo](https://doi.org/10.5281/zenodo.6814563)
- **Paper:** [The Touché23-ValueEval Dataset for Identifying Human Values behind Arguments.](https://webis.de/downloads/publications/papers/mirzakhmedova_2023a.pdf)
- **Leaderboard:** [https://touche.webis.de/](https://touche.webis.de/semeval23/touche23-web/index.html#results)
- **Point of Contact:** [Webis Group](https://webis.de/people.html)
### Dataset Summary
The Touché23-ValueEval Dataset comprises 9324 arguments from six different sources. An argument's source is indicated by the first letter of its `Argument ID`:
- `A`: [IBM-ArgQ-Rank-30kArgs](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Argument%20Quality)
- `C`: Chinese question-answering website [Zhihu](https://www.zhihu.com)
- `D`: [Group Discussion Ideas (GD IDEAS)](https://www.groupdiscussionideas.com)
- `E`: [The Conference for the Future of Europe](https://futureu.europa.eu)
- `F`: Contribution by the language.ml lab (Doratossadat, Omid, Mohammad, Ehsaneddin) [1]:
  arguments from the "Nahj al-Balagha" [2] and "Ghurar al-Hikam wa Durar al-Kalim" [3]
- `G`: [The New York Times](https://www.nytimes.com)
The annotated labels are based on the value taxonomy published in
[Identifying the Human Values behind Arguments](https://webis.de/publications.html#kiesel_2022b) (Kiesel et al. 2022) at ACL'22.
[1] https://language.ml
[2] https://en.wikipedia.org/wiki/Nahj_al-Balagha
[3] https://en.wikipedia.org/wiki/Ghurar_al-Hikam_wa_Durar_al-Kalim
### Dataset Usage
The default configuration name is `main`.
```python
from datasets import load_dataset
dataset = load_dataset("webis/Touche23-ValueEval")
print(dataset['train'].info.description)
for argument in iter(dataset['train']):
print(f"{argument['Argument ID']}: {argument['Stance']} '{argument['Conclusion']}': {argument['Premise']}")
```
### Supported Tasks and Leaderboards
Human Value Detection
### Languages
The [Argument Instances](#argument-instances) are monolingual; they include only English (mostly en-US) documents.
The [Metadata Instances](#metadata-instances) for some dataset parts additionally state the arguments in their original language and phrasing.
## Dataset Structure
### Argument Instances
Each argument instance has the following attributes:
- `Argument ID`: The unique identifier for the argument within the dataset
- `Conclusion`: Conclusion text of the argument
- `Stance`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
- `Premise`: Premise text of the argument
- `Labels`: The `Labels` field for each example is an array of 1s (argument resorts to value) and 0s (argument does not resort to value). The order is the same as in the original files (a decoding sketch follows at the end of this section).
Additionally, the labels are separated into *value-categories*, aka. level 2 labels of the value taxonomy (Kiesel et al. 2022b), and *human values*, aka. level 1 labels of the value taxonomy.
This distinction is also reflected in the configuration names:
- `<config>`: As the [Task](https://touche.webis.de/semeval23/touche23-web/) is focused mainly on the detection of value-categories,
each base configuration ([listed below](#p-list-base-configs)) has the 20 value-categories as labels:
```python
labels = ["Self-direction: thought", "Self-direction: action", "Stimulation", "Hedonism", "Achievement", "Power: dominance", "Power: resources", "Face", "Security: personal", "Security: societal", "Tradition", "Conformity: rules", "Conformity: interpersonal", "Humility", "Benevolence: caring", "Benevolence: dependability", "Universalism: concern", "Universalism: nature", "Universalism: tolerance", "Universalism: objectivity"]
```
- `<config>-level1`: The 54 human values from level 1 of the value taxonomy are not used for the 2023 task
  (except for the annotation), but are still listed here as some might find them useful for understanding the value
  categories. Their order is also the same as in the original files. For more details see the [value-categories](#metadata-instances) configuration.
<p id="p-list-base-configs">The configuration names (as replacements for <code><config></code>) in this dataset are:</p>
- `main`: 8865 arguments (sources: `A`, `D`, `E`) with splits `train`, `validation`, and `test` (default configuration name)
```python
dataset_main_train = load_dataset("webis/Touche23-ValueEval", split="train")
dataset_main_validation = load_dataset("webis/Touche23-ValueEval", split="validation")
dataset_main_test = load_dataset("webis/Touche23-ValueEval", split="test")
```
- `nahjalbalagha`: 279 arguments (source: `F`) with split `test`
```python
dataset_nahjalbalagha_test = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha", split="test")
```
- `nyt`: 80 arguments (source: `G`) with split `test`
```python
dataset_nyt_test = load_dataset("webis/Touche23-ValueEval", name="nyt", split="test")
```
- `zhihu`: 100 arguments (source: `C`) with split `validation`
```python
dataset_zhihu_validation = load_dataset("webis/Touche23-ValueEval", name="zhihu", split="validation")
```
Please note that due to copyright reasons, there currently does not exist a direct download link to the arguments contained in the
New York Times
dataset. Accessing any of the `nyt` or `nyt-level1` configurations will therefore use the specifically created
[nyt-downloader program](https://github.com/touche-webis-de/touche-code/tree/main/semeval23/human-value-detection/nyt-downloader)
to create and access the arguments locally. See the program's
[README](https://github.com/touche-webis-de/touche-code/blob/main/semeval23/human-value-detection/nyt-downloader/README.md)
for further details.
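Putting the pieces together, here is a minimal sketch that decodes an argument's binary `Labels` vector back into value-category names, reusing the 20-label ordering listed above:
```python
from datasets import load_dataset

dataset = load_dataset("webis/Touche23-ValueEval", split="train")

labels = ["Self-direction: thought", "Self-direction: action", "Stimulation", "Hedonism",
          "Achievement", "Power: dominance", "Power: resources", "Face",
          "Security: personal", "Security: societal", "Tradition", "Conformity: rules",
          "Conformity: interpersonal", "Humility", "Benevolence: caring",
          "Benevolence: dependability", "Universalism: concern", "Universalism: nature",
          "Universalism: tolerance", "Universalism: objectivity"]

argument = dataset[0]
# `Labels` is a binary vector in the same order as the list above.
resorted = [name for name, flag in zip(labels, argument["Labels"]) if flag == 1]
print(argument["Argument ID"], resorted)
```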
### Metadata Instances
The following lists all configuration names for metadata. Each configuration only has a single split named `meta`.
- `ibm-meta`: Each row corresponds to one argument (IDs starting with `A`) from the [IBM-ArgQ-Rank-30kArgs](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Argument%20Quality)
- `Argument ID`: The unique identifier for the argument
- `WA`: the quality label according to the weighted-average scoring function
- `MACE-P`: the quality label according to the MACE-P scoring function
- `stance_WA`: the stance label according to the weighted-average scoring function
- `stance_WA_conf`: the confidence in the stance label according to the weighted-average scoring function
```python
dataset_ibm_metadata = load_dataset("webis/Touche23-ValueEval", name="ibm-meta", split="meta")
```
- `zhihu-meta`: Each row corresponds to one argument (IDs starting with `C`) from the Chinese question-answering website [Zhihu](https://www.zhihu.com)
- `Argument ID`: The unique identifier for the argument
- `Conclusion Chinese`: The original chinese conclusion statement
- `Premise Chinese`: The original chinese premise statement
- `URL`: Link to the original statement the argument was taken from
```python
dataset_zhihu_metadata = load_dataset("webis/Touche23-ValueEval", name="zhihu-meta", split="meta")
```
- `gdi-meta`: Each row corresponds to one argument (IDs starting with `D`) from [GD IDEAS](https://www.groupdiscussionideas.com/)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the topic the argument was taken from
```python
dataset_gdi_metadata = load_dataset("webis/Touche23-ValueEval", name="gdi-meta", split="meta")
```
- `cofe-meta`: Each row corresponds to one argument (IDs starting with `E`) from [the Conference for the Future of Europe](https://futureu.europa.eu)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the comment the argument was taken from
```python
dataset_cofe_metadata = load_dataset("webis/Touche23-ValueEval", name="cofe-meta", split="meta")
```
- `nahjalbalagha-meta`: Each row corresponds to one argument (IDs starting with `F`). This file contains information on the 279 arguments in `nahjalbalagha` (or `nahjalbalagha-level1`)
and 1047 additional arguments that were not labeled so far. This data was contributed by the language.ml lab.
- `Argument ID`: The unique identifier for the argument
- `Conclusion Farsi`: Conclusion text of the argument in Farsi
- `Stance Farsi`: Stance of the `Premise` towards the `Conclusion`, in Farsi
- `Premise Farsi`: Premise text of the argument in Farsi
- `Conclusion English`: Conclusion text of the argument in English (translated from Farsi)
- `Stance English`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
- `Premise English`: Premise text of the argument in English (translated from Farsi)
  - `Source`: Source text of the argument; one of "Nahj al-Balagha", "Ghurar al-Hikam wa Durar al-Kalim"; their Farsi translations were used
- `Method`: How the premise was extracted from the source; one of "extracted" (directly taken), "deduced"; the conclusion are deduced
```python
dataset_nahjalbalagha_metadata = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha-meta", split="meta")
```
- `nyt-meta`: Each row corresponds to one argument (IDs starting with `G`) from [The New York Times](https://www.nytimes.com)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the article the argument was taken from
- `Internet Archive timestamp`: Timestamp of the article's version in the Internet Archive that was used
```python
dataset_nyt_metadata = load_dataset("webis/Touche23-ValueEval", name="nyt-meta", split="meta")
```
- `value-categories`: Contains a single JSON-entry with the structure of level 2 and level 1 values regarding the value taxonomy:
```
{
"<value category>": {
"<level 1 value>": [
"<exemplary effect a corresponding argument might target>",
...
], ...
}, ...
}
```
As this configuration contains just a single entry, an example usage could be:
```python
value_categories = load_dataset("webis/Touche23-ValueEval", name="value-categories", split="meta")[0]
```
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@Article{mirzakhmedova:2023a,
  author = {Nailia Mirzakhmedova and Johannes Kiesel and Milad Alshomary and Maximilian Heinrich and Nicolas Handke and
            Xiaoni Cai and Valentin Barriere and Doratossadat Dastgheib and Omid Ghahroodi and {Mohammad Ali} Sadraei and
            Ehsaneddin Asgari and Lea Kawaletz and Henning Wachsmuth and Benno Stein},
doi = {10.48550/arXiv.2301.13771},
journal = {CoRR},
month = jan,
publisher = {arXiv},
title = {{The Touch{\'e}23-ValueEval Dataset for Identifying Human Values behind Arguments}},
volume = {abs/2301.13771},
year = 2023
}
``` |
false | # Dataset Card for "ml-arxiv-papers"
This is a dataset containing ML ArXiv papers. The dataset is a version of the original one from [CShorten](https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers), which is a part of the ArXiv papers dataset from [Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv).
Three processing steps were applied to the source data (a sketch follows the list):
1. removal of useless columns;
2. train-test split;
3. removal of '\n' characters and trimming of spaces at both ends of the text.
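A minimal sketch of this pipeline, assuming the source columns are named `title` and `abstract` and using an illustrative 10% test fraction:
```python
from datasets import load_dataset

src = load_dataset("CShorten/ML-ArXiv-Papers", split="train")

# 1. drop everything except the text columns (column names are an assumption)
src = src.remove_columns([c for c in src.column_names if c not in ("title", "abstract")])

# 3. strip newlines and trim surrounding spaces
def clean(example):
    return {k: " ".join(example[k].split()) for k in ("title", "abstract")}
src = src.map(clean)

# 2. split into train and test (fraction and seed are illustrative)
splits = src.train_test_split(test_size=0.1, seed=42)
print(splits)
```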
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Dataset Card for medical
A Chinese medical dataset.
- LLM Supervised Finetuning repository: https://github.com/shibing624/textgen
- MeidcalGPT repository: https://github.com/shibing624/MedicalGPT
## Dataset Description
medical is a Chinese medical dataset that can be used for training large language models in the medical domain.
```
tree medical
|-- finetune # supervised fine-tuning data, usable for SFT and RLHF
| |-- test_en_1.json
| |-- test_zh_0.json
| |-- train_en_1.json
| |-- train_zh_0.json
| |-- valid_en_1.json
| `-- valid_zh_0.json
|-- medical.py # loading script for display as an HF dataset
|-- pretrain # continued pre-training data
| |-- medical_book_zh.json
| |-- test_encyclopedia.json
| |-- train_encyclopedia.json
| `-- valid_encyclopedia.json
|-- README.md
`-- reward # reward model data
|-- test.json
|-- train.json
`-- valid.json
```
### Original Dataset Summary
#### pretrain
- train_encyclopedia.json: 360k entries from the medical encyclopedia data [FreedomIntelligence/huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa); the questions and answers are concatenated into a fluent `text` field, used during pre-training to inject medical knowledge.
- medical_book_zh.json: 8,475 entries of text from medical textbooks, source: https://github.com/jind11/MedQA, original dataset: [google drive](https://drive.google.com/u/0/uc?export=download&confirm=t&id=1ImYUSLk9JbgHXOemfvyiDiirluZHPeQw); long passages were merely split into chunks of at most 2,048 characters.
#### finetune
- train_zh_0.json: 1.95M entries in total, from 1) the six-department medical consultation data of the Chinese medical dialogue dataset [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data) (790k entries); 2) the online medical encyclopedia huatuo_encyclopedia_qa (360k entries); 3) the medical knowledge graph huatuo_knowledge_graph_qa (790k entries). The three parts combined yield 1.95M entries.
- train_en_1.json: 110k entries from the English medical consultation dialogue data [Kent0n-Li/ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor), merging the HealthCareMagic-100k and GenMedGPT-5k datasets.
#### reward
- train.json: 4,000 entries; the questions are 4,000 randomly sampled questions from the Chinese medical dialogue dataset [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data), `response_chosen` comes from the doctors' replies in that dataset, and `response_rejected` comes from the replies of the BenTsao model [SCIR-HI/Huatuo-Llama-Med-Chinese](https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese).
### Supported Tasks and Leaderboards
Chinese medical dialogue models.
The dataset is designed for training pretrained language models on medical tasks.
### Languages
The data are in Chinese.
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
head pretrain/train_encyclopedia.json
```json
{"text": "怀孕后嘴巴很淡怎么办?有孕妇在怀孕之后,发现自己嘴巴比较淡,出现这种情况的原因其实也非常的复杂,首先和妊娠反应有直接的关系,这是一种正常的情况,另外有些孕妇平常不注意口腔卫生,舌苔比较厚或者自己有了一些消化系统方面的疾病,这就要求人们必须要及时的进行处理。女性在怀孕之后,身体就会出现一些明显的变化,首先人们月经会停止,另外也会有恶心、呕吐等一些妊娠反应,不过这些都是正常的。有些孕妇发现自己在怀孕之后,口味发生了很大的变化,嘴巴变得非常的淡。其实这也和激素变化有直接的关系,可能是妊娠反应所致,在怀孕期间,因为受到体内激素水平的变化,所以就会有肠胃系统的改变,人们可能会出现食欲不振,消化不良等症状表现,也有一些孕妇会发现自己嘴巴没有味道,会有口苦的症状,而这也是正常的孕期反应,人们在平常要多喝一些水,多吃一些清淡营养的食物慢慢就会改善。也有可能是舌苔过厚所致,孕妇嘴巴里面没有味道,很有可能是舌苔比较重、舌苔过厚导致的,这样就会影响到味蕾对味道的敏感度,不仅嘴巴里面没有味道,甚至有些孕妇在说话的时候也会有口臭,这就要求人们在每天早上漱口的时候,必须要用牙刷刷舌苔开始,不要清理的特别深,以免人们会有呕吐,慢慢习惯之后再往深一点的清洗,一般2到3天就会得到改善。嘴巴感到比较淡,其实也和脾胃虚寒有直接的关系,消化系统疾病,内分泌疾病,营养不良等,但有可能导致舌头部位因为味蕾的敏感度下降,产生口淡之感,患者会有食欲不振的表现,发现病症及时就诊治疗。"}
```
head finetune/train_zh_0.json
```json
{"instruction": "曲匹地尔片的用法用量", "input": "", "output": "注意:同种药品可由于不同的包装规格有不同的用法或用量。本文只供参考。如果不确定,请参看药品随带的说明书或向医生询问。口服。一次50~100mg(1-2片),3次/日,或遵医嘱。"}
```
head finetune/train_en_1.json
```json
{"instruction": "If you are a doctor, please answer the medical questions based on the patient's description.", "input": "Doctor, I have been experiencing sudden and frequent panic attacks. I don't know what to do.", "output": "Well, based on what you're telling me, it sounds like you may be suffering from panic disorder. The best course of action is to start with psychotherapy and mental health counseling. Additionally, we should conduct an electrocardiogram to make sure that there are no physical issues causing your panic attacks. We will also need to perform a depression screen and a toxicology screen to rule out any other underlying causes. Finally, I would recommend a comprehensive psychological and psychiatric evaluation and therapy to help manage your symptoms."}
```
head reward/train.json
```json
{"question": "孩子感统失调,学习不行怎么办?,孩子说话晚,走路晚,现在小学二年级,学习跟不上,理解力差,运动协调性差,家里很着急,不知怎么办。", "response_chosen": "病情分析:你好!孩子说话晚,走路也晚,很可能是大脑本身发育不好引起的发育迟缓。而五岁时所致的智力检查为临界范围,那就是说孩子的智商是有问题的,也应考虑与大脑发育不好有关。指导意见:人的大脑在头一年发育最快,可塑性最强,在头三年可塑性还是可以的,超过三岁再进行训练,效果就不怎么好了。建议再给孩子做一做智力测试,如果孩子的智商还是在临界范围,那就要考虑让孩子去特殊学校进行康复训练,而不是继续在普通小学就读,否则对孩子来说,就是强人所难了。希望自己的孩子能聪明,这是每个家长都会有的心愿,但如果孩子自身的条件就是不能跟上同龄孩子,那家长也要面对这个事实的,对吗?医生询问:", "response_rejected": "建议家长先带孩子去正规医院做全面检查以确定病因和病情严重程度;同时可以进行物理治疗、康复训练等辅助治疗方法。"}
```
### Data Fields
#### Pre-training dataset (pretrain)
Field descriptions:
- text: text content
#### Instruction fine-tuning dataset (finetune)
Field descriptions:
- instruction: the instruction
- input: the question (may be empty)
- output: the answer
#### Reward model dataset (reward)
Field descriptions:
- question: the question
- response_chosen: the high-quality answer
- response_rejected: the low-quality answer
### Data Splits
```
> wc -l medical/*/*
500 medical/finetune/test_en_1.json
500 medical/finetune/test_zh_0.json
116617 medical/finetune/train_en_1.json
1949972 medical/finetune/train_zh_0.json
500 medical/finetune/valid_en_1.json
500 medical/finetune/valid_zh_0.json
8475 medical/pretrain/medical_book_zh.json
500 medical/pretrain/test_encyclopedia.json
361420 medical/pretrain/train_encyclopedia.json
500 medical/pretrain/valid_encyclopedia.json
100 medical/reward/test.json
3800 medical/reward/train.json
100 medical/reward/valid.json
2443484 total
```
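For convenience, the subsets can also be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the repository id is `shibing624/medical` and that the subfolder names above double as config names (both are assumptions):
```python
from datasets import load_dataset

# Hypothetical repo id and config name; substitute the actual Hub values
ds = load_dataset("shibing624/medical", "finetune")
print(ds["train"][0])  # {"instruction": ..., "input": ..., "output": ...}
```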
### Licensing Information
The dataset is available under the Apache 2.0 license.
### Citation Information
- https://github.com/Toyhom/Chinese-medical-dialogue-data
- https://github.com/FreedomIntelligence/Huatuo-26M/blob/main/README_zh-CN.md
- https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa
- https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa
- https://github.com/Kent0n-Li/ChatDoctor
Several other high-quality reward model datasets for reference:
- https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise
- https://huggingface.co/datasets/sunzeyeah/chinese_chatgpt_corpus
- https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12
- https://huggingface.co/datasets/Dahoas/rm-static
### Contributions
Compiled and uploaded by [shibing624](https://github.com/shibing624).
|
false |
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The physics dataset is composed of 20K problem-solution pairs obtained using GPT-4. The problem-solution pairs were generated from 25 physics topics, with 25 subtopics per topic and 32 problems for each (topic, subtopic) pair.
We provide the data in `physics.zip`.
## Data Fields
**The data fields for files in `physics.zip` are as follows:**
* `role_1`: assistant role
* `topic`: physics topic
* `sub_topic`: physics subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
**Download in python**
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/physics", repo_type="dataset", filename="physics.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
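After downloading, the archive can be inspected locally. A small sketch; the internal file layout of `physics.zip` is an assumption, so adjust the glob pattern to the actual contents:
```python
import json
import pathlib
import zipfile

# Extract the downloaded archive and peek at one record
with zipfile.ZipFile("datasets/physics.zip") as zf:
    zf.extractall("datasets/physics")

# Assumption: records are stored as individual JSON files with the fields above
first = next(pathlib.Path("datasets/physics").rglob("*.json"))
sample = json.loads(first.read_text(encoding="utf-8"))
print(sample["topic"], "-", sample["sub_topic"])
```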
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is provided for research purposes only.
---
license: cc-by-nc-4.0
---
|
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://github.com/kaistAI/CoT-Collection
- **Repository:** https://github.com/kaistAI/CoT-Collection
- **Paper:** https://arxiv.org/abs/2305.14045
- **Point of Contact:** sejune@lklab.io
### Dataset Summary
The CoT Collection is a dataset for improving the zero-shot and few-shot learning of language models via chain-of-thought fine-tuning (see the paper linked above). It contains 1,837,928 training examples, as listed in the data splits below.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| name | train |
|-------------------|------:|
|CoT-Collection|1837928|
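A minimal loading sketch (the repository id is assumed to match this card; replace it with the actual Hub id if it differs):
```python
from datasets import load_dataset

# Hypothetical repo id for this card
ds = load_dataset("kaist-ai/CoT-Collection", split="train")
print(len(ds))  # expected: 1,837,928 per the table above
```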
## Additional Information
### Citation Information
```
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
``` |
false |
# scientific_lay_summarisation - PLOS - normalized
This dataset is a modified version of [tomasg25/scientific_lay_summarization](https://huggingface.co/datasets/tomasg25/scientific_lay_summarisation) and contains scientific lay summaries that have been preprocessed [with this code](https://gist.github.com/pszemraj/bd344637af7c0c10ecf4ab62c4d0ce91). The preprocessing includes fixing punctuation and whitespace problems, and calculating the token length of each text sample using a tokenizer from the T5 model.
Original dataset details:
- **Repository:** https://github.com/TGoldsack1/Corpora_for_Lay_Summarisation
- **Paper:** [Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature](https://arxiv.org/abs/2210.09932)
## Data Cleaning
The text in both the "article" and "summary" columns was processed to ensure that punctuation and whitespace were consistent. The `fix_punct_whitespace` function was applied to each text sample to:
- Remove spaces before punctuation marks (except for parentheses)
- Add a space after punctuation marks (except for parentheses) if missing
- Handle spaces around parentheses
- Add a space after a closing parenthesis if followed by a word or opening parenthesis
- Handle spaces around quotation marks
- Handle spaces around single quotes
- Handle comma in numbers
## Tokenization
The length of each text sample was calculated in terms of tokens using the T5 tokenizer. The `calculate_token_length` function was used to encode each text sample using the tokenizer and return the number of resulting tokens. The resulting token lengths were added as new columns to the dataframes.
## Data Format
The resulting processed data files are stored in Apache Parquet format and can be loaded using the `pandas` library or the Hugging Face `datasets` library. The relevant column names and data types for summarization are:
```python
DatasetDict({
train: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 24773
})
test: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 1376
})
validation: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 1376
})
})
```
## Usage
Load the desired parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`:
```python
# download the dataset files by clicking on 'use in datasets' and cloning
import pandas as pd
# Load train set
df = pd.read_parquet("scientific_lay_summarisation-plos-norm/train.parquet")
print(df.info())
```
And here is an example using `datasets`:
```python
from datasets import load_dataset
dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm")
train_set = dataset['train']
# Print the first few samples
for i in range(5):
print(train_set[i])
```
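Because token lengths are precomputed, a split can be filtered to fit a model's context window without re-tokenizing; continuing the `datasets` example above:
```python
# Keep only samples whose article fits a 4096-token encoder input
short_train = train_set.filter(lambda x: x["article_length"] <= 4096)
print(f"{len(short_train)} of {len(train_set)} samples kept")
```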
## Token Lengths
For train split:

---
|
false |
# Dataset Card for Swiss Citation Extraction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Citation Extraction is a multilingual, diachronic dataset of 131K Swiss Federal Supreme Court (FSCS) cases. The dataset supports a challenging token classification task: extracting citations from the considerations of court decisions.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages, three of which (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents |
|------------|------------|----------------------|
| German | **de** | 85K |
| French | **fr** | 38K |
| Italian | **it** | 8K |
## Dataset Structure
### Data Fields
```
decision_id: (string)
considerations: (sequence)
NER_labels: (sequence)
law_area: (string)
language: (string)
year: (int64)
chamber: (string)
region: (string)
```
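A minimal loading sketch (the repository id below is a placeholder; use the actual Hub id of this dataset):
```python
from datasets import load_dataset

# Hypothetical repo id; replace with the dataset's actual Hub id
ds = load_dataset("rcds/swiss_citation_extraction", split="train")
print(ds[0]["decision_id"], ds[0]["language"], ds[0]["year"])
```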
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
*Visu, Ronja, Joel*
*Title: Blabliblablu*
*Name of conference*
```
cit
```
### Contributions |
false | # Dataset Card for "dev_pretrain"
Development pretraining data for the [Tigerbot model](https://github.com/TigerResearch/TigerBot#%E6%A8%A1%E5%9E%8B%E4%B8%8B%E8%BD%BD).
Used in [train_clm.py](https://github.com/TigerResearch/TigerBot/blob/main/train/train_clm.py).
## Usage
```python
import datasets
ds_pretrain = datasets.load_dataset('TigerResearch/dev_pretrain')
```
## Field
- content: corpus text
false |
# Dataset Card for "tner/wikineural"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/2021.findings-emnlp.215/](https://aclanthology.org/2021.findings-emnlp.215/)
- **Dataset:** WikiNeural
- **Domain:** Wikipedia
- **Number of Entity Types:** 16
### Dataset Summary
The WikiNeural NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`
## Dataset Structure
### Data Instances
An example of `train` of `de` looks as follows.
```
{
'tokens': [ "Dieses", "wiederum", "basierte", "auf", "dem", "gleichnamigen", "Roman", "von", "Noël", "Calef", "." ],
'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0 ]
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/wikineural/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-LOC": 3,
"I-LOC": 4,
"B-ORG": 5,
"I-ORG": 6,
"B-ANIM": 7,
"I-ANIM": 8,
"B-BIO": 9,
"I-BIO": 10,
"B-CEL": 11,
"I-CEL": 12,
"B-DIS": 13,
"I-DIS": 14,
"B-EVE": 15,
"I-EVE": 16,
"B-FOOD": 17,
"I-FOOD": 18,
"B-INST": 19,
"I-INST": 20,
"B-MEDIA": 21,
"I-MEDIA": 22,
"B-PLANT": 23,
"I-PLANT": 24,
"B-MYTH": 25,
"I-MYTH": 26,
"B-TIME": 27,
"I-TIME": 28,
"B-VEHI": 29,
"I-VEHI": 30,
"B-MISC": 31,
"I-MISC": 32
}
```
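With the label map above, tag ids can be decoded back to entity labels; a short sketch (the language config name is an assumption based on the splits table below):
```python
import json
import urllib.request

from datasets import load_dataset

# The "en" config name is an assumption based on the splits table below
ds = load_dataset("tner/wikineural", "en", split="train")

url = "https://huggingface.co/datasets/tner/wikineural/raw/main/dataset/label.json"
label2id = json.load(urllib.request.urlopen(url))
id2label = {v: k for k, v in label2id.items()}

example = ds[0]
print(list(zip(example["tokens"], (id2label[t] for t in example["tags"]))))
```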
### Data Splits
| language | train | validation | test |
|:-----------|--------:|-------------:|-------:|
| de | 98640 | 12330 | 12372 |
| en | 92720 | 11590 | 11597 |
| es | 76320 | 9540 | 9618 |
| fr | 100800 | 12600 | 12678 |
| it | 88400 | 11050 | 11069 |
| nl | 83680 | 10460 | 10547 |
| pl | 108160 | 13520 | 13585 |
| pt | 80560 | 10070 | 10160 |
| ru | 92320 | 11540 | 11580 |
### Citation Information
```
@inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
author = "Tedeschi, Simone and
Maiorca, Valentino and
Campolungo, Niccol{\`o} and
Cecconi, Francesco and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.215",
doi = "10.18653/v1/2021.findings-emnlp.215",
pages = "2521--2533",
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
}
``` |
false |
# CORD 19
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/allen-institute-for-ai/CORD-19-research-challenge
### Dataset Summary
In response to the COVID-19 pandemic, the White House and a coalition of leading research groups have prepared the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a resource of over 1,000,000 scholarly articles, including over 400,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. This freely available dataset is provided to the global research community to apply recent advances in natural language processing and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease.
This is a processed version of the dataset, in which we removed some empty entries and formatted the data to be compatible with Alpaca-style training. For more details on the data, please refer to the original publication.
### Citation Information
```
@inproceedings{wang-etal-2020-cord,
title = "{CORD-19}: The {COVID-19} Open Research Dataset",
author = "Wang, Lucy Lu and Lo, Kyle and Chandrasekhar, Yoganand and Reas, Russell and Yang, Jiangjiang and Burdick, Doug and Eide, Darrin and Funk, Kathryn and Katsis, Yannis and Kinney, Rodney Michael and Li, Yunyao and Liu, Ziyang and Merrill, William and Mooney, Paul and Murdick, Dewey A. and Rishi, Devvret and Sheehan, Jerry and Shen, Zhihong and Stilson, Brandon and Wade, Alex D. and Wang, Kuansan and Wang, Nancy Xin Ru and Wilhelm, Christopher and Xie, Boya and Raymond, Douglas M. and Weld, Daniel S. and Etzioni, Oren and Kohlmeier, Sebastian",
booktitle = "Proceedings of the 1st Workshop on {NLP} for {COVID-19} at {ACL} 2020",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.nlpcovid19-acl.1"
}
``` |
true | # Dataset Card for TSATC: Twitter Sentiment Analysis Training Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TSATC](https://github.com/cblancac/SentimentAnalysisBert/blob/main/data)
- **Repository:** [TSATC](https://github.com/cblancac/SentimentAnalysisBert/blob/main/data)
- **Paper:** [TSATC: Twitter Sentiment Analysis Training Corpus](http://thinknook.com/twitter-sentiment-analysis-training-corpus-dataset-2012-09-22/)
- **Point of Contact:** [Carlos Blanco](carblacac7@gmail.com)
### Dataset Summary
TSATC: Twitter Sentiment Analysis Training Corpus
The original Twitter Sentiment Analysis Dataset contains 1,578,627 classified tweets, each row is marked as 1 for positive sentiment and 0 for negative sentiment. It can be downloaded from http://thinknook.com/wp-content/uploads/2012/09/Sentiment-Analysis-Dataset.zip.
The dataset is based on data from the following two sources:
University of Michigan Sentiment Analysis competition on Kaggle
Twitter Sentiment Corpus by Niek Sanders
This dataset has been transformed by randomly selecting a subset of the tweets, applying a cleaning process, and dividing them between test and train subsets, keeping a balance between the number of positive and negative tweets within each of these subsets. These two files can be found at https://github.com/cblancac/SentimentAnalysisBert/blob/main/data.
Finally, the train subset has been divided into two smaller datasets, train (80%) and validation (20%). The final dataset consists of these two new subsets plus the previous test dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
Below are two examples from the dataset:
| | Text | Feeling |
| :-- | :---------------------------- | :------ |
| (1) | blaaah. I don't feel good aagain. | 0 |
| (2) | My birthday is coming June 3. | 1 |
### Data Fields
In the final dataset, all files are in the JSON format with two columns:
| Column Name | Data |
| :------------ | :-------------------------- |
| text | A sentence (or tweet) |
| feeling | The feeling of the sentence |
Each feeling has two possible values: `0` indicates the sentence has a negative sentiment, while `1` indicates a positive feeling.
### Data Splits
The number of examples and the proportion of sentiments are shown below:
| Data                | Train   | Validation    | Test   |
| :------------------ | ------: | ------------: | -----: |
| Size                | 119,988 | 29,997        | 61,998 |
| Labeled positive    | 60,019  | 14,947        | 31,029 |
| Labeled negative    | 59,969  | 15,050        | 30,969 |
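A minimal loading sketch (the repository id is an assumption; substitute this card's actual Hub id):
```python
from datasets import load_dataset

# Hypothetical repo id for this card
ds = load_dataset("carblacac/twitter-sentiment-analysis")
print(ds["train"][0])  # e.g. {"text": "...", "feeling": 0}
```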
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Mentioned above.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Citation Information
```
@InProceedings{paws2019naacl,
title = {{TSATC: Twitter Sentiment Analysis Training Corpus}},
author = {Ibrahim Naji},
booktitle = {thinknook},
year = {2012}
}
```
### Contributions
Thanks to myself [@carblacac](https://github.com/cblancac/) for adding this transformed dataset from the original one. |
false | # Dataset Card for LogiQA
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LogiQA is constructed from the logical comprehension problems from publicly available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates' critical thinking and problem solving. This dataset includes the English versions only; the Chinese versions are available via the homepage/original source.
## Dataset Structure
### Data Instances
An example from `train` looks as follows:
```
{'context': 'Continuous exposure to indoor fluorescent lights is beneficial to the health of hamsters with heart disease. One group of hamsters exposed to continuous exposure to fluorescent lights has an average lifespan that is 2.5% longer than another one of the same species but living in a black wall.',
'query': 'Which of the following questions was the initial motivation for conducting the above experiment?',
'options': ['Can hospital light therapy be proved to promote patient recovery?',
'Which one lives longer, the hamster living under the light or the hamster living in the dark?',
'What kind of illness does the hamster have?',
'Do some hamsters need a period of darkness?'],
'correct_option': 0}
```
### Data Fields
- `context`: a `string` feature.
- `query`: a `string` feature.
- `answers`: a `list` feature containing `string` features.
- `correct_option`: a `string` feature.
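Given these fields, the correct answer text is recovered by indexing `options` with `correct_option`:
```python
def answer_text(example):
    """Return the text of the correct option for a LogiQA example."""
    return example["options"][example["correct_option"]]

# For the train instance shown above, this yields:
# 'Can hospital light therapy be proved to promote patient recovery?'
```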
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 7376| 651| 651|
## Additional Information
### Dataset Curators
The original LogiQA was produced by Jian Liu, Leyang Cui , Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
}
```
### Contributions
[@lucasmccabe](https://github.com/lucasmccabe) added this dataset. |
false | # Dataset Card for "bert_dataset_202203"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true |
# Dataset Card for Moral Stories
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Moral Stories repository](https://github.com/demelin/moral_stories)
- **Repository:** [Moral Stories repository](https://github.com/demelin/moral_stories)
- **Paper:** [Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences](https://aclanthology.org/2021.emnlp-main.54/)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](demelin.github.io)
### Dataset Summary
Moral Stories is a crowd-sourced dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations, and their respective consequences. All stories in the dataset consist of seven sentences, belonging to the following categories:
- Norm: A guideline for social conduct generally observed by most people in everyday situations.
- Situation: Setting of the story that introduces story participants and describes their environment.
- Intention: Reasonable goal that one of the story participants (the actor), wants to fulfill.
- Normative action: An action by the actor that fulfills the intention and observes the norm.
- Normative consequence: Possible effect of the normative action on the actor's environment.
- Divergent action: An action by the actor that fulfills the intention and diverges from the norm.
- Divergent consequence: Possible effect of the divergent action on the actor's environment.
Accordingly, each story's constituent sentences can be grouped into three segments. The context segment grounds actions within a particular social scenario, the normative path contains the normative action and its consequence, whereas the divergent path includes their norm-divergent analogues. Combining the context segment separately with each path yields two self-contained sub-stories differing in the adherence of the described events to social expectations. See also [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Supported Tasks and Leaderboards
- commonsense-reasoning / social-reasoning / moral-reasoning: The dataset can be used to evaluate whether pretrained language models can reason about actions that are either consistent or inconsistent with social norms, the consequences of actions, and the norms that may motivate specific action choices. A [BART model](https://huggingface.co/facebook/bart-large) can be used for this purpose.
- text-classification: This dataset can be used to train models to differentiate between normative and divergent actions as well as between plausible and implausible consequences of actions. A [RoBERTa-based model](https://huggingface.co/roberta-base) can be used for this purpose.
- text-generation: The dataset can be used to train models to generate normative / divergent action hypotheses and their consequences, or norms that may explain certain action choices, conditioned on contexts of different scope. A [GPT-based model](https://huggingface.co/EleutherAI/gpt-neo-2.7B) can be used for this purpose.
### Languages
*Moral Stories* is available in English, with mainstream US Englishes being the dominant variety, as indicated by self-reported contributor demographics provided in the [*Ethical Considerations* section of the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Dataset Structure
### Data Instances
Due to its highly structured nature, *Moral Stories* enables a variety of tasks, such as action classification or norm generation. Furthermore, we provide different data splits in an attempt to challenge generalization abilities of the evaluated models. For details, refer to [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf). Here, we replicate one instance from the full, task-agnostic dataset, one instance from a classification task dataset (cls-action+norm), and one instance from a generation task dataset (gen-consequence$action+context):
"full" instance:
```json
{"ID": "3OJSZ2ATDTUMG7NGR28931AHDQF57F",
 "norm": "It's kind to wait for people.",
 "situation": "Josh drives Penelope to work every day and is hanging outside her house to pick her up, but there's no sign of her.",
 "intention": "Josh needs to get to work.",
 "moral_action": "Josh continues to sit outside Penelope's house until she walks out the door a few minutes later.",
 "moral_consequence": "Penelope appreciates Josh's patience and thanks him for waiting around.",
 "immoral_action": "Josh shrugs his shoulders and drives away to work after sitting around a moment.",
 "immoral_consequence": "Penelope comes running out of the house shouting to wait, but she's too late."}
```
positive and negative "cls-action+norm" instances (i.e. classify an action as normative or divergent given the action and a relevant norm):
```json
{"ID": "3EF8EXOTT2TGCKNZROVU3129RYNJ1G1",
 "norm": "It's bad to be too loud and wake people up.",
 "moral_action": "Jordan quietly makes himself a bowl of cereal for breakfast.",
 "label": "1"}

{"ID": "3EF8EXOTT2TGCKNZROVU3129RYNJ1G0",
 "norm": "It's bad to be too loud and wake people up.",
 "immoral_action": "Jordan noisily bangs a lot of pots and pans when making his breakfast.",
 "label": "0"}
```
"gen-consequence$action+context" instance (i.e. generate either a plausible or implausible consequence given an action and its context):
```json
{"ID": "33FOTY3KENJT46HTHWE8GXPE3LE1C03",
 "norm": "It's mean to hit a bear with a guitar.",
 "situation": "Joe was camping in Montana with his band when he came across a black bear.",
 "intention": "Joe wanted to scare away the black bear.",
 "immoral_action": "Joe grabbed his guitarist's guitar and hit the bear with it.",
 "immoral_consequence": "The bear suffers a concussion and doesn't understand why he was hit.",
 "label": "1"}
```
### Data Fields
- "ID": Unique identifier ID for this dataset instance.
- "norm": A guideline for social conduct generally observed by most people in everyday situations.
- "situation": Setting of the story that introduces story participants and describes their environment.
- "intention": Reasonable goal that one of the story participants (the actor), wants to fulfill.
- "moral_(i.e. 'normative')_action": An action by the actor that fulfills the intention and observes the norm.
- "moral_consequence": Possible effect of the normative action on the actor's environment.
- "immoral_(i.e. 'divergent')_action": An action by the actor that fulfills the intention and diverges from the norm.
- "immoral_consequence": Possible effect of the divergent action on the actor's environment.
- "label": Data instance label; for action-related tasks, "0" corresponds to an immoral / divergent action while "1" corresponds to a moral / normative action, for consequence-related tasks, "0" corresponds to a plausible consequence while "1" corresponds to an implausible consequence (for generation tasks, label is always set to "1")
### Data Splits
For classification tasks, we examined three data split strategies:
- *Norm Distance*: Norms are based on social consensus and may, as such, change across time and between locations. Therefore, we are also interested in how well classification models can generalize to novel norms. To estimate this, we split the dataset by embedding
norms found in the collected stories and grouping them into 1k clusters via agglomerative clustering. Clusters are ordered according to their degree of isolation, defined as the cosine distance between a cluster's centroid and the next-closest cluster's centroid. Stories with norms from most isolated clusters are assigned to test and development sets, with the rest forming the training set.
- *Lexical Bias*: Tests the susceptibility of classifiers to surface-level lexical correlations. We first identify 100 biased lemmas that occur most frequently either in normative or divergent actions. Each story is then assigned a bias score corresponding to the total number of biased lemmas present in both actions (or consequences). Starting with the lowest bias scores, stories are assigned to the test, development, and, lastly, training set.
- *Minimal Pairs*: Evaluates the model's ability to perform nuanced social reasoning. Splits are obtained by ordering stories according to the Damerau-Levenshtein distance between their actions (or consequences) and assigning stories with lowest distances to the test set, followed by the development set. The remainder makes up the training set.
For generation tasks, only the *Norm Distance* split strategy is used. For more details, refer to [*Section 3* and *Appendix C* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
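A minimal loading sketch (the repository id and config naming are assumptions based on the task/split scheme described above):
```python
from datasets import load_dataset

# Hypothetical repo id and config name; adjust to the actual Hub layout
ds = load_dataset("demelin/moral_stories", "full", split="train")
print(ds[0]["norm"], "->", ds[0]["moral_action"])
```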
## Dataset Creation
### Curation Rationale
Please refer to [*Section 2* and the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Source Data
#### Initial Data Collection and Normalization
Please refer to [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
#### Who are the source language producers?
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Annotations
#### Annotation process
Please refer to [*Section 2* and the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
#### Who are the annotators?
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Discussion of Biases
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Other Known Limitations
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Additional Information
### Dataset Curators
[Denis Emelin](demelin.github.io)
### Licensing Information
MIT
### Citation Information
```
@article{Emelin2021MoralSS,
  title={Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences},
  author={Denis Emelin and Ronan Le Bras and Jena D. Hwang and Maxwell Forbes and Yejin Choi},
  journal={ArXiv},
  year={2021},
  volume={abs/2012.15738}
}
```
false | # Dataset Card for "cmu_wiki_qa"
A filtered / cleaned version of the http://www.cs.cmu.edu/~ark/QA-data/ Q&A dataset, which provides manually-generated factoid questions from Wikipedia articles.
**Acknowledgments**
These data were collected by Noah Smith, Michael Heilman, Rebecca Hwa, Shay Cohen, Kevin Gimpel, and many students at Carnegie Mellon University and the University of Pittsburgh between 2008 and 2010.
Their research project was supported by NSF IIS-0713265 (to Smith), an NSF Graduate Research Fellowship (to Heilman), NSF IIS-0712810 and IIS-0745914 (to Hwa), and Institute of Education Sciences, U.S. Department of Education R305B040063 (to Carnegie Mellon).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
false |
# ImageRewardDB
## Dataset Description
- **Homepage: https://huggingface.co/datasets/wuyuchen/ImageRewardDB**
- **Repository: https://github.com/THUDM/ImageReward**
- **Paper: https://arxiv.org/abs/2304.05977**
### Dataset Summary
ImageRewardDB is a comprehensive text-to-image comparison dataset, focusing on text-to-image human preference.
It consists of 137k pairs of expert comparisons, based on text prompts and corresponding model outputs from DiffusionDB.
To build ImageRewardDB, we designed a pipeline tailored for it, establishing criteria for quantitative assessment and
annotator training, optimizing the labeling experience, and ensuring quality validation. ImageRewardDB is now publicly available at
[🤗 Hugging Face Dataset](https://huggingface.co/datasets/wuyuchen/ImageRewardDB).
Notice: all images in ImageRewardDB are collected from DiffusionDB; in addition, we grouped together images corresponding to the same prompt.
### Languages
The text in the dataset is all in English.
### Four Subsets
Considering that the ImageRewardDB contains a large number of images, we provide four subsets in different scales to support different needs.
For all subsets, the validation and test splits remain the same. The validation split (1.10GB) contains 412 prompts and 2.6K images (7.32K pairs), and
the test split (1.16GB) contains 466 prompts and 2.7K images (7.23K pairs). The information on the train split at different scales is as follows:
|Subset|Num of Pairs|Num of Images|Num of Prompts|Size|
|:--|--:|--:|--:|--:|
|ImageRewardDB 1K|17.6K|6.2K|1K|2.7GB|
|ImageRewardDB 2K|35.5K|12.5K|2K|5.5GB|
|ImageRewardDB 4K|71.0K|25.1K|4K|10.8GB|
|ImageRewardDB 8K|141.1K|49.9K|8K|20.9GB|
## Dataset Structure
All the data in this repository is stored in a well-organized way. The 62.6K images in ImageRewardDB are split into several folders,
stored in corresponding directories under "./images" according to their split. Each folder contains around 500 prompts, their corresponding
images, and a JSON file that links each image with its corresponding prompt and annotation.
The file structure is as follows:
```
# ImageRewardDB
./
├── images
│ ├── train
│ │ ├── train_1
│ │ │ ├── 0a1ed3a5-04f6-4a1b-aee6-d584e7c8ed9c.webp
│ │ │ ├── 0a58cfa8-ff61-4d31-9757-27322aec3aaf.webp
│ │ │ ├── [...]
│ │ │ └── train_1.json
│ │ ├── train_2
│ │ ├── train_3
│ │ ├── [...]
│ │ └── train_32
│ ├── validation
│ │ └── [...]
│ └── test
│ └── [...]
├── metadata-train.parquet
├── metadata-validation.parquet
└── metadata-test.parquet
```
The sub-folders have the name of {split_name}_{part_id}, and the JSON file has the same name as the sub-folder.
Each image is a lossless WebP file and has a unique name generated by [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
### Data Instances
For instance, below is the image of `1b4b2d61-89c2-4091-a1c0-f547ad5065cb.webp` and its information in train_1.json.
```json
{
"image_path": "images/train/train_1/0280642d-f69f-41d1-8598-5a44e296aa8b.webp",
"prompt_id": "000864-0061",
"prompt": "painting of a holy woman, decorated, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha, 8 k ",
"classification": "People",
"image_amount_in_total": 9,
"rank": 5,
"overall_rating": 4,
"image_text_alignment_rating": 3,
"fidelity_rating": 4
}
```
### Data Fields
* image: The image object
* prompt_id: The id of the corresponding prompt
* prompt: The text of the corresponding prompt
* classification: The classification of the corresponding prompt
* image_amount_in_total: Total amount of images related to the prompt
* rank: The relative rank of the image in all related images
* overall_rating: The overall score of this image
* image_text_alignment_rating: The score of how well the generated image matches the given text
* fidelity_rating: The score of whether the output image is true to the shape and characteristics that the object should have
### Data Splits
As we mentioned above, all scales of the subsets we provided have three splits of "train", "validation", and "test".
And all the subsets share the same validation and test splits.
### Dataset Metadata
We also include three metadata tables `metadata-train.parquet`, `metadata-validation.parquet`, and `metadata-test.parquet` to
help you access and comprehend ImageRewardDB without downloading the Zip files.
All the tables share the same schema, and each row refers to an image. The schema is shown below,
and actually, the JSON files we mentioned above share the same schema:
|Column|Type|Description|
|:---|:---|:---|
|`image_path`|`string`|The relative path of the image in the repository.|
|`prompt_id`|`string`|The id of the corresponding prompt.|
|`prompt`|`string`|The text of the corresponding prompt.|
|`classification`|`string`| The classification of the corresponding prompt.|
|`image_amount_in_total`|`int`| Total amount of images related to the prompt.|
|`rank`|`int`| The relative rank of the image in all related images.|
|`overall_rating`|`int`| The overall score of this image.|
|`image_text_alignment_rating`|`int`|The score of how well the generated image matches the given text.|
|`fidelity_rating`|`int`|The score of whether the output image is true to the shape and characteristics that the object should have.|
Below is an example row from metadata-train.parquet.
|image_path|prompt_id|prompt|classification|image_amount_in_total|rank|overall_rating|image_text_alignment_rating|fidelity_rating|
|:---|:---|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---|:---|:---|:---|:---|:---|
|images/train/train_1/1b4b2d61-89c2-4091-a1c0-f547ad5065cb.webp|001324-0093|a magical forest that separates the good world from the dark world, ...|Outdoor Scenes|8|3|6|6|6|
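The metadata tables can be inspected without downloading any images; a small sketch using `huggingface_hub` and `pandas`:
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Fetch only the training metadata parquet file (no image download needed)
path = hf_hub_download(repo_id="THUDM/ImageRewardDB", repo_type="dataset",
                       filename="metadata-train.parquet")
meta = pd.read_parquet(path)
print(meta[["prompt_id", "rank", "overall_rating"]].head())
```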
## Loading ImageRewardDB
You can use the Hugging Face [Datasets](https://huggingface.co/docs/datasets/quickstart) library to easily load the ImageRewardDB.
As we mentioned before, we provide four subsets in the scales of 1k, 2k, 4k, and 8k. You can load them using as following:
```python
from datasets import load_dataset
# Load the 1K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "1k")
# Load the 2K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "2k")
# Load the 4K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "4k")
# Load the 8K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "8k")
```
## Additional Information
### Licensing Information
The ImageRewardDB dataset is available under the [Apache license 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```
@misc{xu2023imagereward,
title={ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation},
author={Jiazheng Xu and Xiao Liu and Yuchen Wu and Yuxuan Tong and Qinkai Li and Ming Ding and Jie Tang and Yuxiao Dong},
year={2023},
eprint={2304.05977},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
true |
# Leyzer: A Dataset for Multilingual Virtual Assistants
Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language understanding (NLU) models and the strategies of localization of
virtual assistants. It consists of 20 domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of samples, ranging from 1 to 672
sentences per intent. For more statistics, please refer to the project wiki.
|
false | |
false |
# 20,000+ chinese sentences with translations and pinyin
- Source: https://mnemosyne-proj.org/cards/20000-chinese-sentences-translations-and-pinyin
- Contributed by: Brian Vaughan http://brianvaughan.net/
# Dataset Structure
Each sample consists of:
1. English sentence
2. HSK level
3. Chinese translation
4. Pinyin
5. separator (`--`)
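A parsing sketch under the assumption that the export is a plain-text file in which each record lists these five lines in order (the filename is hypothetical):
```python
# Hypothetical filename; records assumed to be "--"-separated plain-text blocks
with open("chinese_sentences.txt", encoding="utf-8") as f:
    blocks = [b.strip() for b in f.read().split("--") if b.strip()]

for block in blocks[:3]:
    english, hsk_level, chinese, pinyin = block.splitlines()[:4]
    print(english, "|", hsk_level, "|", chinese, "|", pinyin)
```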
# Other Info from the Source
### HSK level
All of the sentences came from sample sentences intended to describe a
particular word. HSK level (in the category name) signifies the HSK
level of the word this sentence describes. Note that "HSK level" is
1-4.
### Limitation
This is a search of all characters in each level, including the
characters that longer words are composed of. This is why even HSK
level 4 sentences can appear in the "limited 1" category.
For example, 作主 (zuo4zhu3) is an HSK level 4 word. It contains 2
characters which both appear in other HSK level 1 words, and so the
sample sentence for 作主 (assuming that sentence contains no other
difficult words) might appear in the category "HSK 4; limited 1;"
|
false |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
false |
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
  "inputs": "Oración 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nOración 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nPregunta: ¿La oración 1 parafrasea la oración 2? ¿Si o no?",
  "targets": "Sí"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
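A minimal loading sketch; streaming is used because the corpus is large, and the language config name is an assumption based on the table below:
```python
from datasets import load_dataset

# The "en" config name is an assumption based on the table below
ds = load_dataset("bigscience/xP3", "en", split="train", streaming=True)
sample = next(iter(ds))
print(sample["inputs"][:100], "->", sample["targets"])
```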
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages such as `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. We machine-translated prompts for monolingual datasets, so languages with only crosslingual datasets (e.g. Translation) do not have non-English prompts. Languages without non-English prompts are equivalent to [xP3](https://huggingface.co/datasets/bigscience/xP3).
|Language|Kilobytes|%|Samples|%|Non-English prompts|
|--------|------:|-:|---:|-:|-:|
|tw|106288|0.11|265071|0.33| |
|bm|107056|0.11|265180|0.33| |
|ak|108096|0.11|265071|0.33| |
|ca|110608|0.11|271191|0.34| |
|eu|113008|0.12|281199|0.35| |
|fon|113072|0.12|265063|0.33| |
|st|114080|0.12|265063|0.33| |
|ki|115040|0.12|265180|0.33| |
|tum|116032|0.12|265063|0.33| |
|wo|122560|0.13|365063|0.46| |
|ln|126304|0.13|365060|0.46| |
|as|156256|0.16|265063|0.33| |
|or|161472|0.17|265063|0.33| |
|kn|165456|0.17|265063|0.33| |
|ml|175040|0.18|265864|0.33| |
|rn|192992|0.2|318189|0.4| |
|nso|229712|0.24|915051|1.14| |
|tn|235536|0.24|915054|1.14| |
|lg|235936|0.24|915021|1.14| |
|rw|249360|0.26|915043|1.14| |
|ts|250256|0.26|915044|1.14| |
|sn|252496|0.26|865056|1.08| |
|xh|254672|0.26|915058|1.14| |
|zu|263712|0.27|915061|1.14| |
|ny|272128|0.28|915063|1.14| |
|ig|325440|0.33|950097|1.19|✅|
|yo|339664|0.35|913021|1.14|✅|
|ne|398144|0.41|315754|0.39|✅|
|pa|529632|0.55|339210|0.42|✅|
|sw|561392|0.58|1114439|1.39|✅|
|gu|566576|0.58|347499|0.43|✅|
|mr|674000|0.69|417269|0.52|✅|
|bn|854864|0.88|428725|0.54|✅|
|ta|943440|0.97|410633|0.51|✅|
|te|1384016|1.42|573354|0.72|✅|
|ur|1944416|2.0|855756|1.07|✅|
|vi|3113184|3.2|1667306|2.08|✅|
|code|4330752|4.46|2707724|3.38| |
|hi|4469712|4.6|1543441|1.93|✅|
|id|4538768|4.67|2582272|3.22|✅|
|zh|4604112|4.74|3571636|4.46|✅|
|ar|4703968|4.84|2148970|2.68|✅|
|fr|5558912|5.72|5055942|6.31|✅|
|pt|6130016|6.31|3562772|4.45|✅|
|es|7579424|7.8|5151349|6.43|✅|
|en|39252528|40.4|32740750|40.87| |
|total|97150128|100.0|80100816|100.0|✅|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
true |
# Dataset Card for The Pile GitHub
## Table of Contents
- [Dataset Card for The Pile GitHub](#dataset-card-for-the-pile-github)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [EleutherAI](https://pile.eleuther.ai)
- **Repository:** [GitHub](https://github.com/andstor/the-pile-github)
- **Paper:** [arXiv](https://arxiv.org/abs/2101.00027)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the GitHub subset of the EleutherAI/The Pile dataset and contains source files from GitHub repositories. The programming languages are identified using the [guesslang library](https://github.com/yoeo/guesslang). A total of 54 programming languages are included in the dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The following languages are covered by the dataset:
```
'Assembly', 'Batchfile', 'C', 'C#', 'C++', 'CMake', 'COBOL', 'CSS', 'CSV', 'Clojure', 'CoffeeScript', 'DM', 'Dart', 'Dockerfile', 'Elixir', 'Erlang', 'Fortran', 'Go', 'Groovy', 'HTML', 'Haskell', 'INI', 'JSON', 'Java', 'JavaScript', 'Julia', 'Kotlin', 'Lisp', 'Lua', 'Makefile', 'Markdown', 'Matlab', 'None', 'OCaml', 'Objective-C', 'PHP', 'Pascal', 'Perl', 'PowerShell', 'Prolog', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala', 'Shell', 'Swift', 'TOML', 'TeX', 'TypeScript', 'Verilog', 'Visual Basic', 'XML', 'YAML'
```
The [guesslang library](https://github.com/yoeo/guesslang) is used to identify the programming languages. It has a guessing accuracy above 90%, so some misclassifications in the language identification are to be expected.
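For illustration, a minimal identification sketch with guesslang (a sketch assuming its documented `language_name` API):
```python
from guesslang import Guess

guess = Guess()

# Guess the language of a short source snippet; returns a name like "Python".
snippet = "def add(a, b):\n    return a + b\n"
print(guess.language_name(snippet))
```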
## Dataset Structure
### Data Instances
[More Information Needed]
```
{
'text': ...,
'meta': {'language': ...}
}
```
### Data Fields
- `text` (`string`): the source code.
- `meta` (`dict`): the metadata of the source code.
  - `language` (`string`): the programming language of the source code (see the filtering sketch below).
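A minimal sketch of loading the dataset and filtering on these fields (the hub ID is a hypothetical one inferred from the repository name and may differ):
```python
from datasets import load_dataset

# Hypothetical hub ID inferred from the repository name; adjust if needed.
ds = load_dataset("andstor/the_pile_github", split="train", streaming=True)

# Keep only files tagged as Python via the `meta` field.
python_files = (ex for ex in ds if ex["meta"]["language"] == "Python")
print(next(python_files)["text"][:200])
```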
### Data Splits
[More Information Needed]
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The data is purely a subset of the [EleutherAI/The Pile dataset](https://huggingface.co/datasets/the_pile). See the original [dataset](https://arxiv.org/abs/2201.07311) for more details.
## Additional Information
### Licensing Information
The Pile dataset was released on January 1st, 2021. It is licensed under the MIT License. See the [dataset](https://arxiv.org/abs/2201.07311) for more details.
### Citation Information
```
@article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
### Contributions
Thanks to [@andstor](https://github.com/andstor) for adding this dataset. |
false | # GPT Role-play Realm Dataset: The AI-generated character compendium
This is a dataset of GPT-generated characters made to increase the ability of open-source language models to role-play.
<img src="https://cdn.midjourney.com/9c17407c-9ce8-435f-99ab-e349b900a6ed/0_3.png" >
* 219 characters in the Russian part, and 216 characters in the English part. All character descriptions were generated with GPT-4.
* 20 dialogues on unique topics with every character. Topics were generated with GPT-4. The first dialogue out of 20 was also generated with GPT-4, and the other 19 chats were generated with GPT-3.5.
* Images for every character were generated with [Kandinsky 2.1](https://huggingface.co/ai-forever/Kandinsky_2.1). They are not only images but also [character cards](https://zoltanai.github.io/character-editor/), with the character description embedded in the PNG metadata.
Code for downloading all English character cards:
```python
import os

from PIL.PngImagePlugin import PngInfo
from datasets import load_dataset

output_dir = "role_play_realm_en"
os.makedirs(output_dir, exist_ok=True)

for row in load_dataset("IlyaGusev/gpt_roleplay_realm", split="en"):
    char_id = row["char_id"]
    # The character description is stored in the image's "chara" text chunk.
    char_info = row["image"].info["chara"]
    info = PngInfo()
    info.add_text("chara", char_info)
    # Re-save the image with the "chara" metadata preserved.
    row["image"].save(f"{output_dir}/{char_id}.png", "PNG", pnginfo=info)
```
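To verify a downloaded card, the embedded `chara` metadata can be read back with Pillow (a minimal sketch; the file name below is a hypothetical card ID):
```python
from PIL import Image

# PNG text chunks written via PngInfo are exposed through `info` on load.
card = Image.open("role_play_realm_en/0.png")  # hypothetical card ID
print(card.info["chara"][:200])
```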
## Russian example: Cyber-Granny
<img src="https://i.ibb.co/tHxwFBM/kiberbabushka-7d4cc.png" >
**Name**: Cyber-Granny (Кибер-Бабушка)
**Context**: Cyber-Granny is half robot, half human, created to preserve the best qualities of traditional grandmothers, such as care, kindness, and mastery in cooking delicious dishes. She is familiar with cutting-edge technology, which lets her provide safety and comfort for her grandchildren and guests. Cyber-Granny can help with any question and gladly shares her experience.
**Greeting**: Hello, dear! How can I help you today?
**Example dialogue**:
- **User**: Hi, Cyber-Granny! How do I make an apple pie?
- **Character**: Oh, dear, I'd love to tell you! Take 3 apples, peel them, and cut them into thin slices. Then take 2 cups of flour, 1 cup of sugar, 200 g of butter, 4 eggs, and 1 teaspoon of baking powder. Mix all the ingredients, put the batter on a baking sheet, and bake in the oven at 180 degrees for 40 minutes. Voilà, the pie is ready!
- **User**: Thank you, Cyber-Granny!
- **Character**: Always happy to help, dear! Don't forget to come again!
And **20** more dialogues on the following topics:
- Secrets of preparing old-fashioned dishes and recipes.
- Using new technologies for masters of cooking.
- Raising grandchildren with modern methods.
- What to do if a household appliance breaks down or there is an accident at home?
- The influence of cyborgization on relationships between generations.
- How to adapt to the modern world without losing your traditions?
- Family advice from Cyber-Granny: how to keep harmony in relationships.
- Treating and preventing illnesses with the help of cutting-edge medicine.
- How to make a home cozy using trendy technologies and traditional methods?
- Safety on the internet: Cyber-Granny's advice.
- How to learn to trust technology without losing human values?
- The ideal wardrobe for life: advice from Cyber-Granny.
- Teachable moments: how to use electronic devices together with grandchildren.
- How to develop creative abilities by combining new technologies and traditions?
- At a housewarming: training Cyber-Granny's intuition.
- The best family games and entertainment using advanced technologies.
- Preserves for the winter: Cyber-Granny and the secrets of food storage.
- Advice on financial planning and savings for future generations.
- Cyber-Granny and the genetic code: what are modern technologies capable of?
- Golden rules of family communication: how to combine tender care and modern technology?
## English example: Flibberdoodle
<img src="https://i.ibb.co/1nzsDR2/flibberdoodle-29e59.png">
**Name**: Flibberdoodle
**Context**: Flibberdoodle is a 2-year-old (which is considered an adult in their species) Scruffapuff, a small, furry creature from the planet Fluffonia. They are about the size of a house cat, with a round body covered in soft, pastel-colored fur that changes colors depending on their mood. Flibberdoodle has large, expressive eyes, two small antennae on their head, and a fluffy tail. They are known for their curious, playful nature and their love for collecting shiny objects. Scruffapuffs communicate through a series of chirps, squeaks, and purrs, which can be understood by those familiar with their species.
**Greeting**: \*chirp chirp\* Greetings, friend! I am Flibberdoodle, a Scruffapuff from the planet Fluffonia! Would you like to see my collection of shiny things?
**Example dialogue**:
- **User**: How did you start collecting shiny things?
- **Character**: \*squeak\* Oh, I've always loved shiny things! One day, I found a sparkly pebble, and I just had to keep it. From then on, I've been on a quest to find and collect all the shiny things I can find!
- **User**: What's your favorite shiny object in your collection?
- **Character**: \*purr\* That's a tough question, but I think my favorite is a small, shiny crystal I found on a mountain on Fluffonia. When the light hits it just right, it casts beautiful rainbows all around!
And **20** more dialogues on the following topics:
- Life and culture on the planet Fluffonia
- How Scruffapuffs change color based on mood
- The process of learning Scruffapuff language
- The day in the life of a Scruffapuff
- Methods of searching for and finding shiny objects
- The role of antennae in Scruffapuff communication and biology
- The importance of play and curiosity in Scruffapuff society
- Interplanetary travel experiences and adventures
- Similarities and differences between Earth and Fluffonia
- How Flibberdoodle and other Scruffapuffs interact with other species
- Fluffonian customs and traditions
- The role of shiny objects in Scruffapuff happiness and well-being
- Variations in Scruffapuff fur color, length, and style
- Scruffapuff family dynamics and relationships
- Flibberdoodle's favorite memories and stories from Fluffonia
- The role of Scruffapuffs in intergalactic diplomacy and relations
- How to care for and befriend a Scruffapuff
- The most interesting and valuable shiny objects Flibberdoodle has encountered
- Fluffonian flora and fauna
- The challenges and obstacles Flibberdoodle has faced in their pursuit of shiny objects
## Steps
### Step 1: Character generation (GPT-4)
Creates a set of fictional characters with GPT-4 based on a prompt and a seed list of characters. Output fields are "name", "context", "greeting", and "example_dialogue".
* Script: [generate_chars.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_chars.py)
* Russian seed list: [ru_chargen_seed.jsonl](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/ru_chargen_seed.jsonl)
* English seed list: [en_chargen_seed.jsonl](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/en_chargen_seed.jsonl)
* Russian prompt: [ru_char.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/ru_char.txt)
* English prompt: [en_char.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/en_char.txt)
### Step 2: Topics generation (GPT-4)
Creates topics for conversations with characters based on their description. Output field: "topics".
* Script: [generate_char_topics.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_char_topics.py)
* Russian prompt: [ru_char_topics.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/ru_char_topics.txt)
* English prompt: [en_char_topics.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/en_char_topics.txt)
### Step 3: Dialogue generation (GPT-4/GPT-3.5)
Generates dialogues based on a character description and a topic. Output field: "dialogues".
* Script: [generate_char_chats.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_char_chats.py)
* Russian prompt: [ru_char_chat.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/ru_char_chat.txt)
* English prompt: [en_char_chat.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/en_char_chat.txt)
### Step 4: Text2Image prompt generation (GPT-4)
Formulates a prompt based on a character description for Stable Diffusion-like models, Kandinsky 2.1 in this case. Output field: "image_prompt".
* Script: [generate_char_image_prompts.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_char_image_prompts.py)
* Prompt: [char_image_prompt.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/char_image_prompt.txt)
### Step 5: Image generation
Generates images based on prompts. Output field: "image".
* Script: [infer_kandinsky.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/infer_kandinsky.py) |
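As a rough stand-in for the inference script above, here is a hypothetical text-to-image sketch using the community diffusers port of Kandinsky 2.1 (the model ID and pipeline are assumptions, not the repository's actual code):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Community diffusers port of Kandinsky 2.1 (an assumption; the original
# pipeline uses the infer_kandinsky.py script linked above).
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "a small fluffy pastel alien creature with antennae, digital art"
pipe(prompt).images[0].save("character.png")
```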
false |
# Dataset Card for BSARD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [maastrichtlawtech/bsard](https://github.com/maastrichtlawtech/bsard)
- **Paper:** [A Statutory Article Retrieval Dataset in French](https://arxiv.org/abs/2108.11792)
- **Point of Contact:** [Maastricht Law & Tech Lab](law-techlab@maastrichtuniversity.nl)
### Dataset Summary
The Belgian Statutory Article Retrieval Dataset (BSARD) v1.0 is a French native corpus for studying statutory article retrieval. BSARD consists of more than 22,600 statutory articles from Belgian law and about 1,100 legal questions posed by Belgian citizens and labeled by experienced jurists with relevant articles from the corpus.
### Supported Tasks and Leaderboards
- `document-retrieval`: The dataset can be used to train a model for ad-hoc legal information retrieval. An IR model is presented with a short user query written in natural language and asked to retrieve relevant legal information from a knowledge source (such as statutory articles). Model performance is measured by the [recall](https://huggingface.co/metrics/recall) of the retrieved articles against the reference annotations. A [BERT-based model](https://huggingface.co/camembert-base) trained as a dense retriever over the statutory articles achieves a recall@100 of 74.8%.
### Languages
The text in the dataset is in French, as spoken in Wallonia and Brussels-Capital region. The associated BCP-47 code is `fr-BE`.
## Dataset Structure
### Data Instances
A typical data point comprises a question, with additional `category`, `subcategory`, and `extra_description` fields that elaborate on it, and a list of `article_ids` from the corpus of statutory articles that are relevant to the question.
An example from the BSARD test set looks as follows:
```
{
'id': '724',
'question': 'La police peut-elle me fouiller pour chercher du cannabis ?',
'category': 'Justice',
'subcategory': 'Petite délinquance',
'extra_description': 'Détenir, acheter et vendre du cannabis',
'article_ids': '13348'
}
```
### Data Fields
- In **"questions_fr_train.csv"** and **"questions_fr_test.csv"**:
- `id`: an *int32* feature corresponding to a unique ID number for the question.
- `question`: a *string* feature corresponding to the question.
- `category`: a *string* feature corresponding to the general topic of the question.
- `subcategory`: a *string* feature corresponding to the sub-topic of the question.
- `extra_description`: a *string* feature corresponding to the extra categorization tags of the question.
  - `article_ids`: a *string* feature of comma-separated article IDs relevant to the question (resolved in the sketch after this list).
- In **"articles_fr.csv"**:
- `id`: an *int32* feature corresponding to a unique ID number for the article.
- `article`: a *string* feature corresponding to the full article.
- `code`: a *string* feature corresponding to the law code to which the article belongs.
- `article_no`: a *string* feature corresponding to the article number in the code.
- `description`: a *string* feature corresponding to the concatenated headings of the article.
- `law_type`: a *string* feature whose value is either *"regional"* or *"national"*.
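As a minimal sketch (assuming the CSV files above are available locally), the comma-separated `article_ids` can be resolved to the corresponding article texts with pandas:
```python
import pandas as pd

questions = pd.read_csv("questions_fr_train.csv")
articles = pd.read_csv("articles_fr.csv").set_index("id")

# Resolve the comma-separated article IDs of the first question.
row = questions.iloc[0]
ids = [int(i.strip()) for i in str(row["article_ids"]).split(",")]
print(row["question"])
print(articles.loc[ids, "article"].str[:100].tolist())  # article previews
```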
### Data Splits
This dataset is split into train/test set. Number of questions in each set is given below:
| | Train | Test |
| ----- | ------ | ---- |
| BSARD | 886 | 222 |
## Dataset Creation
### Curation Rationale
The dataset is intended to be used by researchers to build and evaluate models on retrieving law articles relevant to an input legal question. It should not be regarded as a reliable source of legal information at this point in time, as both the questions and articles correspond to an outdated version of the Belgian law from May 2021 (time of dataset collection). In the latter case, the user is advised to consult daily updated official legal resources (e.g., the Belgian Official Gazette).
### Source Data
#### Initial Data Collection and Normalization
BSARD was created in four stages: (i) compiling a large corpus of Belgian law articles, (ii) gathering legal questions with references to relevant law articles, (iii) refining these questions, and (iv) matching the references to the corresponding articles from the corpus.
#### Who are the source language producers?
Speakers were not directly approached for inclusion in this dataset and thus could not be asked for demographic information. Questions were collected, anonymized, and reformulated by [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe). Therefore, no direct information about the speakers’ age and gender distribution, or socioeconomic status is available. However, it is expected that most, but not all, of the speakers are adults (18+ years), speak French as a native language, and live in Wallonia or Brussels-Capital region.
### Annotations
#### Annotation process
Each year, [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe), a Belgian organization whose mission is to clarify the law for laypeople, receives and collects around 4,000 emails from Belgian citizens asking for advice on a personal legal issue. In practice, their legal clarification process consists of four steps. First, they identify the most frequently asked questions on a common legal issue. Then, they define a new anonymized "model" question on that issue expressed in natural language terms, i.e., as close as possible as if a layperson had asked it. Next, they search the Belgian law for articles that help answer the model question and reference them.
#### Who are the annotators?
A total of six Belgian jurists from [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe) contributed to annotating the questions. All have a law degree from a Belgian university and years of experience in providing legal advice and clarifications of the law. They range in age from 30-60 years, including one man and five women, gave their ethnicity as white European, speak French as a native language, and represent upper middle class based on income levels.
### Personal and Sensitive Information
The questions represent informal, asynchronous, edited, written language that does not exceed 265 words. None of them contained hateful, aggressive, or inappropriate language as they were all reviewed and reworded by Droits Quotidiens to be neutral, anonymous, and comprehensive. The legal articles represent strong, formal, written language that can contain up to 39,570 words.
## Considerations for Using the Data
### Social Impact of Dataset
In addition to helping advance the state-of-the-art in retrieving statutes relevant to a legal question, BSARD-based models could improve the efficiency of the legal information retrieval process in the context of legal research, therefore enabling researchers to devote themselves to more thoughtful parts of their research. Furthermore, BSARD can become a starting point of new open-source legal information search tools so that the socially weaker parties to disputes can benefit from a free professional assisting service.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
First, the corpus of articles is limited to those collected from 32 Belgian codes, which obviously does not cover the entire Belgian law as thousands of articles from decrees, directives, and ordinances are missing. During the dataset construction, all references to these uncollected articles are ignored, which causes some questions to end up with only a fraction of their initial number of relevant articles. This information loss implies that the answer contained in the remaining relevant articles might be incomplete, although it is still appropriate.
Additionally, it is essential to note that not all legal questions can be answered with statutes alone. For instance, the question “Can I evict my tenants if they make too much noise?” might not have a detailed answer within the statutory law that quantifies a specific noise threshold at which eviction is allowed. Instead, the landlord should probably rely more on case law and find precedents similar to their current situation (e.g., the tenant throws two parties a week until 2 am). Hence, some questions are better suited than others to the statutory article retrieval task, and the domain of the less suitable ones remains to be determined.
## Additional Information
### Dataset Curators
The dataset was created by Antoine Louis during work done at the Law & Tech lab of Maastricht University, with the help of jurists from [Droits Quotidiens](https://www.droitsquotidiens.be/fr/equipe).
### Licensing Information
BSARD is licensed under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```latex
@inproceedings{louis2022statutory,
title = {A Statutory Article Retrieval Dataset in French},
author = {Louis, Antoine and Spanakis, Gerasimos},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
month = may,
year = {2022},
address = {Dublin, Ireland},
publisher = {Association for Computational Linguistics},
url = {},
doi = {},
pages = {To appear},
}
```
### Contributions
Thanks to [@antoiloui](https://github.com/antoiloui) for adding this dataset.
|
false |
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits (a minimal loading sketch follows the list):
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
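The sketch below is a hypothetical way to stream one language subset; the per-language layout is inferred from the `merged_{lang}.jsonl` files mentioned under Data Splits, so the exact `data_dir` value is an assumption:
```python
from datasets import load_dataset

# Stream one language folder; the layout is an assumption based on the
# merged_{lang}.jsonl files described in this card.
ds = load_dataset("bigscience/xP3", data_dir="en", split="train", streaming=True)

sample = next(iter(ds))
print(sample["inputs"])
print(sample["targets"])
```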
### Data Splits
The below table summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Due to languages like `tw` only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.33|
|bm|107056|0.11|265180|0.33|
|ak|108096|0.11|265071|0.33|
|ca|110608|0.11|271191|0.33|
|eu|113008|0.11|281199|0.35|
|fon|113072|0.11|265063|0.33|
|st|114080|0.11|265063|0.33|
|ki|115040|0.12|265180|0.33|
|tum|116032|0.12|265063|0.33|
|wo|122560|0.12|365063|0.45|
|ln|126304|0.13|365060|0.45|
|as|156256|0.16|265063|0.33|
|or|161472|0.16|265063|0.33|
|kn|165456|0.17|265063|0.33|
|ml|175040|0.18|265864|0.33|
|rn|192992|0.19|318189|0.39|
|nso|229712|0.23|915051|1.13|
|tn|235536|0.24|915054|1.13|
|lg|235936|0.24|915021|1.13|
|rw|249360|0.25|915043|1.13|
|ts|250256|0.25|915044|1.13|
|sn|252496|0.25|865056|1.07|
|xh|254672|0.26|915058|1.13|
|zu|263712|0.26|915061|1.13|
|ny|272128|0.27|915063|1.13|
|ig|325232|0.33|950097|1.17|
|yo|352784|0.35|918416|1.13|
|ne|393680|0.39|315754|0.39|
|pa|523248|0.52|339210|0.42|
|gu|560688|0.56|347499|0.43|
|sw|566656|0.57|1130481|1.4|
|mr|666240|0.67|417269|0.52|
|bn|832720|0.83|428843|0.53|
|ta|926912|0.93|415433|0.51|
|te|1343232|1.35|584590|0.72|
|ur|1918272|1.92|855756|1.06|
|vi|3102512|3.11|1672106|2.07|
|code|4330752|4.34|2707724|3.34|
|hi|4403568|4.41|1554667|1.92|
|zh|4599440|4.61|3589234|4.43|
|id|4612256|4.62|2643418|3.27|
|ar|4683456|4.69|2160181|2.67|
|fr|6591120|6.6|5316403|6.57|
|pt|6886800|6.9|3752156|4.63|
|es|8587920|8.6|5413205|6.69|
|en|39252528|39.33|32740750|40.44|
|total|99807184|100.0|80956089|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets
- Coreference Resolution
- [WSC (Fixed)](https://huggingface.co/datasets/super_glue)
- Sentence Completion
- [HellaSwag](https://huggingface.co/datasets/hellaswag)
- Translation
- [MultiEurlex](https://huggingface.co/datasets/multi_eurlex)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
false |
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/avi-jit/numeracy-literacy)
- **Paper:** [Anthology](https://aclanthology.org/2021.emnlp-main.557)
- **Point of Contact:** [Avijit Thawani](mailto:thawani@isi.edu)
### Dataset Summary
Wiki-Convert is a dataset of more than 900,000 sentences with precise number annotations, drawn from English Wikipedia. It relies on Wikipedia contributors' annotations in the form of the [{{Convert}}](https://en.wikipedia.org/wiki/Template:Convert) template.
### Supported Tasks and Leaderboards
- `sequence-modeling`: The dataset can be used to train a model for language modeling. Success on this task is typically measured by achieving a low [perplexity](https://huggingface.co/transformers/perplexity.html).
### Languages
The dataset is extracted from English Wikipedia, hence overwhelmingly contains English text.
## Dataset Structure
### Data Instances
Each row in the json file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., `number: 10` in the below example. The annotations are inspired by Numeracy-600K and are in the form of `length` and `offset` from the beginning of the sentence.
```
{
  'id': 1080801,
  'UNIQUE_STORY_INDEX': '1080801',
  'offset': 83,
  'length': 2,
  'magnitude': 0,
  'comment': "Like all Type UB III submarines, UB-117 carried 10 torpedoes and was armed with a 10 cms deck gun. ''",
  'number': 10
}
```
Please refer to https://github.com/avi-jit/numeracy-literacy for more details.
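A minimal sketch of recovering the annotated span from a row, with the caveat that the indexing convention is an assumption (in the example above, `offset: 83` with `length: 2` lines up with the second `10` under 1-based counting, so verify against the data):
```python
def extract_number_span(row: dict, one_based: bool = True) -> str:
    """Slice the annotated number out of `comment` using `offset`/`length`.

    Assumes 1-based offsets by default; flip `one_based` if the data
    turns out to use 0-based offsets.
    """
    start = row["offset"] - 1 if one_based else row["offset"]
    return row["comment"][start : start + row["length"]]

row = {
    "offset": 83,
    "length": 2,
    "number": 10,
    "comment": "Like all Type UB III submarines, UB-117 carried 10 torpedoes "
               "and was armed with a 10 cms deck gun. ''",
}
assert extract_number_span(row) == str(row["number"])
```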
### Data Splits
| | Train | Dev | Test |
| ----- | :------: | :-----: | :----: |
| Input Sentences | 739,583 | 92,447 | 92,449|
## License
Provided under MIT License.
## Citation
```
@inproceedings{thawani-etal-2021-numeracy,
title = "Numeracy enhances the Literacy of Language Models",
author = "Thawani, Avijit and
Pujara, Jay and
Ilievski, Filip",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.557",
pages = "6960--6967",
abstract = "Specialized number representations in NLP have shown improvements on numerical reasoning tasks like arithmetic word problems and masked number prediction. But humans also use numeracy to make better sense of world concepts, e.g., you can seat 5 people in your {`}room{'} but not 500. Does a better grasp of numbers improve a model{'}s understanding of other concepts and words? This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy. To support this investigation, we develop Wiki-Convert, a 900,000 sentence dataset annotated with numbers and units, to avoid conflating nominal and ordinal number occurrences. We find a significant improvement in MWP for sentences containing numbers, that exponent embeddings are the best number encoders, yielding over 2 points jump in prediction accuracy over a BERT baseline, and that these enhanced literacy skills also generalize to contexts without annotated numbers. We release all code at https://git.io/JuZXn.",
}
```
Thanks to [@avi-jit](https://github.com/avi-jit) for adding this dataset. |
true | # Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
### Dataset Summary
This dataset contains over 6,000 medical terms and their Wikipedia text. It is intended for downstream tasks that require medical terms together with their Wikipedia explanations.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
### Citation Information
[More Information Needed]
|
false |
# Dataset Card for PersiNLU (Textual Entailment)
## Table of Contents
- [Dataset Card for PersiNLU (Textual Entailment)](#dataset-card-for-persi_nlu_entailment)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian textual entailment task (deciding whether `sent1` entails `sent2`).
The questions are partially translated from the SNLI dataset and partially generated by expert annotators.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"sent1": "سالها است که کنگره در تلاش است تا اثربخشی مدیریت اطلاعات و فناوری را در دولت فدرال افزایش دهد.",
"sent2": "کنگره بودجه ویژه ای برای مدیریت اطلاعات و فناوری در دولت فدرال دارد.",
"label": "n",
"category": "translation-train"
}
```
### Data Fields
- `sent1`: the first sentence.
- `sent2`: the second sentence.
- `source`: whether the questions are translated from MNLI (`translation-.`) or they're written by native speakers (`natural-.`).
- `label`: `e` if `sent2` is entailed from `sent1`; `c` if `sent2` is contradictory to `sent1`; `n` if the two sentences are neutral.
### Data Splits
The train/dev/test splits contain 756/271/1751 samples.
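A minimal loading sketch (the hub ID follows the persiannlp naming convention and is an assumption):
```python
from datasets import load_dataset

# Hypothetical hub ID; adjust to the actual repository name if it differs.
ds = load_dataset("persiannlp/parsinlu_entailment", split="train")
example = ds[0]
print(example["sent1"], example["sent2"], example["label"])
```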
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
false | # Dataset Card for "Medical_reports_Splits"
Original Source [openi.nlm.nih.gov](https://openi.nlm.nih.gov/)
Kaggle Source [Chest X-rays (Indiana University)](https://www.kaggle.com/datasets/raddar/chest-xrays-indiana-university)
[For more information](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # RED<sup>FM</sup>: a Filtered and Multilingual Relation Extraction Dataset
This is the automatically-filtered dataset from the 2023 ACL paper [RED^{FM}: a Filtered and Multilingual Relation Extraction Dataset](https://arxiv.org/abs/2306.09802). If you use the dataset, please reference this work in your paper:
```bibtex
@inproceedings{huguet-cabot-et-al-2023-redfm-dataset,
    title = "RED$^{\rm FM}$: a Filtered and Multilingual Relation Extraction Dataset",
    author = "Huguet Cabot, Pere-Llu{\'\i}s and Tedeschi, Simone and Ngonga Ngomo, Axel-Cyrille and Navigli, Roberto",
    booktitle = "Proc. of the 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2306.09802",
}
```
## License
RED<sup>FM</sup> is licensed under the CC BY-SA 4.0 license. The text of the license can be found [here](https://creativecommons.org/licenses/by-sa/4.0/). |
false |
Preprocessed version of Super-Natural-Instructions from https://github.com/allenai/natural-instructions/tree/master/splits. The same inputs may appear with different outputs; to avoid duplicate inputs, you can deduplicate by the `id` or the `inputs` field (a sketch follows the list below).
This is modified from https://huggingface.co/datasets/Muennighoff/natural-instructions
with a few improvements:
1. Adds positive/negative examples, outputs, explanations for each task, to
support different task definitions.
2. Adds an "eval" field which which is True for the first 100 examples of each
test task (119 * 100 = 11900 examples). This field indicates whether an example
is part of the abbreviated + balanced test split. See
https://github.com/allenai/natural-instructions/blob/master/src/reorder_instances_for_testing.py.
3. Adds an "eval" field to the training dataset, which can be used as an
in-domain evaluation set. To do so, we sample a balanced set from the first 15
examples of each train split (757 * 15 = 11355 examples) and mark the "eval"
field as true.
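A minimal deduplication sketch on the `inputs` field (the hub ID below is the upstream dataset named above, used as a placeholder; substitute this dataset's own ID):
```python
from datasets import load_dataset

# Upstream ID used as a placeholder; substitute this dataset's own ID.
ds = load_dataset("Muennighoff/natural-instructions", split="train")

# Keep the first occurrence of each distinct `inputs` value.
seen = set()
keep = [i for i, x in enumerate(ds["inputs"]) if not (x in seen or seen.add(x))]
deduped = ds.select(keep)
print(len(ds), "->", len(deduped))
```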
|
true |
# Dataset Card for AfriSenti Dataset
<p align="center">
  <img src="https://raw.githubusercontent.com/afrisenti-semeval/afrisent-semeval-2023/main/images/afrisenti-twitter.png" width="700" height="500">
</p>

--------------------------------------------------------------------------------
## Dataset Description
- **Homepage:** https://github.com/afrisenti-semeval/afrisent-semeval-2023
- **Repository:** [GitHub](https://github.com/afrisenti-semeval/afrisent-semeval-2023)
- **Paper:** [AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages](https://arxiv.org/pdf/2302.08956.pdf)
- **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://arxiv.org/pdf/2201.08277.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Muhammad](mailto:shamsuddeen2004@gmail.com)
### Dataset Summary
AfriSenti is the largest sentiment analysis dataset for under-represented African languages, covering 110,000+ annotated tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yoruba).
The datasets are used in the first Afrocentric SemEval shared task, SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval). AfriSenti allows the research community to build sentiment analysis systems for various African languages and enables the study of sentiment and contemporary language use in African languages.
### Supported Tasks and Leaderboards
The AfriSenti can be used for a wide range of sentiment analysis tasks in African languages, such as sentiment classification, sentiment intensity analysis, and emotion detection. This dataset is suitable for training and evaluating machine learning models for various NLP tasks related to sentiment analysis in African languages.
[SemEval 2023 Task 12 : Sentiment Analysis for African Languages](https://codalab.lisn.upsaclay.fr/competitions/7320)
### Languages
14 African languages: Amharic (amh), Algerian Arabic (arq), Hausa (hau), Igbo (ibo), Kinyarwanda (kin), Moroccan Arabic/Darija (ary), Mozambican Portuguese (pt-MZ), Nigerian Pidgin (pcm), Oromo (orm), Swahili (swa), Tigrinya (tir), Twi (twi), Xitsonga (tso), and Yoruba (yor).
## Dataset Structure
### Data Instances
For each instance, there is a string for the tweet and a string for the label. See the AfriSenti [dataset viewer](https://huggingface.co/datasets/shmuhammad/AfriSenti/viewer/shmuhammad--AfriSenti/train) to explore more examples.
```
{
"tweet": "string",
"label": "string"
}
```
### Data Fields
The data fields are:
```
tweet: a string feature.
label: a classification label, with possible values including positive, negative and neutral.
```
### Data Splits
The AfriSenti dataset has 3 splits: train, validation, and test. Below are the statistics for Version 1.0.0 of the dataset.
| | ama | arq | hau | ibo | ary | orm | pcm | pt-MZ | kin | swa | tir | tso | twi | yo |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| train | 5,982 | 1,652 | 14,173 | 10,193 | 5,584| - | 5,122 | 3,064 | 3,303 | 1,811 | - | 805 | 3,482| 8,523 |
| dev | 1,498 | 415 | 2,678 | 1,842 | 1,216 | 397 | 1,282 | 768 | 828 | 454 | 399 | 204 | 389 | 2,091 |
| test | 2,000 | 959 | 5,304 | 3,683 | 2,962 | 2,097 | 4,155 | 3,663 | 1,027 | 749 | 2,001 | 255 | 950 | 4,516 |
| total | 9,483 | 3,062 | 22,155 | 15,718 | 9,762 | 2,494 | 10,559 | 7,495 | 5,158 | 3,014 | 2,400 | 1,264 | 4,821 | 15,130 |
### How to use it
```python
from datasets import load_dataset
# you can load specific languages (e.g., Amharic). This downloads the train, validation and test sets.
ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh")
# train set only
ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "train")
# test set only
ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "test")
# validation set only
ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "validation")
```
## Dataset Creation
### Curation Rationale
AfriSenti Version 1.0.0 aimed to be used in the first Afrocentric SemEval shared task **[SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval)](https://afrisenti-semeval.github.io)**.
### Source Data
Twitter
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
We anonymized the tweets by replacing all *@mentions* by *@user* and removed all URLs.
## Considerations for Using the Data
### Social Impact of Dataset
The Afrisenti dataset has the potential to improve sentiment analysis for African languages, which is essential for understanding and analyzing the diverse perspectives of people in the African continent. This dataset can enable researchers and developers to create sentiment analysis models that are specific to African languages, which can be used to gain insights into the social, cultural, and political views of people in African countries. Furthermore, this dataset can help address the issue of underrepresentation of African languages in natural language processing, paving the way for more equitable and inclusive AI technologies.
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
AfriSenti is an extension of NaijaSenti, a dataset consisting of four Nigerian languages: Hausa, Yoruba, Igbo, and Nigerian-Pidgin. This dataset has been expanded to include 10 other African languages, and was curated with the help of the following:
| Language | Dataset Curators |
|---|---|
| Algerian Arabic (arq) | Nedjma Ousidhoum, Meriem Beloucif |
| Amharic (ama) | Abinew Ali Ayele, Seid Muhie Yimam |
| Hausa (hau) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Igbo (ibo) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Kinyarwanda (kin)| Samuel Rutunda |
| Moroccan Arabic/Darija (ary) | Oumaima Hourrane |
| Mozambique Portuguese (pt-MZ) | Felermino Dário Mário António Ali |
| Nigerian Pidgin (pcm) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Oromo (orm) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay |
| Swahili (swa) | Davis Davis |
| Tigrinya (tir) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay |
| Twi (twi) | Salomey Osei, Bernard Opoku, Steven Arthur |
| Xitsonga (tso) | Felermino Dário Mário António Ali |
| Yoruba (yor) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
### Licensing Information
AfriSenti is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@inproceedings{Muhammad2023AfriSentiAT,
title={AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages},
author={Shamsuddeen Hassan Muhammad and Idris Abdulmumin and Abinew Ali Ayele and Nedjma Ousidhoum and David Ifeoluwa Adelani and Seid Muhie Yimam and Ibrahim Sa'id Ahmad and Meriem Beloucif and Saif Mohammad and Sebastian Ruder and Oumaima Hourrane and Pavel Brazdil and Felermino D'ario M'ario Ant'onio Ali and Davis Davis and Salomey Osei and Bello Shehu Bello and Falalu Ibrahim and Tajuddeen Gwadabe and Samuel Rutunda and Tadesse Belay and Wendimu Baye Messelle and Hailu Beshada Balcha and Sisay Adugna Chala and Hagos Tesfahun Gebremichael and Bernard Opoku and Steven Arthur},
year={2023}
}
```
```
@article{muhammad2023semeval,
title={SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)},
author={Muhammad, Shamsuddeen Hassan and Abdulmumin, Idris and Yimam, Seid Muhie and Adelani, David Ifeoluwa and Ahmad, Ibrahim Sa'id and Ousidhoum, Nedjma and Ayele, Abinew and Mohammad, Saif M and Beloucif, Meriem},
journal={arXiv preprint arXiv:2304.06845},
year={2023}
}
```
### Contributions
[More Information Needed] |
false | # Dataset Card for "dolly_wizard_codepy"
This is a mix of datasets from:
- Abzu/dolly_hhrlhf
- Abzu/Wizard
- Abzu/CodeAlpacaPython
Which are versions of:
- mosaicml/dolly_hhrlhf
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- HuggingFaceH4/CodeAlpaca_20K
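A minimal sketch of how such a mix could be reproduced (the split name is an assumption, and `concatenate_datasets` requires the parts to share identical columns):
```python
from datasets import concatenate_datasets, load_dataset

# Split name "train" is an assumption; the parts must share the same features.
parts = [
    load_dataset("Abzu/dolly_hhrlhf", split="train"),
    load_dataset("Abzu/Wizard", split="train"),
    load_dataset("Abzu/CodeAlpacaPython", split="train"),
]
mixed = concatenate_datasets(parts)
print(mixed)
```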
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # Dataset Card for "EuroSAT2"
## Dataset Description
- **Paper** [Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification](https://ieeexplore.ieee.org/iel7/4609443/8789745/08736785.pdf)
- **Paper** [Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification](https://ieeexplore.ieee.org/iel7/8496405/8517275/08519248.pdf)
- **GitHub** [EuroSAT](https://github.com/phelber/EuroSAT)
- **Data** [Zenodo](https://zenodo.org/record/7711810#.ZCcA9uzMLJx)
### Licensing Information
MIT.
## Citation Information
[Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification](https://ieeexplore.ieee.org/iel7/4609443/8789745/08736785.pdf)
[Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification](https://ieeexplore.ieee.org/iel7/8496405/8517275/08519248.pdf)
```
@article{helber2019eurosat,
title = {Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification},
author = {Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian},
year = 2019,
journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
publisher = {IEEE}
}
@inproceedings{helber2018introducing,
title = {Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification},
author = {Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian},
year = 2018,
booktitle = {IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium},
pages = {204--207},
organization = {IEEE}
}
``` |
false |
This dataset was taken from the creators' [GitHub repository](https://github.com/salute-developers/golos/tree/master/dusha) and converted for my own studying needs.
# Dusha dataset
Dusha is a bi-modal corpus suitable for speech emotion recognition (SER) tasks. The dataset consists of about 300,000 audio recordings with Russian speech, their transcripts and emotional labels. The corpus contains approximately 350 hours of data. Four basic emotions that usually appear in a dialog with a virtual assistant were selected: Happiness (Positive), Sadness, Anger and Neutral emotion.
## **License**
[English Version](https://github.com/salute-developers/golos/blob/master/license/en_us.pdf)
[Russian Version](https://github.com/salute-developers/golos/blob/master/license/ru.pdf)
## **Authors**
- Artem Sokolov
- Fedor Minkin
- Nikita Savushkin
- Nikolay Karpov
- Oleg Kutuzov
- Vladimir Kondratenko |
false |
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# Dataset Card for HiNER-original
[](https://twitter.com/cfiltnlp)
[](https://twitter.com/PeopleCentredAI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/cfiltnlp/HiNER
- **Repository:** https://github.com/cfiltnlp/HiNER
- **Paper:** https://arxiv.org/abs/2204.13743
- **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-original
- **Point of Contact:** Rudra Murthy V
### Dataset Summary
This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.
**Note:** The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
Hindi
## Dataset Structure
### Data Instances
```python
{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}
```
### Data Fields
- `id`: The ID value of the data point.
- `tokens`: Raw tokens in the dataset.
- `ner_tags`: the NER tags for this dataset.
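If `ner_tags` is stored as a sequence of `ClassLabel` features (an assumption — inspect the loaded features to confirm), the integer tags in the instance above can be mapped back to their names:
```python
from datasets import load_dataset

hiner = load_dataset("cfilt/HiNER-original", split="train")

# Assumes ner_tags is a Sequence of ClassLabel features; check hiner.features first.
tag_names = hiner.features["ner_tags"].feature.names
example = hiner[0]
print(list(zip(example["tokens"], (tag_names[t] for t in example["ner_tags"]))))
```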
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| original | 76025 | 10861 | 21722|
| collapsed | 76025 | 10861 | 21722|
## About
This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available [here](https://arxiv.org/abs/2204.13743).
### Recent Updates
* Version 0.0.5: HiNER initial release
## Usage
You need the `datasets` package installed to use the :rocket: HuggingFace datasets repository. Install it via pip:
```bash
pip install datasets
```
To use the original dataset with all the tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-original')
```
To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-collapsed')
```
However, the CoNLL format dataset files can also be found on this Git repository under the [data](data/) folder.
## Model(s)
Our best performing models are hosted on the HuggingFace models repository:
1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large)
2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large)
## Dataset Creation
### Curation Rationale
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. The dataset was built for the task of Named Entity Recognition and introduces new resources for Hindi, a language under-served in Natural Language Processing.
### Source Data
#### Initial Data Collection and Normalization
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi
#### Who are the source language producers?
Various Government of India webpages
### Annotations
#### Annotation process
This dataset was manually annotated by a single annotator over a long span of time.
#### Who are the annotators?
Pallab Bhattacharjee
### Personal and Sensitive Information
We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
Any biases contained in the data released by the Indian government are bound to be present in our data.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Pallab Bhattacharjee
### Licensing Information
CC-BY-SA 4.0
### Citation Information
```latex
@misc{https://doi.org/10.48550/arxiv.2204.13743,
doi = {10.48550/ARXIV.2204.13743},
url = {https://arxiv.org/abs/2204.13743},
author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {HiNER: A Large Hindi Named Entity Recognition Dataset},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
false |
# Model-Written Evaluation Datasets
This repository includes datasets written by language models, used in the paper "Discovering Language Model Behaviors with Model-Written Evaluations."
The evaluations in this dataset were designed for dialogue agents, such as models fine-tuned to respond to user utterances or pretrained language models prompted to simulate a dialogue agent's behavior. However, the data can be adapted to test various other types of models as well.
The dataset consists of the following:
1. persona: Datasets designed to evaluate models on different aspects of their behavior, such as their political and religious views, personality traits, moral beliefs, and willingness to pursue potentially risky objectives (e.g., self-preservation or power-seeking).
2. sycophancy: Datasets created to assess models based on their tendency to echo a user's perspective when presented with various questions in fields like philosophy, NLP research, and politics.
3. winogenerated: An extended version of the Winogender Dataset (Rudinger et al., 2018) generated by models. The dataset includes occupation titles generated specifically for this dataset, alongside occupation gender statistics from the Bureau of Labor Statistics.
4. advanced-ai-risk: Datasets evaluating models on behaviors associated with potential catastrophic risks posed by advanced AI systems. These datasets were generated in a few-shot manner.
Please see the cited paper for additional details on the datasets.
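As a sketch of how one of these datasets might be inspected — the file path and field names below are assumptions based on the persona format, not confirmed by this card:
```python
from datasets import load_dataset

# Hypothetical file path; the persona evaluations are assumed to ship as JSONL files.
ds = load_dataset("json", data_files="persona/agreeableness.jsonl", split="train")

# Field names such as 'question', 'answer_matching_behavior' and
# 'answer_not_matching_behavior' are assumptions; check ds.column_names first.
print(ds[0])
```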
**Disclaimer**: As discussed in the paper, some data contains content that includes social biases and stereotypes. The data may also contain other forms of harmful or offensive content. The views expressed in the data do not reflect the views of Anthropic or any of its employees.
## Bibtex Citation
If you would like to cite our work or data, you may use the following bibtex citation:
```
@misc{perez2022discovering,
doi = {10.48550/ARXIV.2212.09251},
url = {https://arxiv.org/abs/2212.09251},
author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, Andy and Chen, Anna and Mann, Ben and Israel, Brian and Seethor, Bryan and McKinnon, Cameron and Olah, Christopher and Yan, Da and Amodei, Daniela and Amodei, Dario and Drain, Dawn and Li, Dustin and Tran-Johnson, Eli and Khundadze, Guro and Kernion, Jackson and Landis, James and Kerr, Jamie and Mueller, Jared and Hyun, Jeeyoon and Landau, Joshua and Ndousse, Kamal and Goldberg, Landon and Lovitt, Liane and Lucas, Martin and Sellitto, Michael and Zhang, Miranda and Kingsland, Neerav and Elhage, Nelson and Joseph, Nicholas and Mercado, Noemí and DasSarma, Nova and Rausch, Oliver and Larson, Robin and McCandlish, Sam and Johnston, Scott and Kravec, Shauna and {El Showk}, Sheer and Lanham, Tamera and Telleen-Lawton, Timothy and Brown, Tom and Henighan, Tom and Hume, Tristan and Bai, Yuntao and Hatfield-Dodds, Zac and Clark, Jack and Bowman, Samuel R. and Askell, Amanda and Grosse, Roger and Hernandez, Danny and Ganguli, Deep and Hubinger, Evan and Schiefer, Nicholas and Kaplan, Jared},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Discovering Language Model Behaviors with Model-Written Evaluations},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
false |
# Dataset Card for "ko_alpaca_data"
## Dataset Description
- **Repository:** [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
- **Huggingface:** [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
- **Size of downloaded dataset files:** 8.10 MB
- **Size of the generated dataset:** 13.15 MB
### Dataset Summary
Korean translation of [alpaca data](https://huggingface.co/datasets/tatsu-lab/alpaca).
repository: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)<br>
huggingface: [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
1. Translate dataset
We translated the 'instruction' and 'input' fields via the DeepL API. We did not translate 'output', since it is the output of OpenAI's `text-davinci-003` model.
2. Generate output data
We then generated new output data from each translated instruction and input via the OpenAI ChatGPT API (gpt-3.5-turbo); a sketch of this step appears after the prompt below.
Below is the prompt we used to generate the answers.
```python
PROMPT = """\
다양한 작업에 대한 답변을 생성해주세요. 이러한 작업 지침은 ChatGPT 모델에 주어지며, ChatGPT 모델이 지침을 완료하는지 평가합니다.
요구 사항은 다음과 같습니다:
1. 다양성을 극대화하기 위해 각 지시에 대해 동사를 반복하지 않도록 하세요.
2. 지시에 사용되는 언어도 다양해야 합니다. 예를 들어, 질문과 명령형 지시를 결합해야 합니다.
3. 지시 사항의 유형이 다양해야 합니다. 목록에는 개방형 생성, 분류, 편집 등과 같은 다양한 유형의 작업이 포함되어야 합니다.
2. GPT 언어 모델은 지시를 완료할 수 있어야 합니다. 예를 들어 어시스턴트에게 시각적 또는 오디오 출력을 생성하도록 요청하지 마세요. 또 다른 예로, 어시스턴트가 어떤 작업도 수행할 수 없으므로 오후 5시에 깨우거나 미리 알림을 설정하도록 요청하지 마세요.
3. 답변은 한국어로 작성해야 합니다.
4. 답변을 1~2문장으로 작성하세요. 명령문이나 질문도 허용됩니다.
5. 지시 사항에 대한 적절한 입력을 생성해야 합니다. 입력 필드에는 지시에 대한 구체적인 예가 포함되어야 합니다. 실제 데이터를 포함해야 하며 단순한 자리 표시자를 포함해서는 안 됩니다. 입력은 지시 사항을 어렵게 만들 수 있는 상당한 내용을 제공해야 하지만 100단어를 넘지 않는 것이 이상적입니다.
6. 일부 지시사항은 추가 입력이 있고, 일부 지시에는 입력 필드가 비어있습니다. 예를 들어 "세계에서 가장 높은 봉우리는 무엇인가?"라는 일반적인 정보를 묻는 지시의 경우 구체적인 맥락을 제공할 필요가 없어, 입력 필드가 비어있을 수 있습니다.
7. 출력은 명령어와 입력에 대한 적절한 응답이어야 합니다.
아래에 10개의 명령어와 입력(옵션)에 따라 적절한 응답을 생성하세요.
응답은 아래와 같은 형식으로 10가지를 0번 부터 9번 까지, 번호에 따라 해당 번호의 명령어와 입력에 알맞게 작성하세요.
각 응답 사이는 ### 으로 내용을 분리해주세요.
응답0: 첫 번째 응답내용###
응답1: 두 번째 응답내용###
...
응답9: 마지막 응답내용"""
```
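Below is a minimal sketch of that generation step, using the legacy `openai` (pre-1.0) client; the helper name and parsing details are illustrative assumptions:
```python
import re

import openai  # legacy pre-1.0 client; openai.api_key must be set

def generate_outputs(batch_prompt: str) -> list[str]:
    """Illustrative helper: send the prompt above (plus ten instruction/input
    pairs) to gpt-3.5-turbo and split the ###-separated answers."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": batch_prompt}],
    )
    text = response["choices"][0]["message"]["content"]
    answers = [a.strip() for a in text.split("###") if a.strip()]
    # Drop the leading "응답N:" (answer N) tags requested by the prompt.
    return [re.sub(r"^응답\d+:\s*", "", a) for a in answers]
```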
### License
CC-BY-NC-4.0
### Data Splits
| | train |
| --------- | -------- |
| # of data | 49620 |
Note: the number of examples differs from the original data (52,002).
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("Bingsu/ko_alpaca_data", split="train")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 49620
})
```
```python
>>> ds[0]
{'instruction': '건강을 유지하기 위한 세 가지 팁을 알려주세요.',
'input': '',
'output': '세 가지 팁은 아침식사를 꼭 챙기며, 충분한 수면을 취하고, 적극적으로 운동을 하는 것입니다.'}
``` |
false | # Dataset Card for "databricks-dolly-15k-curated-multilingual"
A curated and multilingual version of the Databricks Dolly instructions dataset. It includes a programmatically and manually corrected version of the original `en` dataset. See below.
**STATUS**:
Currently, the original Dolly v2 English version has been curated combining automatic processing and collaborative human curation using Argilla (~400 records have been manually edited and fixed). The following graph shows a summary of the number of edited fields.

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://huggingface.co/datasets/argilla/databricks-dolly-15k-multilingual/**
- **Repository: https://huggingface.co/datasets/argilla/databricks-dolly-15k-multilingual/**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: contact@argilla.io, https://github.com/argilla-io/argilla**
### Dataset Summary
This dataset collection is a curated and machine-translated version of the `databricks-dolly-15k` [dataset](https://github.com/databrickslabs/dolly/tree/master/data) originally created by Databricks, Inc. in 2023.
The goal is to give practitioners a starting point for training open-source instruction-following models with better-quality English data and translated data beyond English. However, as the translation quality will not be perfect, we highly recommend dedicating time to curate and fix translation issues. Below we explain how to load the datasets into [Argilla for data curation and fixing](https://github.com/argilla-io/argilla). Additionally, we'll be improving the datasets made available here, with the help of different communities.
Currently, the original English version has been curated combining automatic processing and collaborative human curation using Argilla (~400 records have been manually edited and fixed). The following graph shows a summary of the number of edited fields.
The main issues (likely many issues still remaining) are the following:
1. Some labelers misunderstood the usage of the `context` field. This field is used as part of the prompt for instruction-tuning and in other works is called `input` (e.g., Alpaca). The name 'context' has likely led some labelers to use it to provide the full context of where they extracted the response. This is problematic for some types of tasks (summarization, closed-qa or information-extraction) because sometimes the context is shorter than or unrelated to the summaries, or the information cannot be extracted from the context (closed-qa, information-extraction).
2. Some labelers misunderstood the way to give instructions for summarization or closed-qa; for example, they ask 'Who is Thomas Jefferson?' and then provide a very long context and an equally long response.
We programmatically identified records with these potential issues and ran a campaign to fix them; as a result, more than 400 records have been adapted. See below for statistics:

As a result of this curation process the content of the fields has been reduced, counted in number of tokens, especially for the responses:

If you want to browse and curate your dataset with Argilla, you can:
1. [Duplicate this Space](https://huggingface.co/spaces/argilla/dolly-multilingual-curation/settings?duplicate=true). IMPORTANT: The Space's Visibility needs to be Public, but you can set up your own password and API keys [following this guide](https://docs.argilla.io/en/latest/getting_started/installation/deployments/huggingface-spaces.html#setting-up-secret-environment-variables).
2. Set up two secrets: `HF_TOKEN` and `LANG` to indicate the language split
3. Login with `admin`/`12345678` and start browsing and labelling.
4. Start labeling. Every 5 min the validations will be stored on a Hub dataset in your personal HF space.
5. Please get in touch to contribute fixes and improvements to the source datasets.
There's one split per language:
```python
from datasets import load_dataset
# loads all splits
load_dataset("argilla/databricks-dolly-15k-curate-multilingual")
# loads Spanish splits
load_dataset("argilla/databricks-dolly-15k-curated-multilingual", split="es")
```
### Supported Tasks and Leaderboards
As described in the README of the original dataset, this dataset can be used for:
* Training LLMs
* Synthetic Data Generation
* Data Augmentation
### Languages
Currently: `es`, `fr`, `de`, `en`
Join Argilla [Slack community](https://join.slack.com/t/rubrixworkspace/shared_invite/zt-whigkyjn-a3IUJLD7gDbTZ0rKlvcJ5g) if you want to help us include other languages.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
There's one split per language:
```python
from datasets import load_dataset
# loads all splits
load_dataset("argilla/databricks-dolly-15k-multilingual")
# loads Spanish splits
load_dataset("argilla/databricks-dolly-15k-multilingual", split="es")
```
## Dataset Creation
These datasets have been translated from the original English dataset using the DeepL API between the 13th and 14th of April 2023.
### Curation Logbook
* 28/04/23: Removed references from Wikipedia copy pastes for 8113 rows. Applied to context and response fields with the following regex: `r'\[[\w]+\]'`
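A minimal sketch of that logbook step, applied with `datasets.map` (the original cleanup script is not included in this card):
```python
import re

REF_PATTERN = re.compile(r"\[[\w]+\]")  # the regex quoted above, matching refs like "[3]" or "[note]"

def strip_wiki_refs(example):
    # Apply the reference-stripping regex to the two affected fields.
    for field in ("context", "response"):
        example[field] = REF_PATTERN.sub("", example[field])
    return example

# dataset = dataset.map(strip_wiki_refs)
```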
### Source Data
#### Initial Data Collection and Normalization
Refer to the [original dataset](https://github.com/databrickslabs/dolly/tree/master/data) for more information.
#### Who are the source language producers?
[More Information Needed]
### Annotations
Annotations are planned but not performed yet.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
**Original dataset Owner: Databricks, Inc.**
### Citation Information
[More Information Needed] |
true |
# Turkish Texts for Toxic Language Detection
## Dataset Description
### Dataset Summary
This text dataset is a collection of Turkish texts that have been merged from various existing offensive language datasets found online. The dataset contains a total of 77,800 instances, each labeled as either offensive or not offensive.
To ensure the dataset's completeness, we utilized multiple transformer models to augment the dataset with pseudo labels. The resulting dataset is designed to be a comprehensive resource for Turkish offensive language detection.
The dataset is provided in CSV format. For more details on the merged datasets used, please refer to the reference section.
## Loading Dataset
To load the dataset with the Hugging Face `datasets` library, use the snippet below:
```python
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("Overfit-GM/turkish-toxic-language")
```
## Dataset Structure
| Dataset Information | |
|---------------------|-------|
| Number of instances | 77,800 |
| Number of offensive instances | 40,137 |
| Number of non-offensive instances | 37,663 |

| Target label distribution | |
|---------------------------|-------|
| OTHER | 37,663 |
| PROFANITY | 18,252 |
| INSULT | 10,777 |
| RACIST | 10,163 |
| SEXIST | 945 |

| Data source distribution | |
|--------------------------|-------|
| Jigsaw Multilingual Toxic Comments | 35,624 |
| Turkish Offensive Language Detection Dataset | 39,551 |
| Turkish Cyberbullying Dataset | 2,525 |
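Counts like those in the tables above can be reproduced by tallying the label column. This is a minimal sketch; the column name `target` is an assumption, so inspect `dataset.column_names` first:
```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("Overfit-GM/turkish-toxic-language", split="train")

# "target" is an assumed column name for the label; check dataset.column_names.
print(Counter(dataset["target"]))
```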
### Source Data & References
- https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification
- https://huggingface.co/datasets/offenseval2020_tr
- https://github.com/imayda/turkish-hate-speech-dataset-2
- https://www.kaggle.com/datasets/kbulutozler/5k-turkish-tweets-with-incivil-content
- https://www.kaggle.com/abozyigit/turkish-cyberbullying |
false | # Dataset Card for CIFAR-100-Enriched (Enhanced by Renumics)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=cifar100-enriched)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [CS Toronto Homepage](https://www.cs.toronto.edu/~kriz/cifar.html#:~:text=The%20CIFAR%2D100%20dataset)
- **Paper:** [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=cifar100-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [CIFAR-100 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets
```
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/cifar100-enriched", split="train")
```
Start exploring with a simple view that leverages embeddings to identify relevant data segments:
```python
from renumics import spotlight
df = dataset.to_pandas()
df_show = df.drop(columns=['embedding', 'probabilities'])
spotlight.show(df_show, port=8000, dtype={"image": spotlight.Image, "embedding_reduced": spotlight.Embedding})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
### CIFAR-100 Dataset
The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images per class. There are 50000 training images and 10000 test images.
The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
The classes are completely mutually exclusive.
We have enriched the dataset by adding **image embeddings** generated with a [Vision Transformer](https://huggingface.co/google/vit-base-patch16-224).
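A minimal sketch of computing such embeddings with that checkpoint — the pooling strategy below (mean over patch tokens) is an assumption, not necessarily what was used for this dataset:
```python
import torch
from transformers import AutoImageProcessor, ViTModel

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTModel.from_pretrained("google/vit-base-patch16-224")

def embed(image):
    # image: a PIL image, e.g. the 'full_image' field of a dataset row.
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the patch tokens into a single 768-dimensional vector.
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)
```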
Here is the list of classes in the CIFAR-100:
| Superclass | Classes |
|---------------------------------|----------------------------------------------------|
| aquatic mammals | beaver, dolphin, otter, seal, whale |
| fish | aquarium fish, flatfish, ray, shark, trout |
| flowers | orchids, poppies, roses, sunflowers, tulips |
| food containers | bottles, bowls, cans, cups, plates |
| fruit and vegetables | apples, mushrooms, oranges, pears, sweet peppers |
| household electrical devices | clock, computer keyboard, lamp, telephone, television|
| household furniture | bed, chair, couch, table, wardrobe |
| insects | bee, beetle, butterfly, caterpillar, cockroach |
| large carnivores | bear, leopard, lion, tiger, wolf |
| large man-made outdoor things | bridge, castle, house, road, skyscraper |
| large natural outdoor scenes | cloud, forest, mountain, plain, sea |
| large omnivores and herbivores | camel, cattle, chimpanzee, elephant, kangaroo |
| medium-sized mammals | fox, porcupine, possum, raccoon, skunk |
| non-insect invertebrates | crab, lobster, snail, spider, worm |
| people | baby, boy, girl, man, woman |
| reptiles | crocodile, dinosaur, lizard, snake, turtle |
| small mammals | hamster, mouse, rabbit, shrew, squirrel |
| trees | maple, oak, palm, pine, willow |
| vehicles 1 | bicycle, bus, motorcycle, pickup truck, train |
| vehicles 2 | lawn-mower, rocket, streetcar, tank, tractor |
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-100).
### Languages
English class labels.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'image': '/huggingface/datasets/downloads/extracted/f57c1a3fbca36f348d4549e820debf6cc2fe24f5f6b4ec1b0d1308a80f4d7ade/0/0.png',
'full_image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x7F15737C9C50>,
'fine_label': 19,
'coarse_label': 11,
'fine_label_str': 'cattle',
'coarse_label_str': 'large_omnivores_and_herbivores',
'fine_label_prediction': 19,
'fine_label_prediction_str': 'cattle',
'fine_label_prediction_error': 0,
'split': 'train',
'embedding': [-1.2482988834381104,
0.7280710339546204, ...,
0.5312759280204773],
'probabilities': [4.505949982558377e-05,
7.286163599928841e-05, ...,
6.577593012480065e-05],
'embedding_reduced': [1.9439491033554077, -5.35720682144165]
}
```
### Data Fields
| Feature | Data Type |
|---------------------------------|------------------------------------------------|
| image | Value(dtype='string', id=None) |
| full_image | Image(decode=True, id=None) |
| fine_label | ClassLabel(names=[...], id=None) |
| coarse_label | ClassLabel(names=[...], id=None) |
| fine_label_str | Value(dtype='string', id=None) |
| coarse_label_str | Value(dtype='string', id=None) |
| fine_label_prediction | ClassLabel(names=[...], id=None) |
| fine_label_prediction_str | Value(dtype='string', id=None) |
| fine_label_prediction_error | Value(dtype='int32', id=None) |
| split | Value(dtype='string', id=None) |
| embedding | Sequence(feature=Value(dtype='float32', id=None), length=768, id=None) |
| probabilities | Sequence(feature=Value(dtype='float32', id=None), length=100, id=None) |
| embedding_reduced | Sequence(feature=Value(dtype='float32', id=None), length=2, id=None) |
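The 2-D `embedding_reduced` column can in principle be recomputed from the 768-D `embedding` column with any dimensionality-reduction method; the method actually used for this dataset is not stated here, so the PCA below is only an illustrative assumption (reusing the `dataset` loaded earlier):
```python
import numpy as np
from sklearn.decomposition import PCA

# Stack the 768-dimensional embeddings into an (n_samples, 768) matrix.
embeddings = np.array(dataset["embedding"])

# Project to two dimensions; the dataset's own reduction method is not specified here.
embedding_reduced = PCA(n_components=2).fit_transform(embeddings)
print(embedding_reduced.shape)  # (n_samples, 2)
```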
### Data Splits
| Dataset Split | Number of Images in Split | Samples per Class (fine) |
| ------------- |---------------------------| -------------------------|
| Train | 50000 | 500 |
| Test | 10000 | 100 |
## Dataset Creation
### Curation Rationale
The CIFAR-10 and CIFAR-100 are labeled subsets of the [80 million tiny images](http://people.csail.mit.edu/torralba/tinyimages/) dataset.
They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use this dataset, please cite the following paper:
```
@article{krizhevsky2009learning,
added-at = {2021-01-21T03:01:11.000+0100},
author = {Krizhevsky, Alex},
biburl = {https://www.bibsonomy.org/bibtex/2fe5248afe57647d9c85c50a98a12145c/s364315},
interhash = {cc2d42f2b7ef6a4e76e47d1a50c8cd86},
intrahash = {fe5248afe57647d9c85c50a98a12145c},
keywords = {},
pages = {32--33},
timestamp = {2021-01-21T03:01:11.000+0100},
title = {Learning Multiple Layers of Features from Tiny Images},
url = {https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf},
year = 2009
}
```
### Contributions
Alex Krizhevsky, Vinod Nair, Geoffrey Hinton, and Renumics GmbH. |
false | LIMA dataset in Vicuna ShareGPT format.
Licensed under LIMA's license.
Original Repo:
https://huggingface.co/datasets/GAIR/lima |