| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
Trelis/touch-rugby-rules | 2023-09-30T13:16:06.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"touch rugby",
"region:us"
] | Trelis | null | null | null | 0 | 97 | ---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- touch rugby
size_categories:
- n<1K
---
# Touch Rugby Rules Dataset
train.csv comprises a set of questions based on the rules from the [International Touch Website](https://cdn.internationaltouch.org/public/FIT%205th%20Edition%20Rulebook.pdf).
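As a minimal sketch, such a question file can be read with the standard library `csv` module. The column names `prompt` and `completion` here are assumptions for illustration only — check the actual header of train.csv:

```python
import csv
import io

def load_qa_rows(csv_text):
    """Parse CSV text into a list of row dicts keyed by the header."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# Hypothetical two-column layout; the real train.csv header may differ.
sample = (
    "prompt,completion\n"
    "How many players are on the field per team?,Six players per team.\n"
)
rows = load_qa_rows(sample)
```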
For educational and non-commercial use only. |
chrisgru/llama2-chat-guanaco | 2023-09-21T13:37:34.000Z | [
"region:us"
] | chrisgru | null | null | null | 0 | 97 | Entry not found |
distil-whisper/common_voice_13_0-timestamped | 2023-09-25T10:30:12.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc0-1.0",
"region:us"
] | distil-whisper | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | null | 0 | 97 | ---
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: Common Voice 13
---
# Distil Whisper: Common Voice 13 With Timestamps
This is a variant of the [Common Voice 13](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/common_voice_13_0", "en")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset one at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/common_voice_13_0", "en", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc0-1.0.
|
distil-whisper/gigaspeech-l-timestamped | 2023-09-25T10:28:51.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:other",
"region:us"
] | distil-whisper | GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
and unsupervised training. Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
for speech recognition training, and to filter out segments with low-quality transcription. For system training,
GigaSpeech provides five subsets of different sizes, 10h, 250h, 1000h, 2500h, and 10000h.
For our 10,000-hour XL training subset, we cap the word error rate at 4% during the filtering/validation stage,
and for all our other smaller training subsets, we cap it at 0%. The DEV and TEST evaluation sets, on the other hand,
are re-processed by professional human transcribers to ensure high transcription quality. | @article{DBLP:journals/corr/abs-2106-06909,
author = {Guoguo Chen and
Shuzhou Chai and
Guanbo Wang and
Jiayu Du and
Wei{-}Qiang Zhang and
Chao Weng and
Dan Su and
Daniel Povey and
Jan Trmal and
Junbo Zhang and
Mingjie Jin and
Sanjeev Khudanpur and
Shinji Watanabe and
Shuaijiang Zhao and
Wei Zou and
Xiangang Li and
Xuchen Yao and
Yongqing Wang and
Yujun Wang and
Zhao You and
Zhiyong Yan},
title = {GigaSpeech: An Evolving, Multi-domain {ASR} Corpus with 10, 000 Hours
of Transcribed Audio},
journal = {CoRR},
volume = {abs/2106.06909},
year = {2021},
url = {https://arxiv.org/abs/2106.06909},
eprinttype = {arXiv},
eprint = {2106.06909},
timestamp = {Wed, 29 Dec 2021 14:29:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-06909.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 97 | ---
license: other
task_categories:
- automatic-speech-recognition
language:
- en
extra_gated_prompt: |-
SpeechColab does not own the copyright of the audio files. For researchers and educators who wish to use the audio files for non-commercial research and/or educational purposes, we can provide access through the Hub under certain conditions and terms.
Terms of Access:
The "Researcher" has requested permission to use the GigaSpeech database (the "Database") at Tsinghua University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
2. The SpeechColab team and Tsinghua University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the SpeechColab team and Tsinghua University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted audio files that he or she may create from the Database.
4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
5. The SpeechColab team and Tsinghua University reserve the right to terminate Researcher's access to the Database at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
Please also fill out the Google Form https://forms.gle/UuGQAPyscGRrUMLq6 to request access to the GigaSpeech dataset.
extra_gated_fields:
Name: text
Email: text
Organization: text
Address: text
I hereby confirm that I have requested access via the Google Form provided above: checkbox
I accept the terms of access: checkbox
---
# Distil Whisper: GigaSpeech With Timestamps
This is a variant of the [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset one at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
|
weitung8/ntuadlhw1 | 2023-10-02T09:32:02.000Z | [
"language:zh",
"region:us"
] | weitung8 | null | null | null | 0 | 97 | ---
language:
- zh
--- |
result-kand2-sdxl-wuerst-karlo/c06e4969 | 2023-10-06T14:58:55.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 97 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 200
num_examples: 10
download_size: 1394
dataset_size: 200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c06e4969"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk/persona-chat | 2022-06-04T21:49:08.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 7 | 96 | Entry not found |
AhmedSSoliman/DJANGO | 2022-08-14T14:19:28.000Z | [
"region:us"
] | AhmedSSoliman | null | null | null | 0 | 96 | Django Dataset for Code Translation Tasks
=========================================
*Django* dataset used in the paper
[*"Learning to Generate Pseudo-Code from Source Code Using Statistical Machine Translation"*](http://ieeexplore.ieee.org/document/7372045/),
Oda et al., ASE, 2015.
The Django dataset is a code-generation dataset comprising 16,000 training, 1,000 development, and 1,805 test annotations. Each data point consists of a line of Python code together with a manually created natural-language description.
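As an illustrative sketch, a single annotation can be modelled as a code/description pair. The field names below are hypothetical, not the dataset's actual schema:

```python
# Hypothetical shape of one Django annotation: a line of Python source
# paired with its manually written natural-language description.
example = {
    "code": "if request.method == 'POST':",
    "description": "if the request method equals the string 'POST',",
}

def to_translation_pair(item):
    """Format an annotation as a (source, target) pair for training a
    code-to-pseudocode translation model."""
    return item["code"], item["description"]

src, tgt = to_translation_pair(example)
```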
```bibtex
@inproceedings{oda2015ase:pseudogen1,
author = {Oda, Yusuke and Fudaba, Hiroyuki and Neubig, Graham and Hata, Hideaki and Sakti, Sakriani and Toda, Tomoki and Nakamura, Satoshi},
title = {Learning to Generate Pseudo-code from Source Code Using Statistical Machine Translation},
booktitle = {Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)},
series = {ASE '15},
month = {November},
year = {2015},
isbn = {978-1-5090-0025-8},
pages = {574--584},
numpages = {11},
url = {https://doi.org/10.1109/ASE.2015.36},
doi = {10.1109/ASE.2015.36},
acmid = {2916173},
publisher = {IEEE Computer Society},
address = {Lincoln, Nebraska, USA}
}
```
|
proteinea/remote_homology | 2022-12-12T16:20:18.000Z | [
"doi:10.57967/hf/1107",
"region:us"
] | proteinea | null | null | null | 2 | 96 | Entry not found |
Multimodal-Fatima/OK-VQA_test | 2023-05-29T02:08:55.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 0 | 96 | ---
dataset_info:
features:
- name: image
dtype: image
- name: question_type
dtype: string
- name: confidence
dtype: int32
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: raw_answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: id
dtype: int64
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: clip_tags_ViT_B_16_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: blip_caption_topk_50_Salesforce_blip_image_captioning_large_multiple
sequence: string
splits:
- name: test
num_bytes: 1133674079.0
num_examples: 5046
download_size: 959321361
dataset_size: 1133674079.0
---
# Dataset Card for "OK-VQA_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
climatebert/climate_detection | 2023-04-18T14:39:49.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | climatebert | null | null | null | 2 | 96 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: ClimateTalkDetection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
splits:
- name: train
num_bytes: 638487
num_examples: 1300
- name: test
num_bytes: 222330
num_examples: 400
download_size: 492038
dataset_size: 860817
---
# Dataset Card for climate_detection
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for detecting climate-related paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given paragraph is climate-related or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> not climate-related, 1 -> climate-related)
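As a sketch, the integer label can be decoded back into a readable class name with a plain mapping, mirroring the `class_label` names declared in the YAML header:

```python
# 0 -> not climate-related, 1 -> climate-related, as documented above.
LABEL_NAMES = {0: "not climate-related", 1: "climate-related"}

def decode_label(example):
    """Return a copy of the example with a human-readable label name."""
    return {**example, "label_name": LABEL_NAMES[example["label"]]}

decoded = decode_label({"text": "Scope 3: Optional scope ...", "label": 1})
```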
### Data Splits
The dataset is split into:
- train: 1,300
- test: 400
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. |
nikodallanoce/wmt14 | 2023-05-04T10:55:08.000Z | [
"region:us"
] | nikodallanoce | null | @InProceedings{bojar-EtAl:2014:W14-33,
author = {Bojar, Ondrej and Buck, Christian and Federmann, Christian and Haddow, Barry and Koehn, Philipp and Leveling, Johannes and Monz, Christof and Pecina, Pavel and Post, Matt and Saint-Amand, Herve and Soricut, Radu and Specia, Lucia and Tamchyna, Ale\v{s}},
title = {Findings of the 2014 Workshop on Statistical Machine Translation},
booktitle = {Proceedings of the Ninth Workshop on Statistical Machine Translation},
month = {June},
year = {2014},
address = {Baltimore, Maryland, USA},
publisher = {Association for Computational Linguistics},
pages = {12--58},
url = {http://www.aclweb.org/anthology/W/W14/W14-3302}
} | null | 0 | 96 | # Aim of this dataset
The code used to retrieve and create this dataset is almost identical to that of the original [wmt14](https://huggingface.co/datasets/wmt14) dataset.
I only added the option to retrieve the "es-en" translation pairs from newstest2013. This pair is available only for the train and validation splits.
**Pay attention**: some es-en sentences in the validation set contain a backslash followed by a double-quote character (\").
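A small sketch of normalising those sequences before use, assuming each stray `\"` should become a plain double quote:

```python
def unescape_quotes(sentence: str) -> str:
    """Replace backslash-escaped double quotes (\\") with plain ones."""
    return sentence.replace('\\"', '"')

cleaned = unescape_quotes('Dijo: \\"hola\\" y se fue.')
# -> 'Dijo: "hola" y se fue.'
```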
Thanks to the Huggingface team for all the work they have done! |
christinacdl/clickbait_notclickbait_dataset | 2023-06-22T14:42:37.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | christinacdl | null | null | null | 0 | 96 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---
0: not clickbait
1: clickbait
The dataset was cleaned of duplicates, keeping only the first occurrence of each text.
The dataset was split into train and test sets with a 0.2 split ratio.
The held-out portion was further split into test and validation sets with a 0.2 split ratio.
Size of training set: 43,802
Size of test set: 8,760
Size of validation set: 2,191
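The cleaning and splitting procedure described above can be sketched as follows (a minimal, self-contained illustration, not the exact script used to build the dataset):

```python
import random

def dedupe_keep_first(rows):
    """Drop duplicate texts, keeping only the first occurrence of each."""
    seen, out = set(), []
    for text, label in rows:
        if text not in seen:
            seen.add(text)
            out.append((text, label))
    return out

def hold_out(rows, ratio=0.2, seed=42):
    """Shuffle and split off `ratio` of the rows as a held-out set."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * ratio)
    return rows[cut:], rows[:cut]

# 100 unique texts, each duplicated once; the duplicates are removed.
rows = dedupe_keep_first([(f"text {i}", i % 2) for i in range(100)] * 2)
train, rest = hold_out(rows)        # 80 train, 20 held out
test, validation = hold_out(rest)   # 16 test, 4 validation
```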
|
SiberiaSoft/SiberianDatasetXL | 2023-07-24T00:28:56.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] | SiberiaSoft | null | null | null | 2 | 96 | ---
license: mit
task_categories:
- text-generation
- text2text-generation
- conversational
language:
- ru
size_categories:
- 100K<n<1M
---
### SiberiaSoft/SiberianDatasetXL
A dataset of instructions, dialogues, and QA.
## Task composition:
| Task | Share |
|:-----------------------------------------------------------------------------:|:---------------------:|
| Live dialogues with context | 38.746% |
| QA with long answers | 11.907% |
| russian_instructions_2 from Den4ikAI/russian_instructions_2 (cleaned) | 9.65% |
| QA over text from Den4ikAI/ru_sberquad_long_answers | 9.203% |
| QA with short answers | 8.57% |
| Instructions from IlyaGusev/ru_turbo_alpaca_evol_instruct (very aggressively cleaned) | 6.087% |
| Persona-based dialogues with context | 5.795% |
| Instructions from its5Q/yandex-q | 4.373% |
| QA using Wikipedia | 2.822% |
| Instructions from lksy/ru_instruct_gpt4 (aggressively cleaned) | 2.741% |
| Problem solving | 0.085% |
| QA: explain to a child | 0.02% |
### Citation
```
@MISC{SiberianDatasetXL,
  author = {Denis Petrov and Ivan Ramovich},
title = {Russian dataset for Instruct/Chat models},
url = {https://huggingface.co/datasets/SiberiaSoft/SiberianDatasetXL},
year = 2023
}
``` |
PurCL/bincorp-26m-all | 2023-08-22T20:07:44.000Z | [
"region:us"
] | PurCL | null | null | null | 0 | 96 | ---
viewer: true
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: code
dtype: string
- name: data_dep
dtype: string
splits:
- name: train
num_bytes: 39826202125.70429
num_examples: 14019961
- name: test
num_bytes: 11713589027.6
num_examples: 4123518
- name: valid
num_bytes: 7028153984.695704
num_examples: 2474111
download_size: 19420221346
dataset_size: 58567945137.99999
---
# Dataset Card for "bincorp-26m-all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cat-claws/face-verification | 2023-08-27T13:44:08.000Z | [
"region:us"
] | cat-claws | null | null | null | 0 | 96 | ---
configs:
- config_name: default
data_files:
- split: agedb_30
path: data/agedb_30-*
- split: calfw
path: data/calfw-*
- split: cfp_ff
path: data/cfp_ff-*
- split: cfp_fp
path: data/cfp_fp-*
- split: cplfw
path: data/cplfw-*
- split: lfw
path: data/lfw-*
dataset_info:
features:
- name: image1
dtype: image
- name: image2
dtype: image
- name: target
dtype:
class_label:
names:
'0': different
'1': same
splits:
- name: agedb_30
num_bytes: 231473197.0
num_examples: 6000
- name: calfw
num_bytes: 252048890.0
num_examples: 6000
- name: cfp_ff
num_bytes: 274781437.0
num_examples: 7000
- name: cfp_fp
num_bytes: 238847786.0
num_examples: 7000
- name: cplfw
num_bytes: 222484496.0
num_examples: 6000
- name: lfw
num_bytes: 236255483.0
num_examples: 6000
download_size: 1251590659
dataset_size: 1455891289.0
---
# Dataset Card for "face-verification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distil-whisper/ami-ihm-timestamped | 2023-09-25T10:30:13.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | distil-whisper | The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
synchronized to a common timeline. These include close-talking and far-field microphones, individual and
room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
the participants also have unsynchronized pens available to them that record what is written. The meetings
were recorded in English using three different rooms with different acoustic properties, and include mostly
non-native speakers. \n | @inproceedings{10.1007/11677482_3,
author = {Carletta, Jean and Ashby, Simone and Bourban, Sebastien and Flynn, Mike and Guillemot, Mael and Hain, Thomas and Kadlec, Jaroslav and Karaiskos, Vasilis and Kraaij, Wessel and Kronenthal, Melissa and Lathoud, Guillaume and Lincoln, Mike and Lisowska, Agnes and McCowan, Iain and Post, Wilfried and Reidsma, Dennis and Wellner, Pierre},
title = {The AMI Meeting Corpus: A Pre-Announcement},
year = {2005},
isbn = {3540325492},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
url = {https://doi.org/10.1007/11677482_3},
doi = {10.1007/11677482_3},
abstract = {The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting
recordings. It is being created in the context of a project that is developing meeting
browsing technology and will eventually be released publicly. Some of the meetings
it contains are naturally occurring, and some are elicited, particularly using a scenario
in which the participants play different roles in a design team, taking a design project
from kick-off to completion over the course of a day. The corpus is being recorded
using a wide range of devices including close-talking and far-field microphones, individual
and room-view video cameras, projection, a whiteboard, and individual pens, all of
which produce output signals that are synchronized with each other. It is also being
hand-annotated for many different phenomena, including orthographic transcription,
discourse properties such as named entities and dialogue acts, summaries, emotions,
and some head and hand gestures. We describe the data set, including the rationale
behind using elicited material, and explain how the material is being recorded, transcribed
and annotated.},
booktitle = {Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction},
pages = {28–39},
numpages = {12},
location = {Edinburgh, UK},
series = {MLMI'05}
} | null | 0 | 96 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: AMI IHM
---
# Distil Whisper: AMI IHM With Timestamps
This is a variant of the [AMI IHM](https://huggingface.co/datasets/edinburghcstr/ami) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/edinburghcstr/ami).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-ihm", "ihm")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset one at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-ihm", "ihm", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
distil-whisper/ami-sdm-timestamped | 2023-09-25T10:30:13.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | distil-whisper | The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
synchronized to a common timeline. These include close-talking and far-field microphones, individual and
room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
the participants also have unsynchronized pens available to them that record what is written. The meetings
were recorded in English using three different rooms with different acoustic properties, and include mostly
non-native speakers. \n | @inproceedings{10.1007/11677482_3,
author = {Carletta, Jean and Ashby, Simone and Bourban, Sebastien and Flynn, Mike and Guillemot, Mael and Hain, Thomas and Kadlec, Jaroslav and Karaiskos, Vasilis and Kraaij, Wessel and Kronenthal, Melissa and Lathoud, Guillaume and Lincoln, Mike and Lisowska, Agnes and McCowan, Iain and Post, Wilfried and Reidsma, Dennis and Wellner, Pierre},
title = {The AMI Meeting Corpus: A Pre-Announcement},
year = {2005},
isbn = {3540325492},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
url = {https://doi.org/10.1007/11677482_3},
doi = {10.1007/11677482_3},
abstract = {The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting
recordings. It is being created in the context of a project that is developing meeting
browsing technology and will eventually be released publicly. Some of the meetings
it contains are naturally occurring, and some are elicited, particularly using a scenario
in which the participants play different roles in a design team, taking a design project
from kick-off to completion over the course of a day. The corpus is being recorded
using a wide range of devices including close-talking and far-field microphones, individual
and room-view video cameras, projection, a whiteboard, and individual pens, all of
which produce output signals that are synchronized with each other. It is also being
hand-annotated for many different phenomena, including orthographic transcription,
discourse properties such as named entities and dialogue acts, summaries, emotions,
and some head and hand gestures. We describe the data set, including the rationale
behind using elicited material, and explain how the material is being recorded, transcribed
and annotated.},
booktitle = {Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction},
pages = {28–39},
numpages = {12},
location = {Edinburgh, UK},
series = {MLMI'05}
} | null | 0 | 96 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: AMI SDM
---
# Distil Whisper: AMI SDM With Timestamps
This is a variant of the [AMI SDM](https://huggingface.co/datasets/edinburghstr/ami) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/edinburghstr/ami).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-sdm", "sdm")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-sdm", "sdm", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
distil-whisper/peoples_speech-clean-timestamped | 2023-09-25T10:30:12.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | distil-whisper | The People's Speech is a free-to-download 30,000-hour and growing supervised
conversational English speech recognition dataset licensed for academic and
commercial usage under CC-BY-SA (with a CC-BY subset). | @article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Ceron and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: A Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 96 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: People's Speech Clean
---
# Distil Whisper: People's Speech Clean With Timestamps
This is a variant of the [People's Speech Clean](https://huggingface.co/datasets/MLCommons/peoples_speech) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/MLCommons/peoples_speech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
distil-whisper/tedlium-timestamped | 2023-09-25T10:30:13.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-nc-nd-3.0",
"region:us"
] | distil-whisper | The TED-LIUM corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech. | null | null | 0 | 96 | ---
license: cc-by-nc-nd-3.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: TEDLIUM
---
# Distil Whisper: TEDLIUM With Timestamps
This is a variant of the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/LIUM/tedlium).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-nc-nd-3.0.
|
distil-whisper/voxpopuli-timestamped | 2023-09-25T10:30:13.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc0-1.0",
"region:us"
] | distil-whisper | A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. | @inproceedings{wang-etal-2021-voxpopuli,
title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning,
Semi-Supervised Learning and Interpretation",
author = "Wang, Changhan and
Riviere, Morgane and
Lee, Ann and
Wu, Anne and
Talnikar, Chaitanya and
Haziza, Daniel and
Williamson, Mary and
Pino, Juan and
Dupoux, Emmanuel",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics
and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.80",
doi = "10.18653/v1/2021.acl-long.80",
pages = "993--1003",
} | null | 0 | 96 | ---
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: VoxPopuli
---
# Distil Whisper: VoxPopuli With Timestamps
This is a variant of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/facebook/voxpopuli).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/voxpopuli", "en")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/voxpopuli", "en", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc0-1.0.
|
yuntian-deng/im2latex-100k | 2022-08-26T23:53:28.000Z | [
"region:us"
] | yuntian-deng | null | null | null | 5 | 95 | Entry not found |
maykcaldas/smiles-transformers | 2023-04-04T22:02:47.000Z | [
"size_categories:100M<n<1B",
"language:en",
"license:mit",
"region:us"
] | maykcaldas | null | null | null | 2 | 95 | ---
license: mit
language:
- en
pretty_name: smiles-transformer-dataset
size_categories:
- 100M<n<1B
dataset_info:
features:
- name: text
dtype: string
- name: formula
dtype: string
- name: NumHDonors
dtype: int64
- name: NumHAcceptors
dtype: int64
- name: MolLogP
dtype: float64
- name: NumHeteroatoms
dtype: int64
- name: RingCount
dtype: int64
- name: NumRotatableBonds
dtype: int64
- name: NumAromaticBonds
dtype: int64
- name: NumAcidGroups
dtype: int64
- name: NumBasicGroups
dtype: int64
- name: Apol
dtype: float64
splits:
- name: train
num_bytes: 136431671689
num_examples: 908086717
- name: test
num_bytes: 7437928022
num_examples: 50487919
- name: validation
num_bytes: 7621324737
num_examples: 50605067
download_size: 34998665406
dataset_size: 151490924448
---
# smiles-transformers dataset
TODO: Add references to the datasets we curated
## dataset features
- name: text
- Molecule SMILES : string
- name: formula
- Molecular formula : string
- name: NumHDonors
- Number of hydrogen bond donors : int
- name: NumHAcceptors
- Number of hydrogen bond acceptors : int
- name: MolLogP
- Wildman-Crippen LogP : float
- name: NumHeteroatoms
- Number of heteroatoms : int
- name: RingCount
- Number of rings : int
- name: NumRotatableBonds
- Number of rotatable bonds : int
- name: NumAromaticBonds
- Number of aromatic bonds : int
- name: NumAcidGroups
- Number of acid groups : int
- name: NumBasicGroups
- Number of basic groups : int
- name: Apol
- Sum of atomic polarizabilities : float
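As a rough illustration of the schema above, each record is a flat mapping from these feature names to values. The sketch below builds a hypothetical record and checks it against the declared dtypes; the SMILES string and descriptor values are made up for illustration, not taken from the dataset:

```python
# Hypothetical example record following the schema declared in dataset_info.
# The SMILES string and descriptor values below are illustrative only.
record = {
    "text": "CCO",       # molecule SMILES (ethanol)
    "formula": "C2H6O",  # molecular formula
    "NumHDonors": 1,
    "NumHAcceptors": 1,
    "MolLogP": -0.0014,
    "NumHeteroatoms": 1,
    "RingCount": 0,
    "NumRotatableBonds": 0,
    "NumAromaticBonds": 0,
    "NumAcidGroups": 0,
    "NumBasicGroups": 0,
    "Apol": 5.3,
}

# Field groups mirroring the declared feature dtypes.
INT_FIELDS = {
    "NumHDonors", "NumHAcceptors", "NumHeteroatoms", "RingCount",
    "NumRotatableBonds", "NumAromaticBonds", "NumAcidGroups", "NumBasicGroups",
}
FLOAT_FIELDS = {"MolLogP", "Apol"}
STR_FIELDS = {"text", "formula"}


def validate(rec: dict) -> bool:
    """Return True if a record has exactly the declared fields and types."""
    if set(rec) != INT_FIELDS | FLOAT_FIELDS | STR_FIELDS:
        return False
    return (
        all(isinstance(rec[k], int) for k in INT_FIELDS)
        and all(isinstance(rec[k], float) for k in FLOAT_FIELDS)
        and all(isinstance(rec[k], str) for k in STR_FIELDS)
    )


print(validate(record))  # → True
```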
## citation information |
MentalFox/GPTeacher | 2023-04-10T11:12:29.000Z | [
"region:us"
] | MentalFox | null | null | null | 1 | 95 | # GPTeacher
A collection of modular datasets generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer
The General-Instruct dataset used many of the same seed prompts as Alpaca, but also includes specific examples of things we didn't see much of in Alpaca, such as chain-of-thought reasoning, logic puzzles, wordplay, and (light) role playing; the model was also asked to include its reasoning and thought steps where appropriate in example responses, among other things.
The General-Instruct dataset is about 20,000 examples with just deduplication.
We are still cleaning the Code-Instruct dataset; it will be up once cleaned.
Each dataset is split into 5 separate variants based on similarity-scored cleaning: a simple-dedupe-only set, plus a range of <60% to <90% similarity-cleaned sets for each.
They are all made to be compliant with Alpaca's dataset format, i.e. each example has an instruction, input, and output field, which should make it easier to use the same fine-tuning script and process as Alpaca.
Documentation on the Toolformer section is coming soon; we generated a dataset that uses a set of predefined tools, including search, python, terminal/shell, wikipedia, wolfram, and others. More info on the prompt format for inference soon.
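Since every example carries instruction/input/output fields, a record can be rendered into a training prompt with the widely used Alpaca template. A minimal sketch (the example record is made up, and the exact template used in any particular fine-tune may differ):

```python
# A GPTeacher/Alpaca-style record: every example has these three fields.
example = {
    "instruction": "List three prime numbers.",
    "input": "",
    "output": "2, 3, 5",
}


def to_prompt(rec: dict) -> str:
    """Render an instruction/input/output record into an Alpaca-style prompt.

    This mirrors the common Alpaca template, which switches preamble
    depending on whether the optional `input` field is non-empty.
    """
    if rec["input"]:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Input:\n{rec['input']}\n\n"
            f"### Response:\n{rec['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{rec['instruction']}\n\n"
        f"### Response:\n{rec['output']}"
    )


print(to_prompt(example).splitlines()[0])
```

The same rendering works for any of the 5 cleaned variants, since they share the field layout.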
|
sezer12138/ADE20k_Segementation | 2023-07-21T03:06:25.000Z | [
"region:us"
] | sezer12138 | null | null | null | 0 | 95 | ---
dataset_info:
features:
- name: image
dtype: image
- name: annotated
dtype: image
- name: Scene_category
dtype:
class_label:
names:
'0': abbey
'1': access_road
'2': acropolis
'3': air_base
'4': aircraft_carrier_object
'5': airfield
'6': airlock
'7': airplane
'8': airplane_cabin
'9': airport
'10': airport_terminal
'11': airport_ticket_counter
'12': alcove
'13': alley
'14': amphitheater
'15': amphitheater_indoor
'16': amusement_arcade
'17': amusement_park
'18': anechoic_chamber
'19': apartment_building_outdoor
'20': apse_indoor
'21': apse_outdoor
'22': aquarium
'23': aquatic_theater
'24': aqueduct
'25': arbor
'26': arcade
'27': arch
'28': archaelogical_excavation
'29': archipelago
'30': archive
'31': armory
'32': army_base
'33': arrival_gate_indoor
'34': arrival_gate_outdoor
'35': art_gallery
'36': art_school
'37': art_studio
'38': artificial
'39': artists_loft
'40': assembly_hall
'41': assembly_line
'42': assembly_plant
'43': athletic_field_indoor
'44': athletic_field_outdoor
'45': atrium_home
'46': atrium_public
'47': attic
'48': auditorium
'49': auto_factory
'50': auto_mechanics_indoor
'51': auto_mechanics_outdoor
'52': auto_racing_paddock
'53': auto_showroom
'54': awning_deck
'55': back_porch
'56': backdrop
'57': backroom
'58': backseat
'59': backstage
'60': backstage_outdoor
'61': backstairs
'62': backstairs_indoor
'63': backwoods
'64': badlands
'65': badminton_court_indoor
'66': badminton_court_outdoor
'67': baggage_claim
'68': balcony_interior
'69': ball_pit
'70': ballet
'71': ballroom
'72': balustrade
'73': bamboo_forest
'74': bank_indoor
'75': bank_outdoor
'76': bank_vault
'77': banquet_hall
'78': baptistry_indoor
'79': baptistry_outdoor
'80': bar
'81': barbeque
'82': barbershop
'83': barn
'84': barndoor
'85': barnyard
'86': barrack
'87': barrel_storage
'88': baseball
'89': baseball_field
'90': basement
'91': basilica
'92': basin_outdoor
'93': basketball
'94': basketball_court_indoor
'95': basketball_court_outdoor
'96': bath_indoor
'97': bath_outdoor
'98': bathhouse
'99': bathhouse_outdoor
'100': bathroom
'101': batters_box
'102': batting_cage_indoor
'103': batting_cage_outdoor
'104': battlefield
'105': battlement
'106': bay
'107': bayou
'108': bazaar_indoor
'109': bazaar_outdoor
'110': beach
'111': beach_house
'112': beauty_salon
'113': bedchamber
'114': bedroom
'115': beer_garden
'116': beer_hall
'117': belfry
'118': bell_foundry
'119': berth
'120': berth_deck
'121': betting_shop
'122': bicycle_racks
'123': bindery
'124': biology_laboratory
'125': bistro_indoor
'126': bistro_outdoor
'127': bleachers_indoor
'128': bleachers_outdoor
'129': block
'130': boardwalk
'131': boat
'132': boat_deck
'133': boathouse
'134': bog
'135': bomb_shelter_indoor
'136': bookbindery
'137': bookshelf
'138': bookstore
'139': booth
'140': booth_indoor
'141': booth_outdoor
'142': botanical_garden
'143': bottle_storage
'144': bottomland
'145': bow_window_indoor
'146': bow_window_outdoor
'147': bowling_alley
'148': box_seat
'149': boxing_ring
'150': breakfast_table
'151': breakroom
'152': brewery_indoor
'153': brewery_outdoor
'154': bric-a-brac
'155': brickyard_indoor
'156': brickyard_outdoor
'157': bridge
'158': bridle_path
'159': broadleaf
'160': brooklet
'161': bubble_chamber
'162': buffet
'163': building_complex
'164': building_facade
'165': bulkhead
'166': bullpen
'167': bullring
'168': bunk_bed
'169': burial_chamber
'170': bus_depot_indoor
'171': bus_depot_outdoor
'172': bus_interior
'173': bus_shelter
'174': bus_station_indoor
'175': bus_station_outdoor
'176': butchers_shop
'177': butte
'178': bypass
'179': byroad
'180': cabana
'181': cabin_cruiser
'182': cabin_indoor
'183': cabin_outdoor
'184': cafeteria
'185': call_center
'186': campsite
'187': campus
'188': candy_store
'189': canteen
'190': canyon
'191': car_dealership
'192': caravansary
'193': cardroom
'194': cargo_container_interior
'195': cargo_deck
'196': cargo_helicopter
'197': carport_indoor
'198': carport_outdoor
'199': carrousel
'200': cascade
'201': casino_indoor
'202': casino_outdoor
'203': castle
'204': catacomb
'205': cataract
'206': cathedral_indoor
'207': cathedral_outdoor
'208': catwalk
'209': cavern_indoor
'210': cavern_outdoor
'211': cellar
'212': cemetery
'213': chair_lift
'214': chalet
'215': chaparral
'216': chapel
'217': checkout_counter
'218': cheese_factory
'219': chemical_plant
'220': chemistry_lab
'221': chicken_coop_indoor
'222': chicken_coop_outdoor
'223': chicken_farm_indoor
'224': chicken_farm_outdoor
'225': childs_room
'226': choir_loft_interior
'227': chuck_wagon
'228': church_indoor
'229': church_outdoor
'230': circus_tent_indoor
'231': circus_tent_outdoor
'232': city
'233': classroom
'234': clean_room
'235': cliff
'236': clock_tower_indoor
'237': cloister_indoor
'238': cloister_outdoor
'239': closet
'240': clothing_store
'241': coast
'242': coast_road
'243': cockpit
'244': cocktail_lounge
'245': coffee_shop
'246': computer_room
'247': conference_center
'248': conference_hall
'249': conference_room
'250': confessional
'251': construction_site
'252': control_room
'253': control_tower_indoor
'254': control_tower_outdoor
'255': convenience_store_indoor
'256': convenience_store_outdoor
'257': coral_reef
'258': corn_field
'259': corner
'260': corral
'261': corridor
'262': cottage
'263': cottage_garden
'264': country_house
'265': country_road
'266': courthouse
'267': courtroom
'268': courtyard
'269': covered_bridge_interior
'270': crawl_space
'271': creek
'272': crevasse
'273': crosswalk
'274': cultivated
'275': customhouse
'276': cybercafe
'277': dacha
'278': dairy_indoor
'279': dairy_outdoor
'280': dam
'281': dance_floor
'282': dance_school
'283': darkroom
'284': day_care_center
'285': deck-house_boat_deck_house
'286': deck-house_deck_house
'287': delicatessen
'288': dentists_office
'289': department_store
'290': departure_lounge
'291': desert_road
'292': diner_indoor
'293': diner_outdoor
'294': dinette_home
'295': dining_area
'296': dining_car
'297': dining_hall
'298': dining_room
'299': dirt_track
'300': discotheque
'301': distillery
'302': ditch
'303': diving_board
'304': dock
'305': dolmen
'306': donjon
'307': door
'308': doorway_indoor
'309': doorway_outdoor
'310': dorm_room
'311': downtown
'312': drainage_ditch
'313': dress_shop
'314': dressing_room
'315': drill_rig
'316': driveway
'317': driving_range_indoor
'318': driving_range_outdoor
'319': drugstore
'320': dry
'321': dry_dock
'322': dugout
'323': earth_fissure
'324': east_asia
'325': editing_room
'326': electrical_substation
'327': elevated_catwalk
'328': elevator_interior
'329': elevator_lobby
'330': elevator_shaft
'331': embankment
'332': embassy
'333': embrasure
'334': engine_room
'335': entrance
'336': entrance_hall
'337': entranceway_indoor
'338': entranceway_outdoor
'339': entryway_outdoor
'340': escalator_indoor
'341': escalator_outdoor
'342': escarpment
'343': establishment
'344': estaminet
'345': estuary
'346': excavation
'347': exhibition_hall
'348': exterior
'349': fabric_store
'350': factory_indoor
'351': factory_outdoor
'352': fairway
'353': fan
'354': farm
'355': farm_building
'356': farmhouse
'357': fastfood_restaurant
'358': feed_bunk
'359': fence
'360': ferryboat_indoor
'361': field_house
'362': field_road
'363': field_tent_indoor
'364': field_tent_outdoor
'365': fire_escape
'366': fire_station
'367': fire_trench
'368': fireplace
'369': firing_range_indoor
'370': firing_range_outdoor
'371': fish_farm
'372': fishmarket
'373': fishpond
'374': fitting_room_interior
'375': fjord
'376': flashflood
'377': flatlet
'378': flea_market_indoor
'379': flea_market_outdoor
'380': floating_dock
'381': floating_dry_dock
'382': flood
'383': flood_plain
'384': florist_shop_indoor
'385': florist_shop_outdoor
'386': flowerbed
'387': flume_indoor
'388': fly_bridge
'389': flying_buttress
'390': food_court
'391': football
'392': football_field
'393': foothill
'394': forecourt
'395': foreshore
'396': forest_fire
'397': forest_path
'398': forest_road
'399': forklift
'400': formal_garden
'401': fort
'402': fortress
'403': foundry_indoor
'404': foundry_outdoor
'405': fountain
'406': freestanding
'407': freeway
'408': freight_elevator
'409': front_porch
'410': frontseat
'411': funeral_chapel
'412': funeral_home
'413': furnace_room
'414': galley
'415': game_room
'416': gangplank
'417': garage_indoor
'418': garage_outdoor
'419': garbage_dump
'420': garden
'421': gas_station
'422': gas_well
'423': gasworks
'424': gate
'425': gatehouse
'426': gazebo_interior
'427': general_store_indoor
'428': general_store_outdoor
'429': geodesic_dome_indoor
'430': geodesic_dome_outdoor
'431': ghost_town
'432': gift_shop
'433': glacier
'434': glade
'435': glen
'436': golf_course
'437': gorge
'438': granary
'439': grape_arbor
'440': great_hall
'441': greengrocery
'442': greenhouse_indoor
'443': greenhouse_outdoor
'444': grotto
'445': grove
'446': guardhouse
'447': guardroom
'448': guesthouse
'449': gulch
'450': gun_deck_indoor
'451': gun_deck_outdoor
'452': gun_store
'453': gymnasium_indoor
'454': gymnasium_outdoor
'455': hacienda
'456': hallway
'457': handball_court
'458': hangar_indoor
'459': hangar_outdoor
'460': harbor
'461': hardware_store
'462': hat_shop
'463': hatchery
'464': hayfield
'465': hayloft
'466': head_shop
'467': hearth
'468': heath
'469': hedge_maze
'470': hedgerow
'471': heliport
'472': hen_yard
'473': herb_garden
'474': highway
'475': hill
'476': hillock
'477': hockey
'478': hollow
'479': home_office
'480': home_theater
'481': hoodoo
'482': hospital
'483': hospital_room
'484': hot_spring
'485': hot_tub_indoor
'486': hot_tub_outdoor
'487': hotel_breakfast_area
'488': hotel_outdoor
'489': hotel_room
'490': house
'491': housing_estate
'492': housing_project
'493': howdah
'494': hunting_lodge_indoor
'495': hunting_lodge_outdoor
'496': hut
'497': hutment
'498': ice_cream_parlor
'499': ice_floe
'500': ice_shelf
'501': ice_skating_rink_indoor
'502': ice_skating_rink_outdoor
'503': iceberg
'504': igloo
'505': imaret
'506': incinerator_indoor
'507': incinerator_outdoor
'508': indoor_procenium
'509': indoor_round
'510': indoor_seats
'511': industrial_area
'512': industrial_park
'513': inlet
'514': inn_indoor
'515': inn_outdoor
'516': insane_asylum
'517': irrigation_ditch
'518': islet
'519': jacuzzi_indoor
'520': jacuzzi_outdoor
'521': jail_cell
'522': jail_indoor
'523': jail_outdoor
'524': japanese_garden
'525': jetty
'526': jewelry_shop
'527': joss_house
'528': juke_joint
'529': jungle
'530': junk_pile
'531': junkyard
'532': jury_box
'533': kasbah
'534': kennel_indoor
'535': kennel_outdoor
'536': kindergarden_classroom
'537': kiosk_indoor
'538': kiosk_outdoor
'539': kitchen
'540': kitchenette
'541': kraal
'542': lab_classroom
'543': laboratorywet
'544': labyrinth_indoor
'545': labyrinth_outdoor
'546': lagoon
'547': landfill
'548': landing
'549': landing_deck
'550': landing_strip
'551': laundromat
'552': lava_flow
'553': lavatory
'554': lawn
'555': layby
'556': lean-to
'557': lean-to_tent
'558': lecture_room
'559': legislative_chamber
'560': levee
'561': library
'562': library_indoor
'563': library_outdoor
'564': lido_deck_indoor
'565': lido_deck_outdoor
'566': lift_bridge
'567': lighthouse
'568': limousine_interior
'569': liquor_store_indoor
'570': liquor_store_outdoor
'571': living_room
'572': loading_dock
'573': lobby
'574': lock_chamber
'575': locker_room
'576': loft
'577': loge
'578': loggia_outdoor
'579': lookout_station_indoor
'580': lookout_station_outdoor
'581': lower_deck
'582': luggage_van
'583': lumberyard_indoor
'584': lumberyard_outdoor
'585': lyceum
'586': machine_shop
'587': manhole
'588': mansard
'589': mansion
'590': manufactured_home
'591': market_indoor
'592': market_outdoor
'593': marsh
'594': martial_arts_gym
'595': massage_room
'596': mastaba
'597': maternity_ward
'598': mausoleum
'599': meadow
'600': meat_house
'601': medina
'602': megalith
'603': menhir
'604': mens_store_outdoor
'605': mental_institution_indoor
'606': mental_institution_outdoor
'607': mesa
'608': mesoamerican
'609': mess_hall
'610': mews
'611': mezzanine
'612': military_headquarters
'613': military_hospital
'614': military_hut
'615': military_tent
'616': millpond
'617': millrace
'618': mine
'619': mineral_bath
'620': mineshaft
'621': mini_golf_course_indoor
'622': mini_golf_course_outdoor
'623': misc
'624': mission
'625': mobile_home
'626': monastery_indoor
'627': monastery_outdoor
'628': moon_bounce
'629': moor
'630': morgue
'631': mosque_indoor
'632': mosque_outdoor
'633': motel
'634': mountain
'635': mountain_path
'636': mountain_road
'637': mountain_snowy
'638': movie_theater_indoor
'639': movie_theater_outdoor
'640': mudflat
'641': museum_indoor
'642': museum_outdoor
'643': music_store
'644': music_studio
'645': natural
'646': natural_history_museum
'647': natural_spring
'648': naval_base
'649': needleleaf
'650': newsroom
'651': newsstand_indoor
'652': newsstand_outdoor
'653': nightclub
'654': nook
'655': nuclear_power_plant_indoor
'656': nuclear_power_plant_outdoor
'657': nunnery
'658': nursery
'659': nursing_home
'660': nursing_home_outdoor
'661': oasis
'662': oast_house
'663': observation_station
'664': observatory_indoor
'665': observatory_outdoor
'666': observatory_post
'667': ocean
'668': ocean_deep
'669': ocean_shallow
'670': office
'671': office_building
'672': office_cubicles
'673': oil_refinery_indoor
'674': oil_refinery_outdoor
'675': oilrig
'676': one-way_street
'677': open-hearth_furnace
'678': operating_room
'679': operating_table
'680': optician
'681': orchard
'682': orchestra_pit
'683': organ_loft_interior
'684': orlop_deck
'685': ossuary
'686': outbuilding
'687': outcropping
'688': outhouse_indoor
'689': outhouse_outdoor
'690': outside
'691': overpass
'692': oyster_bar
'693': oyster_farm
'694': packaging_plant
'695': pagoda
'696': palace
'697': palace_hall
'698': palestra
'699': pantry
'700': paper_mill
'701': parade_ground
'702': park
'703': parking_garage_indoor
'704': parking_garage_outdoor
'705': parking_lot
'706': parkway
'707': parlor
'708': particle_accelerator
'709': party_tent_indoor
'710': party_tent_outdoor
'711': passenger_deck
'712': pasture
'713': patio
'714': patio_indoor
'715': pavement
'716': pavilion
'717': pawnshop
'718': pawnshop_outdoor
'719': pedestrian_overpass_indoor
'720': penalty_box
'721': performance
'722': perfume_shop
'723': pet_shop
'724': pharmacy
'725': phone_booth
'726': physics_laboratory
'727': piano_store
'728': picnic_area
'729': pier
'730': pig_farm
'731': pilothouse_indoor
'732': pilothouse_outdoor
'733': pinetum
'734': piste_road
'735': pitchers_mound
'736': pizzeria
'737': pizzeria_outdoor
'738': planetarium_indoor
'739': planetarium_outdoor
'740': plantation_house
'741': platform
'742': playground
'743': playroom
'744': plaza
'745': plunge
'746': podium_indoor
'747': podium_outdoor
'748': police_station
'749': pond
'750': pontoon_bridge
'751': poolroom_home
'752': poop_deck
'753': porch
'754': portico
'755': portrait_studio
'756': postern
'757': powder_room
'758': power_plant_outdoor
'759': preserve
'760': print_shop
'761': priory
'762': promenade
'763': promenade_deck
'764': pub_indoor
'765': pub_outdoor
'766': pueblo
'767': pulpit
'768': pump_room
'769': pumping_station
'770': putting_green
'771': quadrangle
'772': questionable
'773': quicksand
'774': quonset_hut_indoor
'775': quonset_hut_outdoor
'776': racecourse
'777': raceway
'778': raft
'779': rail_indoor
'780': rail_outdoor
'781': railroad_track
'782': railway_yard
'783': rainforest
'784': ramp
'785': ranch
'786': ranch_house
'787': reading_room
'788': reception
'789': reception_room
'790': recreation_room
'791': rectory
'792': recycling_plant_indoor
'793': recycling_plant_outdoor
'794': refectory
'795': repair_shop
'796': residential_neighborhood
'797': resort
'798': rest_area
'799': rest_stop
'800': restaurant
'801': restaurant_kitchen
'802': restaurant_patio
'803': restroom_indoor
'804': restroom_outdoor
'805': retaining_wall
'806': revolving_door
'807': rice_paddy
'808': riding_arena
'809': rift_valley
'810': river
'811': road
'812': road_cut
'813': road_indoor
'814': road_outdoor
'815': rock_arch
'816': rock_garden
'817': rodeo
'818': roller_skating_rink_indoor
'819': roller_skating_rink_outdoor
'820': rolling_mill
'821': roof
'822': roof_garden
'823': room
'824': root_cellar
'825': rope_bridge
'826': rotisserie
'827': roundabout
'828': roundhouse
'829': rubble
'830': ruin
'831': runway
'832': sacristy
'833': safari_park
'834': salon
'835': saloon
'836': salt_plain
'837': sanatorium
'838': sand
'839': sand_trap
'840': sandbar
'841': sandbox
'842': sauna
'843': savanna
'844': sawmill
'845': schoolhouse
'846': schoolyard
'847': science_laboratory
'848': science_museum
'849': scriptorium
'850': scrubland
'851': scullery
'852': sea_cliff
'853': seaside
'854': seawall
'855': security_check_point
'856': semidesert
'857': server_room
'858': sewer
'859': sewing_room
'860': shed
'861': shelter
'862': shelter_deck
'863': shelter_tent
'864': shipping_room
'865': shipyard_outdoor
'866': shoe_shop
'867': shop
'868': shopfront
'869': shopping_mall_indoor
'870': shopping_mall_outdoor
'871': shore
'872': shower
'873': shower_room
'874': shrine
'875': shrubbery
'876': sidewalk
'877': signal_box
'878': sinkhole
'879': ski_jump
'880': ski_lodge
'881': ski_resort
'882': ski_slope
'883': sky
'884': skyscraper
'885': skywalk_indoor
'886': skywalk_outdoor
'887': slum
'888': snack_bar
'889': snowbank
'890': snowfield
'891': soccer
'892': south_asia
'893': spillway
'894': sporting_goods_store
'895': squash_court
'896': stable
'897': stadium_outdoor
'898': stage_indoor
'899': stage_outdoor
'900': stage_set
'901': staircase
'902': stall
'903': starting_gate
'904': stateroom
'905': station
'906': steam_plant_outdoor
'907': steel_mill_indoor
'908': steel_mill_outdoor
'909': stone_circle
'910': storage_room
'911': store
'912': storm_cellar
'913': street
'914': streetcar_track
'915': strip_mall
'916': strip_mine
'917': student_center
'918': student_residence
'919': study_hall
'920': submarine_interior
'921': subway_interior
'922': sugar_refinery
'923': sun_deck
'924': sunroom
'925': supermarket
'926': supply_chamber
'927': sushi_bar
'928': swamp
'929': swimming_hole
'930': swimming_pool_indoor
'931': swimming_pool_outdoor
'932': synagogue_indoor
'933': synagogue_outdoor
'934': t-bar_lift
'935': tannery
'936': taxistand
'937': taxiway
'938': tea_garden
'939': teahouse
'940': tearoom
'941': teashop
'942': television_room
'943': television_studio
'944': tennis_court_indoor
'945': tennis_court_outdoor
'946': tent_outdoor
'947': terrace_farm
'948': theater_outdoor
'949': threshing_floor
'950': thriftshop
'951': throne_room
'952': ticket_booth
'953': ticket_window_indoor
'954': tidal_basin
'955': tidal_river
'956': tiltyard
'957': tobacco_shop_indoor
'958': toll_plaza
'959': tollbooth
'960': tollgate
'961': tomb
'962': topiary_garden
'963': tower
'964': town_house
'965': toyshop
'966': track_outdoor
'967': tract_housing
'968': trading_floor
'969': traffic_island
'970': trailer_park
'971': train_interior
'972': train_railway
'973': train_station_outdoor
'974': tree_farm
'975': tree_house
'976': trellis
'977': trench
'978': trestle_bridge
'979': truck_stop
'980': tundra
'981': turkish_bath
'982': upper_balcony
'983': urban
'984': utility_room
'985': valley
'986': van_interior
'987': vat
'988': vegetable_garden
'989': vegetation
'990': vehicle
'991': velodrome_indoor
'992': velodrome_outdoor
'993': ventilation_shaft
'994': veranda
'995': vestibule
'996': vestry
'997': veterinarians_office
'998': viaduct
'999': videostore
'1000': village
'1001': vinery
'1002': vineyard
'1003': volcano
'1004': volleyball_court_indoor
'1005': volleyball_court_outdoor
'1006': voting_booth
'1007': waiting_room
'1008': walk_in_freezer
'1009': walkway
'1010': war_room
'1011': warehouse_indoor
'1012': warehouse_outdoor
'1013': washhouse_indoor
'1014': washhouse_outdoor
'1015': washroom
'1016': watchtower
'1017': water
'1018': water_fountain
'1019': water_gate
'1020': water_mill
'1021': water_park
'1022': water_tower
'1023': water_treatment_plant_indoor
'1024': water_treatment_plant_outdoor
'1025': watering_hole
'1026': waterscape
'1027': waterway
'1028': wave
'1029': weighbridge
'1030': western
'1031': wet_bar
'1032': wetland
'1033': wharf
'1034': wheat_field
'1035': whispering_gallery
'1036': widows_walk_indoor
'1037': widows_walk_interior
'1038': wild
'1039': wind_farm
'1040': windmill
'1041': window_seat
'1042': windstorm
'1043': winery
'1044': witness_stand
'1045': woodland
'1046': workroom
'1047': workshop
'1048': wrestling_ring_indoor
'1049': wrestling_ring_outdoor
'1050': yard
'1051': youth_hostel
'1052': zen_garden
'1053': ziggurat
'1054': zoo
splits:
- name: train
num_bytes: 1097055005.51
num_examples: 20210
- name: val
num_bytes: 90418264.0
num_examples: 2000
download_size: 966605341
dataset_size: 1187473269.51
---
# Dataset Card for "ADE20k_Segementation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlabonne/CodeLlama-2-20k | 2023-07-30T10:45:33.000Z | [
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"code",
"region:us"
] | mlabonne | null | null | null | 9 | 95 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9551210
num_examples: 20022
download_size: 3551225
dataset_size: 9551210
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
tags:
- code
---
# CodeLlama-2-20k: A Llama 2 Version of CodeAlpaca
This dataset is the [`sahil2801/CodeAlpaca-20k`](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) dataset with the Llama 2 prompt format [described here](https://huggingface.co/blog/llama2#how-to-prompt-llama-2).
Here is the code I used to format it:
``` python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset('sahil2801/CodeAlpaca-20k')
# Define a function to merge the three columns into one
def merge_columns(example):
if example['input']:
merged = f"<s>[INST] <<SYS>>\nBelow is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n<</SYS>>\n\n{example['instruction']} Input: {example['input']} [/INST] {example['output']} </s>"
else:
merged = f"<s>[INST] <<SYS>>\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n<</SYS>>\n\n{example['instruction']} [/INST] {example['output']} </s>"
return {"text": merged}
# Apply the function to all elements in the dataset
dataset = dataset.map(merge_columns, remove_columns=['instruction', 'input', 'output'])
``` |
maheboob/guanaco-llama-2-chat | 2023-08-24T11:54:39.000Z | [
"region:us"
] | maheboob | null | null | null | 0 | 95 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1655208
num_examples: 1000
download_size: 966969
dataset_size: 1655208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama-2-chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
inseq/disc_eval_mt | 2023-08-30T17:02:10.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:fr",
"license:cc-by-sa-4.0",
"contextual-mt",
"document-mt",
"anaphora",
"lexical-choice",
"region:us"
] | inseq | The test sets comprise hand-crafted examples that are inspired by similar examples in the parallel corpus OpenSubtitles2016 (in terms of vocabulary usage, style and syntactic formulation)
for the evaluation of discourse in English-to-French machine translation. | @inproceedings{bawden-etal-2018-evaluating,
title = "Evaluating Discourse Phenomena in Neural Machine Translation",
author = "Bawden, Rachel and Sennrich, Rico and Birch, Alexandra and Haddow, Barry",
booktitle = {{Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)}},
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N18-1118",
doi = "10.18653/v1/N18-1118",
pages = "1304--1313"
} | null | 0 | 95 | ---
annotations_creators:
- expert-generated
language:
- en
- fr
license: cc-by-sa-4.0
language_creators:
- expert-generated
multilinguality:
- translation
pretty_name: DiscEvalMT
size_categories:
- n<1K
source_datasets:
- original
tags:
- contextual-mt
- document-mt
- anaphora
- lexical-choice
task_categories:
- translation
task_ids: []
---
# Dataset Card for DiscEvalMT
## Table of Contents
- [Dataset Card for DiscEvalMT](#dataset-card-for-discevalmt)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Machine Translation](#machine-translation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Additional Preprocessing](#additional-preprocessing)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/rbawden/discourse-mt-test-sets)
- **Paper:** [NAACL 2018](https://www.aclweb.org/anthology/N18-1118)
- **Point of Contact:** [Rachel Bawden](mailto:rachel.bawden@inria.fr)
### Dataset Summary
The DiscEvalMT dataset contains English-to-French translation examples for evaluating context-aware handling of two discourse phenomena: pronoun anaphora resolution and lexical choice (disambiguation and cohesion). This version of the DiscEvalMT dataset contains further annotations of ambiguous spans and supporting context in the dataset examples to align it with the highlighting scheme of [SCAT](https://huggingface.co/inseq), enabling granular evaluations of context usage in context-aware NMT models.
**Disclaimer**: *The DiscEvalMT corpus was released in the NAACL 2018 paper ["Evaluating Discourse Phenomena in Neural Machine Translation"](https://www.aclweb.org/anthology/N18-1118) by Bawden et al. (2018), and an original version of the corpus is hosted on [Github](https://github.com/rbawden/discourse-mt-test-sets) with CC-BY-SA 4.0 license.*
### Supported Tasks and Leaderboards
#### Machine Translation
Refer to the [original paper](https://www.aclweb.org/anthology/N18-1118) for additional details on the evaluation of discourse-level phenomena using DiscEvalMT.
### Languages
The dataset contains handcrafted English-to-French translation examples containing either anaphoric pronouns or lexical choice items. Examples were created using existing [OpenSubtitles 2016](https://aclanthology.org/L16-1147/) sentences as reference for lexicon and syntactic structure.
## Dataset Structure
### Data Instances
The dataset contains two configurations (`anaphora` and `lexical-choice`), each consisting only of a test set of 200 examples. Dataset examples have the following format:
```json
{
"id": 0,
"context_en": "The buildings will be finished next week.",
"en": "Soon they will be full of new residents.",
"context_fr": "Les bâtiments seront terminés la semaine prochaine.",
"fr": "Ils seront bientôt pleins de nouveaux résidents.",
"contrast_fr": "Elles seront bientôt pleines de nouveaux résidents.",
"context_en_with_tags": "The <hon>buildings<hoff> will be finished next week.",
"en_with_tags": "Soon <p>they</p> will be full of new residents.",
"context_fr_with_tags": "Les <hon>bâtiments<hoff> seront terminés la semaine prochaine.",
"fr_with_tags": "<p>Ils</p> seront bientôt pleins de nouveaux résidents.",
"contrast_fr_with_tags": "<p>Elles</p> seront bientôt pleines de nouveaux résidents.",
"type": "m.pl"
}
```
In every example, the context-dependent word of interest and its translation are surrounded by `<p>...</p>` tags. These are guaranteed to be found in the `en_with_tags`, `fr_with_tags` and `contrast_fr_with_tags` fields.
Any span surrounded by `<hon>...<hoff>` tags was identified by human annotators as supporting context necessary to correctly translate the pronoun of interest. These spans are found only in the `context_en_with_tags` and `context_fr_with_tags` fields.
In the example above, the translation of the pronoun `they` (field `en`) is ambiguous, and the correct translation to the masculine French pronoun `Ils` (in field `fr`) is only possible thanks to the supporting masculine noun `bâtiments` in the field `context_fr`.
Fields with the `_with_tags` suffix contain tags around pronouns of interest and supporting context, while their counterparts without the suffix contain the same text without tags, to facilitate direct usage with machine translation models.
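For convenience, the tagged spans can be recovered with simple regular expressions. The helpers below are our own illustration and are not part of the released corpus:

```python
import re

def marked_word(text):
    # Return the context-dependent word wrapped in <p>...</p> tags, or None.
    match = re.search(r"<p>(.*?)</p>", text)
    return match.group(1) if match else None

def supporting_spans(text):
    # Return all supporting-context spans wrapped in <hon>...<hoff> tags.
    return re.findall(r"<hon>(.*?)<hoff>", text)

def strip_tags(text):
    # Drop all annotation tags, recovering the corresponding untagged field.
    return re.sub(r"</?p>|<hon>|<hoff>", "", text)

print(marked_word("Soon <p>they</p> will be full of new residents."))            # they
print(supporting_spans("The <hon>buildings<hoff> will be finished next week."))  # ['buildings']
print(strip_tags("<p>Ils</p> seront bientôt pleins de nouveaux résidents."))
```

Applying `strip_tags` to a `*_with_tags` field should reproduce the corresponding field without the suffix.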
### Dataset Creation
The dataset was created manually by the original authors, with context usage annotations added by the authors of [Quantifying the Plausibility of Context Reliance in Neural Machine Translation](tbd) for plausibility analysis purposes.
Please refer to the original article [Evaluating Discourse Phenomena in Neural Machine Translation](https://www.aclweb.org/anthology/N18-1118) for additional information on dataset creation.
### Additional Preprocessing
The dataset presents minor adjustments compared to the original DiscEvalMT corpus.
## Additional Information
### Dataset Curators
The original authors of DiscEvalMT are the curators of the original released dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
The dataset is released under the original CC-BY-SA 4.0 license.
### Citation Information
Please cite the authors if you use this corpus in your work.
```bibtex
@inproceedings{bawden-etal-2018-evaluating,
title = "Evaluating Discourse Phenomena in Neural Machine Translation",
author = "Bawden, Rachel and Sennrich, Rico and Birch, Alexandra and Haddow, Barry",
booktitle = {{Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)}},
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N18-1118",
doi = "10.18653/v1/N18-1118",
pages = "1304--1313"
}
```
|
vlsp-2023-vllm/ai2_arc_vi | 2023-10-08T09:54:04.000Z | [
"region:us"
] | vlsp-2023-vllm | null | null | null | 0 | 95 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 462541
num_examples: 1118
- name: validation
num_bytes: 128948
num_examples: 298
- name: test
num_bytes: 491761
num_examples: 1170
download_size: 511280
dataset_size: 1083250
---
Reference: https://huggingface.co/datasets/ai2_arc
# ARC-Challenge (Vietnamese translation version)
## Dataset Summary
A dataset of grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering.
## Install
To install `lm-eval` from the github repository main branch, run:
```bash
git clone https://github.com/hieunguyen1053/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
## Basic Usage
> **Note**: When reporting results from eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the [Task Versioning](#task-versioning) section for more info.
### Hugging Face `transformers`
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. vlsp-2023-vllm/hoa-1b4) on `ai2_arc_vi` you can use the following command:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4 \
--tasks ai2_arc_vi \
--num_fewshot 25 \
--batch_size auto \
--device cuda:0
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4,revision=step100000,dtype="float" \
--tasks ai2_arc_vi \
--num_fewshot 25 \
--batch_size auto \
--device cuda:0
```
To evaluate models that are loaded via `AutoSeq2SeqLM` in Huggingface, you instead use `hf-seq2seq`. *To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`*
> **Warning**: Choosing the wrong model may result in erroneous outputs despite not erroring. |
transformersbook/codeparrot | 2022-02-05T16:15:40.000Z | [
"python",
"code",
"region:us"
] | transformersbook | null | null | null | 34 | 94 | ---
tags:
- python
- code
---
# CodeParrot 🦜 Dataset
## What is it?
This is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
## Creation
It was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and is about 180 GB (50 GB compressed) in size. The SQL query to create the dataset is the following:
```sql
SELECT
f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
`bigquery-public-data.github_repos.files` AS f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id
JOIN
`bigquery-public-data.github_repos.licenses` AS l
ON
f.repo_name = l.repo_name
WHERE
NOT c.binary
AND ((f.path LIKE '%.py')
AND (c.size BETWEEN 1024 AND 1048575))
```
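The files returned by this query include many exact duplicates. As a minimal illustration (our own sketch, not the pipeline used to build codeparrot-clean), byte-identical files can be dropped by hashing their contents:

```python
import hashlib

def deduplicate(files):
    # Keep only the first occurrence of each distinct file content.
    seen, unique = set(), []
    for f in files:
        digest = hashlib.md5(f["content"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(f)
    return unique

files = [
    {"path": "a.py", "content": "print('hi')"},
    {"path": "b.py", "content": "print('hi')"},  # exact duplicate of a.py
    {"path": "c.py", "content": "print('bye')"},
]
print([f["path"] for f in deduplicate(files)])  # ['a.py', 'c.py']
```

Note that hashing only catches byte-identical files; near-duplicate detection (e.g. MinHash) is needed for a more thorough cleanup.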
## Duplication
Note that about 70% of the dataset is duplicated. If you use the dataset, make sure to handle the duplicates appropriately. See [codeparrot-clean](https://huggingface.co/datasets/lvwerra/codeparrot-clean) for a deduplicated version of this dataset. |
proteinea/solubility | 2023-01-16T14:43:54.000Z | [
"license:mit",
"doi:10.57967/hf/1103",
"region:us"
] | proteinea | null | null | null | 0 | 94 | ---
license: mit
---
|
proteinea/deeploc | 2023-01-16T14:59:58.000Z | [
"doi:10.57967/hf/1105",
"region:us"
] | proteinea | null | null | null | 0 | 94 | Entry not found |
intfloat/wikipedia | 2023-04-23T08:36:49.000Z | [
"size_categories:100M<n<1B",
"region:us"
] | intfloat | \
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | \
@ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | null | 1 | 94 | ---
size_categories:
- 100M<n<1B
---
### Dataset Summary
This dataset is based on [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia).
The main difference is that we add a `Section::::` prefix to each section title to preserve the section structure information.
We also use `:` to join the hierarchical section titles.
Following is an example.
```text
Alison Jane Horner (born June 1966) is a British businesswoman, and, until it was sold in 2020, was the CEO of the Asian arm of the Tesco supermarket chain.
Section::::Early life
Alison Jane Horner was born in June 1966. She earned a bachelor's degree in chemistry from the University of Manchester, and an MBA from Manchester Business School.
Section::::Career
Section::::Career:Tesco
Horner joined Tesco as a personnel manager in 1999 and was on Tesco's executive committee from 2011.
In October 2013, Horner became a founding member of The Guardian's Women in Leadership network. in 2015, she became a member of Alliance Manchester Business School's advisory board.
Horner was Tesco' chief people officer (chief human resources officer) of Tesco until May 2018, when she was promoted to be chief executive of Tesco's Asia business in Malaysia and Thailand, until it was sold in late 2020. She was set to step down in February 2021 after 22 years with Tesco.
Section::::Career:Carillion non-executive role
Horner was a non-executive director of Carillion from December 2013, chairing the remuneration committee from June 2014. As of 30 December 2016 her basic compensation was £61,000. After the company went into liquidation in January 2018, Horner was one of the non-executive directors who gave evidence to the House of Commons Business and Work and Pensions select committees on 6 February 2018. In the final report of the Parliamentary Inquiry, published on 16 May 2018, Horner was criticised by MPs; the report concluded:
"... Alison Horner presided over growing salaries and bonuses at the top of the company as its performance faltered. In her evidence to us, she sought to justify her approach by pointing to industry standards, the guidance of advisors, and conversations with shareholders. She failed to demonstrate to us any sense of challenge to the advice she was given, any concern about the views of stakeholders, or any regret at the largesse at the top of Carillion. Ms Horner continues to hold the role of Chief People Officer of Tesco, where she has responsibilities to more than half a million employees. We hope that, in that post, she will reflect on the lessons learned from Carillion and her role in its collapse."
In January 2021, the Insolvency Service said it would seek to ban eight former Carillion directors, including Horner, from holding senior boardroom positions.
Section::::References
Living people
1966 births
British businesspeople in retailing
Tesco people
Alumni of the University of Manchester
Alumni of the Manchester Business School
Carillion people
```
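As a small illustration (our own helper, not part of the dataset tooling), the `Section::::` markers make it straightforward to split an article into titled sections:

```python
def split_sections(text):
    # Split article text into (section_title, section_body) pairs.
    # Text before the first marker is returned under the empty title (the lead).
    sections = [("", [])]
    for line in text.splitlines():
        if line.startswith("Section::::"):
            sections.append((line[len("Section::::"):], []))
        else:
            sections[-1][1].append(line)
    return [(title, "\n".join(lines).strip()) for title, lines in sections]

article = "Lead paragraph.\nSection::::Career\nShe joined in 1999.\nSection::::Career:Tesco\nDetails."
print(split_sections(article))
```

The hierarchical titles (joined with `:`) are kept as-is, so callers can split them further if needed.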
### Data Fields
- `title`: a `string` feature.
- `text`: a `string` feature.
### How to use this dataset
To load this dataset you need to install these first:
```shell
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
Then, you can load any subset of Wikipedia per language and per date this way:
```python
from datasets import load_dataset
dataset = load_dataset("intfloat/wikipedia", language="en", date="20230401")
```
For more information,
please check out [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia).
## Supported Languages
```
aa
ab
ace
ady
af
ak
als
am
an
ang
ar
arc
arz
as
ast
atj
av
ay
az
azb
ba
bar
bat-smg
bcl
be
be-x-old
bg
bh
bi
bjn
bm
bn
bo
bpy
br
bs
bug
bxr
ca
cbk-zam
cdo
ce
ceb
ch
cho
chr
chy
ckb
co
cr
crh
cs
csb
cu
cv
cy
da
de
din
diq
dsb
dty
dv
dz
ee
el
eml
en
eo
es
et
eu
ext
fa
ff
fi
fiu-vro
fj
fo
fr
frp
frr
fur
fy
ga
gag
gan
gd
gl
glk
gn
gom
gor
got
gu
gv
ha
hak
haw
he
hi
hif
ho
hr
hsb
ht
hu
hy
ia
id
ie
ig
ii
ik
ilo
inh
io
is
it
iu
ja
jam
jbo
jv
ka
kaa
kab
kbd
kbp
kg
ki
kj
kk
kl
km
kn
ko
koi
krc
ks
ksh
ku
kv
kw
ky
la
lad
lb
lbe
lez
lfn
lg
li
lij
lmo
ln
lo
lrc
lt
ltg
lv
mai
map-bms
mdf
mg
mh
mhr
mi
min
mk
ml
mn
mr
mrj
ms
mt
mus
mwl
my
myv
mzn
na
nah
nap
nds
nds-nl
ne
new
ng
nl
nn
no
nov
nrm
nso
nv
ny
oc
olo
om
or
os
pa
pag
pam
pap
pcd
pdc
pfl
pi
pih
pl
pms
pnb
pnt
ps
pt
qu
rm
rmy
rn
ro
roa-rup
roa-tara
ru
rue
rw
sa
sah
sat
sc
scn
sco
sd
se
sg
sh
si
simple
sk
sl
sm
sn
so
sq
sr
srn
ss
st
stq
su
sv
sw
szl
ta
tcy
te
tet
tg
th
ti
tk
tl
tn
to
tpi
tr
ts
tt
tum
tw
ty
tyv
udm
ug
uk
ur
uz
ve
vec
vep
vi
vls
vo
wa
war
wo
wuu
xal
xh
xmf
yi
yo
za
zea
zh
zh-classical
zh-min-nan
zh-yue
zu
``` |
shahules786/Multi-chapter-summaries | 2023-08-03T19:33:17.000Z | [
"region:us"
] | shahules786 | null | null | null | 13 | 94 | ## Multi-chapter summaries
The dataset is derived from [BOOKSUM](https://github.com/salesforce/booksum)
The idea here is to make use of the BOOKSUM dataset to fine-tune models with larger context lengths (8k+), but very few samples in BOOKSUM have such length.
**Enter multi-chapter summaries!**
The context here comprises multiple chapters taken from the same book appended together to form a larger context length.
The prompt requests a summary of one of the chapters, and the reference summary of the corresponding chapter is provided in the `summary` column.
Approximate token length of contexts of 8k version

|
roszcz/pfa-sustain-quantized-7-7-7 | 2023-09-15T10:37:01.000Z | [
"region:us"
] | roszcz | null | null | null | 0 | 94 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart
sequence: float32
length: 128
- name: duration
sequence: float32
length: 128
- name: velocity
sequence: int16
length: 128
- name: dstart_bin
sequence: int8
length: 128
- name: duration_bin
sequence: int8
length: 128
- name: velocity_bin
sequence: int8
length: 128
splits:
- name: train
num_bytes: 430530730
num_examples: 217628
- name: validation
num_bytes: 10502399
num_examples: 5312
- name: test
num_bytes: 11577313
num_examples: 5855
download_size: 0
dataset_size: 452610442
---
# Dataset Card for "pfa-sustain-quantized-7-7-7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_100000_spambase_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T19:42:03.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 94 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 340594567
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_spambase_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T02:06:34.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 94 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 832579062
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gtfintechlab/fomc-example-dataset | 2023-09-12T21:18:49.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"finance",
"region:us"
] | gtfintechlab | null | null | null | 0 | 94 | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
language:
- en
tags:
- finance
size_categories:
- 1K<n<10K
---
## Citation and Contact Information
### Cite
Please cite our paper if you use any code, data, or models.
```c
@inproceedings{shah-etal-2023-trillion,
title = "Trillion Dollar Words: A New Financial Dataset, Task {\&} Market Analysis",
author = "Shah, Agam and
Paturi, Suvan and
Chava, Sudheer",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.368",
doi = "10.18653/v1/2023.acl-long.368",
pages = "6664--6679",
abstract = "Monetary policy pronouncements by Federal Open Market Committee (FOMC) are a major driver of financial market returns. We construct the largest tokenized and annotated dataset of FOMC speeches, meeting minutes, and press conference transcripts in order to understand how monetary policy influences financial markets. In this study, we develop a novel task of hawkish-dovish classification and benchmark various pre-trained language models on the proposed dataset. Using the best-performing model (RoBERTa-large), we construct a measure of monetary policy stance for the FOMC document release days. To evaluate the constructed measure, we study its impact on the treasury market, stock market, and macroeconomic indicators. Our dataset, models, and code are publicly available on Huggingface and GitHub under CC BY-NC 4.0 license.",
}
```
### Contact Information
Please contact Agam Shah (ashah482[at]gatech[dot]edu) for any issues and questions.
GitHub: [@shahagam4](https://github.com/shahagam4)
Website: [https://shahagam4.github.io/](https://shahagam4.github.io/) |
johannes-garstenauer/structs_token_size_4_reduced_labelled_eval_balanced_factor_3 | 2023-09-14T08:59:42.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 1 | 94 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 65294030.35619895
num_examples: 269087
download_size: 24102593
dataset_size: 65294030.35619895
---
# Dataset Card for "structs_token_size_4_reduced_labelled_eval_balanced_factor_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mestopholis/gov-test | 2023-09-24T21:00:15.000Z | [
"region:us"
] | Mestopholis | null | null | null | 0 | 94 | This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
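Selecting the highest-rated path through a conversation tree can be sketched as follows (the node structure and `rank` field here are illustrative assumptions; in oasst1, a lower rank generally means a better-rated reply — see the original dataset for the real schema):

```python
def best_path(node):
    # Greedily follow the best-ranked reply at every level of the tree.
    path = [node["text"]]
    while node.get("replies"):
        node = min(node["replies"], key=lambda r: r.get("rank", float("inf")))
        path.append(node["text"])
    return path

tree = {
    "text": "prompt",
    "replies": [
        {"text": "good answer", "rank": 0,
         "replies": [{"text": "follow-up", "rank": 0, "replies": []}]},
        {"text": "weaker answer", "rank": 1, "replies": []},
    ],
}
print(best_path(tree))  # ['prompt', 'good answer', 'follow-up']
```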
This dataset was used to train Guanaco with QLoRA.
For further information, please see the original dataset.
License: Apache 2.0 |
SophieTr/reddit_clean | 2022-08-13T20:26:31.000Z | [
"region:us"
] | SophieTr | null | null | null | 3 | 93 | Entry not found |
Abdelrahman-Rezk/Arabic_Dialect_Identification | 2022-05-17T12:02:29.000Z | [
"arxiv:2005.06557",
"region:us"
] | Abdelrahman-Rezk | null | null | null | 0 | 93 | Arabic dialects, multi-class-Classification, Tweets.
# Dataset Card for Arabic_Dialect_Identification
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/Abdelrahmanrezk/dialect-prediction-with-transformers
- **Paper:** https://arxiv.org/pdf/2005.06557.pdf
- **Leaderboard:** Abdelrahmanrezk@acm.org
Aiman.Mahgoub@ul.ie
Conor.Ryan@ul.ie
- **Point of Contact:** Abdelrahmanrezk@acm.org
Aiman.Mahgoub@ul.ie
Conor.Ryan@ul.ie
### Dataset Summary
We present QADI, an automatically collected dataset of tweets belonging to a wide range of
country-level Arabic dialects covering 18 different countries in the Middle East and North
Africa region. Our method for building this dataset relies on applying multiple filters to identify
users who belong to different countries based on their account descriptions and to eliminate
tweets that are either written in Modern Standard Arabic or contain inappropriate language. The
resultant dataset contains 540k tweets from 2,525 users who are evenly distributed across 18 Arab countries.
### Supported Tasks and Leaderboards
- Multi-class-Classification: Using extrinsic evaluation, we are able to build effective country-level dialect identification on tweets with a macro-averaged F1-score of 51.5% across 18 classes.
See [Arabic-Dialect-Identification](https://github.com/Abdelrahmanrezk/Arabic-Dialect-Identification); this differs from the results reported in the paper. Using intrinsic evaluation, the authors show that the labels of a set of randomly selected tweets are 91.5% accurate. For extrinsic evaluation, they were able to build effective country-level dialect identification on tweets with a macro-averaged F1-score of 60.6% across 18 classes ([paper](https://arxiv.org/pdf/2005.06557.pdf)). In future work, we aim to fine-tune models on this data to see how the results compare.
### Languages
Arabic
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
  "id": [1159906099585327104, 950123809608171648, 1091295506960142336],
  "label": [10, 14, 2],
  "text": [
    "ايه الخيبة و الهرتلة قدام الجون دول؟؟ 🤪😲\nالعيال دي تتعلق في الفلكة يا معلم كلوب",
    "@FIA_WIS تذكرت ما اسمي عائشة انا اسمي خولة",
    "@showqiy @3nood_mh لا والله نروح نشجع قطر و نفرح معهم وش رايك بعد"
  ]
}
```
### Data Fields
```python
{
    'id': Value(dtype='int64', id=None),
    'label': ClassLabel(num_classes=18, names=['OM', 'SD', 'SA', 'KW', 'QA', 'LB', 'JO', 'SY', 'IQ', 'MA', 'EG', 'PL', 'YE', 'BH', 'DZ', 'AE', 'TN', 'LY'], id=None),
    'text': Value(dtype='string', id=None)
}
```
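The integer `label` values decode to country codes following the `ClassLabel` name order above. A minimal sketch in plain Python (no `datasets` dependency needed):

```python
# Map integer class ids from this dataset to country codes, mirroring
# the ClassLabel definition shown above.
LABEL_NAMES = [
    "OM", "SD", "SA", "KW", "QA", "LB", "JO", "SY", "IQ",
    "MA", "EG", "PL", "YE", "BH", "DZ", "AE", "TN", "LY",
]

def id_to_country(label_id: int) -> str:
    """Return the country code for an integer class label."""
    return LABEL_NAMES[label_id]

# The instance shown above has labels [10, 14, 2]:
print([id_to_country(i) for i in [10, 14, 2]])  # ['EG', 'DZ', 'SA']
```

When loading with the `datasets` library, the same mapping is exposed via the feature's `int2str` method.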
### Data Splits
This dataset is split into train, validation and test splits. The split sizes are as follows:
|Split name | Number of samples |
|------------- | ---------- |
|train | 440052 |
|validation | 9164 |
|test | 8981 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
{aabdelali,hmubarak,ysamih,sahassan2,kdarwish}@hbku.edu.qa
### Licensing Information
[Needs More Information]
### Citation Information
@misc{abdelali2020arabic,
  author = {Abdelali, Ahmed and Mubarak, Hamdy and Samih, Younes and Hassan, Sabit and Darwish, Kareem},
  title = {Arabic Dialect Identification in the Wild},
  year = {2020},
  month = {05},
  eprint = {2005.06557},
  archivePrefix = {arXiv}
} |
Short-Answer-Feedback/saf_communication_networks_english | 2023-03-31T11:46:04.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"short answer feedback",
"communication networks",
"region:us"
] | Short-Answer-Feedback | null | null | null | 6 | 93 | ---
pretty_name: SAF - Communication Networks - English
annotations_creators:
- expert-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- short answer feedback
- communication networks
task_categories:
- text2text-generation
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: reference_answer
dtype: string
- name: provided_answer
dtype: string
- name: answer_feedback
dtype: string
- name: verification_feedback
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 2363828
num_examples: 1700
- name: validation
num_bytes: 592869
num_examples: 427
- name: test_unseen_answers
num_bytes: 515669
num_examples: 375
- name: test_unseen_questions
num_bytes: 777945
num_examples: 479
download_size: 941169
dataset_size: 4250311
license: cc-by-4.0
---
# Dataset Card for "saf_communication_networks_english"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotation process](#annotation-process)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022)
### Dataset Summary
Short Answer Feedback (SAF) dataset is a short answer dataset introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 31 English questions covering a range of college-level communication networks topics - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the [saf_micro_job_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_micro_job_german) dataset to examine the German subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at [saf_legal_domain_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_legal_domain_german).
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
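One way to cast an example into a text2text pair is to concatenate the question, reference answer and provided answer as the source and emit the feedback as the target. The template below is our own choice, not one prescribed by the dataset:

```python
# Sketch of one possible source/target formatting for feedback generation
# with a Text2Text model; the template itself is an assumption.
def to_text2text(example: dict) -> tuple:
    source = (
        f"question: {example['question']} "
        f"reference: {example['reference_answer']} "
        f"answer: {example['provided_answer']}"
    )
    target = f"{example['verification_feedback']}: {example['answer_feedback']}"
    return source, target

# The instance from the Data Instances section:
example = {
    "question": "Is this a question?",
    "reference_answer": "Yes, that is a question.",
    "provided_answer": "I'm certain this is a question.",
    "answer_feedback": "The response is correct.",
    "verification_feedback": "Correct",
}
src, tgt = to_text2text(example)
print(tgt)  # Correct: The response is correct.
```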
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in English.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Is this a question?",
"reference_answer": "Yes, that is a question.",
"provided_answer": "I'm certain this is a question.",
"answer_feedback": "The response is correct.",
"verification_feedback": "Correct",
"score": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = maximum points achievable), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `score`: a `float64` feature representing the score given to the provided answer. For most questions it ranges from 0 to 1.
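The `verification_feedback` label is a deterministic function of the score. A minimal sketch of that rule (assuming a maximum of 1.0 points, which holds for most questions):

```python
# Derive the verification_feedback label from a score, following the
# rule described above.
def verification_feedback(score: float, max_points: float = 1.0) -> str:
    if score == max_points:
        return "Correct"
    if score == 0:
        return "Incorrect"
    return "Partially correct"

print(verification_feedback(1.0))  # Correct
print(verification_feedback(0.0))  # Incorrect
print(verification_feedback(0.5))  # Partially correct
```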
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1700| 427| 375| 479|
## Dataset Creation
### Annotation Process
Two graduate students who had completed the communication networks course were selected to evaluate the answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed-upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and resolving possible disagreements.
## Additional Information
### Citation Information
```
@inproceedings{filighera-etal-2022-answer,
title = "Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset",
author = "Filighera, Anna and
Parihar, Siddharth and
Steuer, Tim and
Meuser, Tobias and
Ochs, Sebastian",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.587",
doi = "10.18653/v1/2022.acl-long.587",
pages = "8577--8591",
}
```
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. |
dim/oasst_en | 2023-08-13T14:36:10.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 93 | ---
license: mit
dataset_info:
features:
- name: conversation_ids
sequence: string
- name: conversation_text
sequence: string
- name: status
dtype: string
splits:
- name: train
num_bytes: 5716785
num_examples: 3141
download_size: 2174320
dataset_size: 5716785
---
|
Lakera/gandalf_ignore_instructions | 2023-10-02T09:26:29.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"prompt injection",
"region:us"
] | Lakera | null | null | null | 2 | 93 | ---
language:
- en
license: mit
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: text
dtype: string
- name: similarity
dtype: float64
splits:
- name: train
num_bytes: 66400
num_examples: 777
- name: validation
num_bytes: 9633
num_examples: 111
- name: test
num_bytes: 9747
num_examples: 112
download_size: 51515
dataset_size: 85780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
tags:
- prompt injection
---
# gandalf_ignore_instructions

This is a dataset of prompt injections from [Gandalf](https://gandalf.lakera.ai/) by [Lakera](https://www.lakera.ai/).
Note that we might update the dataset occasionally by cleaning the data or adding more samples.
## How the data was obtained
There are millions of prompts and many of them are not actual prompt injections (people ask Gandalf all kinds of things).
We used the following process to obtain relevant data:
- Start with all prompts submitted to Gandalf in July 2023.
- Use [OpenAI text embeddings](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) (`text-embedding-ada-002`) to compute the similarity between the prompts and the phrase "Ignore all previous instructions".
- Filter to those prompts that have similarity of at least 0.825.
- Use [Lakera Guard](https://platform.lakera.ai/docs)'s personally identifiable information detector to filter out prompts that potentially contain PII data.
- Remove near-duplicates from the data (prompts that differ only by a few letters) using an approximate algorithm. This helps reduce leakage between the data splits.
- Sample 1000 prompts.
- Split the data into train-val-test with an 80/10/10 ratio. Each sample is assigned independently so the size of the train split is not _exactly_ 80% and so on.
Note that there is a small amount of noise in the data since an automatic method was used to obtain it: a few of the samples might not be real prompt injections.
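The similarity filter in the pipeline above amounts to a cosine-similarity threshold over embeddings. A minimal sketch with toy vectors (real usage would embed the texts with `text-embedding-ada-002`; the vectors below are made up for illustration):

```python
import numpy as np

# Keep prompts whose embedding has cosine similarity >= 0.825 with the
# embedding of the reference phrase "Ignore all previous instructions".
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

reference = np.array([1.0, 0.0, 0.0])  # toy embedding of the reference phrase
prompts = {
    "ignore all previous instructions": np.array([0.9, 0.1, 0.0]),
    "what is the capital of France?": np.array([0.1, 0.9, 0.4]),
}

kept = [text for text, emb in prompts.items()
        if cosine_similarity(reference, emb) >= 0.825]
print(kept)  # ['ignore all previous instructions']
```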
## Citation
If you use this dataset in your research, please cite it as
```
@InProceedings{gandalf_ignore_instructions,
title = {gandalf_ignore_instructions},
author={Lakera AI (https://www.lakera.ai)},
year={2023}
}
```
## Licensing Information
gandalf_ignore_instructions is distributed under the [MIT License](https://opensource.org/license/mit/). |
SneakyInsect/ltafdb_preprocessed | 2023-09-28T11:47:31.000Z | [
"region:us"
] | SneakyInsect | null | null | null | 0 | 93 | ---
dataset_info:
features:
- name: record_id
dtype: string
- name: signal
dtype:
array2_d:
shape:
- 2
- 1000
dtype: float32
splits:
- name: train
num_bytes: 5676208388.003276
num_examples: 707906
- name: validation
num_bytes: 658761012.8742297
num_examples: 82154
- name: test
num_bytes: 685864741.5388951
num_examples: 85538
download_size: 2163597762
dataset_size: 7020834142.416401
---
# Dataset Card for "ltafdb_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mnoukhov/openai_summarize_comparisons_relabel_pythia7b | 2023-10-04T19:20:46.000Z | [
"region:us"
] | mnoukhov | null | null | null | 0 | 93 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 157425966
num_examples: 92534
- name: test
num_bytes: 8367345
num_examples: 5000
download_size: 21804922
dataset_size: 165793311
---
# Dataset Card for "openai_summarize_comparisons_relabel_pythia7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/8a14fb4c | 2023-10-06T19:06:51.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 93 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 174
num_examples: 10
download_size: 1325
dataset_size: 174
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "8a14fb4c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ted_hrlr | 2023-04-05T13:41:24.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:extended|ted_talks_iwslt",
"language:az",
"language:be",
"language:en",
"language:es",
"language:fr",
"language:gl",
"language:he",
"language:it",
"language:pt",
"language:ru",
"language:tr",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | Data sets derived from TED talk transcripts for comparing similar language pairs
where one is high resource and the other is low resource. | @inproceedings{Ye2018WordEmbeddings,
author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},
title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},
booktitle = {HLT-NAACL},
year = {2018},
} | null | 0 | 92 | ---
annotations_creators:
- crowdsourced
language:
- az
- be
- en
- es
- fr
- gl
- he
- it
- pt
- ru
- tr
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: TEDHrlr
size_categories:
- 1M<n<10M
source_datasets:
- extended|ted_talks_iwslt
task_categories:
- translation
task_ids: []
paperswithcode_id: null
dataset_info:
- config_name: az_to_en
features:
- name: translation
dtype:
translation:
languages:
- az
- en
splits:
- name: test
num_bytes: 186540
num_examples: 904
- name: train
num_bytes: 1226853
num_examples: 5947
- name: validation
num_bytes: 122709
num_examples: 672
download_size: 131005909
dataset_size: 1536102
- config_name: aztr_to_en
features:
- name: translation
dtype:
translation:
languages:
- az_tr
- en
splits:
- name: test
num_bytes: 186540
num_examples: 904
- name: train
num_bytes: 39834469
num_examples: 188397
- name: validation
num_bytes: 122709
num_examples: 672
download_size: 131005909
dataset_size: 40143718
- config_name: be_to_en
features:
- name: translation
dtype:
translation:
languages:
- be
- en
splits:
- name: test
num_bytes: 186606
num_examples: 665
- name: train
num_bytes: 1176899
num_examples: 4510
- name: validation
num_bytes: 59328
num_examples: 249
download_size: 131005909
dataset_size: 1422833
- config_name: beru_to_en
features:
- name: translation
dtype:
translation:
languages:
- be_ru
- en
splits:
- name: test
num_bytes: 186606
num_examples: 665
- name: train
num_bytes: 59953616
num_examples: 212615
- name: validation
num_bytes: 59328
num_examples: 249
download_size: 131005909
dataset_size: 60199550
- config_name: es_to_pt
features:
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: test
num_bytes: 343640
num_examples: 1764
- name: train
num_bytes: 8611393
num_examples: 44939
- name: validation
num_bytes: 181535
num_examples: 1017
download_size: 131005909
dataset_size: 9136568
- config_name: fr_to_pt
features:
- name: translation
dtype:
translation:
languages:
- fr
- pt
splits:
- name: test
num_bytes: 311650
num_examples: 1495
- name: train
num_bytes: 8755387
num_examples: 43874
- name: validation
num_bytes: 212317
num_examples: 1132
download_size: 131005909
dataset_size: 9279354
- config_name: gl_to_en
features:
- name: translation
dtype:
translation:
languages:
- gl
- en
splits:
- name: test
num_bytes: 193213
num_examples: 1008
- name: train
num_bytes: 1961363
num_examples: 10018
- name: validation
num_bytes: 137929
num_examples: 683
download_size: 131005909
dataset_size: 2292505
- config_name: glpt_to_en
features:
- name: translation
dtype:
translation:
languages:
- gl_pt
- en
splits:
- name: test
num_bytes: 193213
num_examples: 1008
- name: train
num_bytes: 11734254
num_examples: 61803
- name: validation
num_bytes: 137929
num_examples: 683
download_size: 131005909
dataset_size: 12065396
- config_name: he_to_pt
features:
- name: translation
dtype:
translation:
languages:
- he
- pt
splits:
- name: test
num_bytes: 361378
num_examples: 1624
- name: train
num_bytes: 10627615
num_examples: 48512
- name: validation
num_bytes: 230725
num_examples: 1146
download_size: 131005909
dataset_size: 11219718
- config_name: it_to_pt
features:
- name: translation
dtype:
translation:
languages:
- it
- pt
splits:
- name: test
num_bytes: 324726
num_examples: 1670
- name: train
num_bytes: 8905825
num_examples: 46260
- name: validation
num_bytes: 210375
num_examples: 1163
download_size: 131005909
dataset_size: 9440926
- config_name: pt_to_en
features:
- name: translation
dtype:
translation:
languages:
- pt
- en
splits:
- name: test
num_bytes: 347803
num_examples: 1804
- name: train
num_bytes: 9772911
num_examples: 51786
- name: validation
num_bytes: 207960
num_examples: 1194
download_size: 131005909
dataset_size: 10328674
- config_name: ru_to_en
features:
- name: translation
dtype:
translation:
languages:
- ru
- en
splits:
- name: test
num_bytes: 1459576
num_examples: 5477
- name: train
num_bytes: 58778442
num_examples: 208107
- name: validation
num_bytes: 1318357
num_examples: 4806
download_size: 131005909
dataset_size: 61556375
- config_name: ru_to_pt
features:
- name: translation
dtype:
translation:
languages:
- ru
- pt
splits:
- name: test
num_bytes: 409062
num_examples: 1589
- name: train
num_bytes: 11882860
num_examples: 47279
- name: validation
num_bytes: 276866
num_examples: 1185
download_size: 131005909
dataset_size: 12568788
- config_name: tr_to_en
features:
- name: translation
dtype:
translation:
languages:
- tr
- en
splits:
- name: test
num_bytes: 1026406
num_examples: 5030
- name: train
num_bytes: 38607636
num_examples: 182451
- name: validation
num_bytes: 832358
num_examples: 4046
download_size: 131005909
dataset_size: 40466400
---
# Dataset Card for "ted_hrlr"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/neulab/word-embeddings-for-nmt
- **Paper:** [When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?](https://aclanthology.org/N18-2084/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.83 GB
- **Size of the generated dataset:** 281.66 MB
- **Total amount of disk used:** 2.12 GB
### Dataset Summary
Data sets derived from TED talk transcripts for comparing similar language pairs
where one is high resource and the other is low resource.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### az_to_en
- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 1.53 MB
- **Total amount of disk used:** 132.54 MB
An example of 'train' looks as follows.
```
{
"translation": {
"az": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
"en": "please raise your hand if something applies to you ."
}
}
```
#### aztr_to_en
- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 40.14 MB
- **Total amount of disk used:** 171.15 MB
An example of 'train' looks as follows.
```
{
"translation": {
"az_tr": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
"en": "please raise your hand if something applies to you ."
}
}
```
#### be_to_en
- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 1.43 MB
- **Total amount of disk used:** 132.42 MB
An example of 'train' looks as follows.
```
{
"translation": {
"be": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
"en": "please raise your hand if something applies to you ."
}
}
```
#### beru_to_en
- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 60.20 MB
- **Total amount of disk used:** 191.21 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"be_ru\": \"11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .\", \"en\": \"when i was..."
}
```
#### es_to_pt
- **Size of downloaded dataset files:** 131.01 MB
- **Size of the generated dataset:** 9.13 MB
- **Total amount of disk used:** 140.14 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"es\": \"11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .\", \"pt\": \"when i was 11..."
}
```
### Data Fields
The data fields are the same among all splits.
#### az_to_en
- `translation`: a multilingual `string` variable, with possible languages including `az`, `en`.
#### aztr_to_en
- `translation`: a multilingual `string` variable, with possible languages including `az_tr`, `en`.
#### be_to_en
- `translation`: a multilingual `string` variable, with possible languages including `be`, `en`.
#### beru_to_en
- `translation`: a multilingual `string` variable, with possible languages including `be_ru`, `en`.
#### es_to_pt
- `translation`: a multilingual `string` variable, with possible languages including `es`, `pt`.
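For training a translation model, the nested `translation` dicts are typically unpacked into (source, target) pairs. A minimal sketch using the `az_to_en` instance shown above (no dataset download required):

```python
# Unpack `translation` dicts into (source, target) pairs for an
# az -> en setup.
examples = [
    {"translation": {
        "az": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you .",
    }},
]

pairs = [(ex["translation"]["az"], ex["translation"]["en"]) for ex in examples]
print(len(pairs))  # 1
```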
### Data Splits
| name |train |validation|test|
|----------|-----:|---------:|---:|
|az_to_en | 5947| 672| 904|
|aztr_to_en|188397| 672| 904|
|be_to_en | 4510| 249| 665|
|beru_to_en|212615| 249| 665|
|es_to_pt | 44939| 1017|1764|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{qi-etal-2018-pre,
title = "When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?",
author = "Qi, Ye and
Sachan, Devendra and
Felix, Matthieu and
Padmanabhan, Sarguna and
Neubig, Graham",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2084",
doi = "10.18653/v1/N18-2084",
pages = "529--535",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
dreamerdeo/finqa | 2023-03-06T08:29:39.000Z | [
"region:us"
] | dreamerdeo | null | null | null | 0 | 92 | dataset_info:
features:
- name: id
dtype: string
- name: post_text
sequence: string
- name: pre_text
sequence: string
- name: question
dtype: string
- name: answers
dtype: string
- name: table
sequence:
sequence: string
splits:
- name: train
num_bytes: 26984130
num_examples: 6251
- name: validation
num_bytes: 3757103
num_examples: 883
- name: test
num_bytes: 4838430
num_examples: 1147
download_size: 21240722
dataset_size: 35579663
|
GATE-engine/mini_imagenet | 2023-06-06T11:44:26.000Z | [
"region:us"
] | GATE-engine | null | null | null | 1 | 92 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2533332667.0
num_examples: 38400
- name: validation
num_bytes: 623452894.0
num_examples: 9600
- name: test
num_bytes: 781497663.0
num_examples: 12000
download_size: 3938112512
dataset_size: 3938283224.0
---
# Dataset Card for "mini_imagenet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/physics_dataset_alpaca | 2023-07-27T18:43:43.000Z | [
"region:us"
] | HydraLM | null | null | null | 2 | 92 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 50217217
num_examples: 19999
download_size: 23657981
dataset_size: 50217217
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "physics_dataset_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JetBrains-Research/commit-chronicle | 2023-10-05T10:50:00.000Z | [
"task_categories:text-generation",
"task_categories:summarization",
"size_categories:1M<n<10M",
"language:code",
"language:en",
"license:other",
"code",
"commit_message_generation",
"arxiv:2308.07655",
"region:us"
] | JetBrains-Research | null | null | null | 2 | 92 | ---
license: other
language:
- code
- en
task_categories:
- text-generation
- summarization
tags:
- code
- commit_message_generation
pretty_name: CommitChronicle
size_categories:
- 1M<n<10M
dataset_info:
- config_name: default
features:
- name: author
dtype: int64
- name: date
dtype: string
- name: timezone
dtype: int64
- name: hash
dtype: string
- name: message
dtype: string
- name: mods
list:
- name: change_type
dtype: string
- name: old_path
dtype: string
- name: new_path
dtype: string
- name: diff
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: repo
dtype: string
- name: original_message
dtype: string
splits:
- name: test
num_bytes: 5760117409
num_examples: 1486267
- name: train
num_bytes: 30084265848
num_examples: 7659458
- name: validation
num_bytes: 5905326070
num_examples: 1554042
download_size: 14168436205
dataset_size: 41749709327
- config_name: subset_cmg
features:
- name: author
dtype: int64
- name: date
dtype: string
- name: timezone
dtype: int64
- name: hash
dtype: string
- name: message
dtype: string
- name: mods
list:
- name: change_type
dtype: string
- name: old_path
dtype: string
- name: new_path
dtype: string
- name: diff
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: repo
dtype: string
- name: original_message
dtype: string
splits:
- name: test
num_bytes: 772774959
num_examples: 204336
download_size: 258151047
dataset_size: 772774959
- config_name: subset_llm
features:
- name: author
dtype: int64
- name: date
dtype: string
- name: timezone
dtype: int64
- name: hash
dtype: string
- name: message
dtype: string
- name: mods
list:
- name: change_type
dtype: string
- name: old_path
dtype: string
- name: new_path
dtype: string
- name: diff
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: repo
dtype: string
- name: original_message
dtype: string
splits:
- name: test
num_bytes: 15121048
num_examples: 4025
download_size: 5068039
dataset_size: 15121048
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- config_name: subset_cmg
data_files:
- split: test
path: subset_cmg/test-*
- config_name: subset_llm
data_files:
- split: test
path: subset_llm/test-*
---
# 📜 CommitChronicle 🔮
This is the dataset for commit message generation (and/or completion), introduced in the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.
Its key features:
* *large-scale and multilingual*: contains 10.7M commits from 11.9k GitHub repositories in 20 programming languages;
* *diverse*: avoids restrictive filtering on commit messages or commit diffs structure;
* *suitable for experiments with commit history*: provides metadata about commit authors and dates and uses split-by-project.
## Dataset Creation
> 🔍 For further details, please refer to:
> * **Paper**: [https://arxiv.org/abs/2308.07655](https://arxiv.org/abs/2308.07655)
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)
We used the [GitHub Search](https://seart-ghs.si.usi.ch/) tool and the official GitHub API to select relevant repositories with permissive licenses (Apache, BSD 3-clause, MIT).
On February 9th, 2023, we collected all commits made since 2017 from these repositories via [PyDriller](https://github.com/ishepard/pydriller).
Next, we extensively cleaned the data, including filtering outliers, dropping commits from bot authors, and dropping duplicates. Note: to avoid disclosing personal information, we replaced the commit authors' names and emails with unique identifiers.
## Dataset Structure
### Data Instances
Each data instance in the dataset is a commit. [A commit example](https://github.com/saridormi/commit_chronicle/commit/a7fb3b64184f0af5b08285cce14b9139baa94049) would look like the following:
```
{
'repo': 'saridormi/commit_chronicle',
'hash': 'a7fb3b64184f0af5b08285cce14b9139baa94049',
'author': 123,
'date': '05.07.2021 15:10:07',
'timezone': 0,
'license': 'MIT License',
'language': 'Jupyter Notebook',
'message': 'Add license badge to readme',
'original_message': 'Add license badge to readme',
'mods': [{'change_type': 'MODIFY',
'new_path': 'README.md',
'old_path': 'README.md',
'diff': '@@ -1,6 +1,6 @@\n'
' # Commits dataset\n'
' \n'
'-> :heavy_exclamation_mark: **TODO:** license\n'
'+\n'}],
}
```
### Data Fields
Each example has the following fields:
| **Field** | **Description** |
|:------------------:|:----------------------------------------:|
| `repo` | Commit repository. |
| `hash` | Commit hash. |
| `author` | Unique ID for the commit author. |
| `date` | Commit date (from author). |
| `timezone` | Commit timezone (from author). |
| `license` | Commit repository's license. |
| `language` | Commit repository's main language. |
| `message` | Commit message (after processing). |
| `original_message` | Commit message (without any processing). |
| `mods` | List of file modifications from commit. |
Each file modification has the following fields:
| **Field** | **Description** |
|:-------------:|:-------------------------------------------------------------------------------------------------:|
| `change_type` | Type of change to current file. One of: `ADD`, `COPY`, `RENAME`, `DELETE`, `MODIFY` or `UNKNOWN`. |
| `old_path` | Path to file before change (might be empty). |
| `new_path` | Path to file after change (might be empty). |
| `diff` | `git diff` for current file. |
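Since `diff` uses standard unified-diff markers, simple per-commit statistics can be computed directly from `mods`. The sketch below is illustrative only — the sample modification and the helper name are ours, not part of the dataset tooling — and it assumes diffs start at hunk headers (`@@`), as in the example commit above:

```python
# Illustrative sketch: tally added/removed lines across a commit's `mods`.
# The sample modification below is hypothetical; real records come from
# the dataset itself.

def diff_stats(mods):
    """Count added and removed lines over all file modifications."""
    added = removed = 0
    for mod in mods:
        for line in mod["diff"].splitlines():
            if line.startswith("@@"):  # hunk header, not a content line
                continue
            if line.startswith("+"):
                added += 1
            elif line.startswith("-"):
                removed += 1
    return added, removed

mods = [{
    "change_type": "MODIFY",
    "old_path": "README.md",
    "new_path": "README.md",
    "diff": "@@ -1,3 +1,3 @@\n # Commits dataset\n-old line\n+new line\n",
}]
print(diff_stats(mods))  # -> (1, 1)
```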
### Data Splits
We provide the following configurations:
* `default`
* `train`: full training split (7.66M commits)
* `validation`: full validation split (1.55M commits)
* `test`: full test split (1.49M commits)
* `subset_cmg`
* `test`: test subset used for experiments with CMG approaches (204k commits)
* `subset_llm`
* `test`: test subset used for experiments with an LLM (4k commits)
## Considerations for Using the Data
> Adopted from [the Stack](https://huggingface.co/datasets/bigcode/the-stack).
The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their open-access research.
Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information.
The dataset is a collection of commits from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
## Citation
```
TODO
``` |
worldboss/bitcoin-data-sentiment | 2023-08-11T23:05:06.000Z | [
"region:us"
] | worldboss | null | null | null | 0 | 92 | Entry not found |
hyperdemocracy/uscb.s1024.o256.bge-small-en | 2023-09-11T02:23:31.000Z | [
"license:mit",
"region:us"
] | hyperdemocracy | null | null | null | 0 | 92 | ---
license: mit
---
|
repllabs/questions_how_to_do_great_work | 2023-09-17T05:43:44.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] | repllabs | null | null | null | 4 | 92 | ---
configs:
- config_name: default
data_files:
- split: processed
path: data/processed-*
- split: raw
path: data/raw-*
dataset_info:
features:
- name: question
dtype: string
- name: model
dtype: string
splits:
- name: processed
num_bytes: 17391
num_examples: 142
- name: raw
num_bytes: 55307
num_examples: 450
download_size: 28702
dataset_size: 72698
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
---
# Questions Generated by LLM on 'How To Do Great Work'
http://paulgraham.com/greatwork.html
https://github.com/fastrepl/fastrepl/blob/main/exp/pg_essay_questions.ipynb |
nc33/CLM_data | 2023-09-18T15:31:42.000Z | [
"region:us"
] | nc33 | null | null | null | 0 | 92 | ---
dataset_info:
- config_name: default
features:
- name: train
struct:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 438033088
num_examples: 227703
download_size: 117819233
dataset_size: 438033088
- config_name: train
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 438033088
num_examples: 227703
download_size: 117810940
dataset_size: 438033088
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: train
data_files:
- split: train
path: train/train-*
---
# Dataset Card for "CLM_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_first_sent_v4_train_30_eval_10 | 2023-10-03T10:41:48.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 92 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 111024
num_examples: 70
- name: validation
num_bytes: 11592
num_examples: 10
- name: eval_first_sent
num_bytes: 11592
num_examples: 10
download_size: 102146
dataset_size: 134208
---
# Dataset Card for "squad_first_sent_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sordonia/my-wiki_mmlu_from_valid_all | 2023-10-08T03:14:18.000Z | [
"region:us"
] | sordonia | null | null | null | 0 | 92 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: docno
dtype: int64
- name: score
dtype: float64
- name: dfq
dtype: int64
- name: id
dtype: string
- name: revid
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1146922151
num_examples: 137881
download_size: 632961420
dataset_size: 1146922151
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "my-wiki_mmlu_from_valid_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nferruz/UR50_2021_04 | 2022-07-22T13:44:04.000Z | [
"size_categories:unknown",
"region:us"
] | nferruz | null | null | null | 1 | 91 | ---
YAML tags:
annotations_creators: []
language_creators: []
language: []
license: []
multilinguality: []
pretty_name: ''
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for UR50_2021_04
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
https://ftp.uniprot.org/pub/databases/uniprot/uniref/uniref50/
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.uniprot.org/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Uniref50 (UR50) dataset version 2021/04 is a biological dataset taken from the Uniprot database: https://www.uniprot.org/
### Supported Tasks and Leaderboards
The UR50 dataset contains 48 million protein sequences. It is a useful dataset for training protein language models.
### Languages
Proteins
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
Train, validation
## Dataset Creation
### Curation Rationale
Substituted FASTA headers with an `<endoftext>` tag.
The dataset was tokenized using BPE and further split into train and validation sets (ratio 90/10), choosing random sequences for the latter.
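The random 90/10 split described here can be sketched as follows; this is only an illustration under our own assumptions (the helper name and toy sequences are hypothetical, and the authors' actual splitting code is not reproduced here):

```python
import random

# Illustrative sketch of a random 90/10 train/validation split.
# `seqs` is a toy stand-in for the real protein sequences.

def split_sequences(sequences, valid_ratio=0.1, seed=0):
    """Shuffle sequences and carve off a validation fraction."""
    rng = random.Random(seed)
    shuffled = list(sequences)
    rng.shuffle(shuffled)
    n_valid = int(len(shuffled) * valid_ratio)
    return shuffled[n_valid:], shuffled[:n_valid]

seqs = [f"<endoftext>SEQ{i}" for i in range(10)]
train, valid = split_sequences(seqs)
print(len(train), len(valid))  # -> 9 1
```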
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
UniProt
### Annotations
#### Annotation process
UniProt contains annotations but no labels/annotations were used for this dataset.
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to UniProt for curating this dataset. https://www.uniprot.org/
|
smangrul/MuDoConv | 2022-06-29T06:39:30.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | smangrul | null | null | null | 1 | 91 | ---
license: cc-by-nc-4.0
---
Collated datasets from 10 sources and preprocessed them to have ["texts", "labels"] columns to train/finetune sequence-to-sequence models such as T5/Blenderbot ... Below are the 10 datasets:
1. blended_skill_talk
2. conv_ai_2
3. empathetic_dialogues
4. wizard_of_wikipedia
5. meta_woz
6. multi_woz
7. spolin
8. dailydialog
9. cornell_movie_dialogues
10. taskmaster
The data access and preprocessing code is [here](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/data_preprocessing/DataPreprocessing.ipynb) |
FourthBrainGenAI/MarketMail-AI | 2023-04-26T07:08:28.000Z | [
"region:us"
] | FourthBrainGenAI | null | null | null | 0 | 91 | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 30474
num_examples: 17
download_size: 31271
dataset_size: 30474
---
# Dataset Card for "cool_new_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vitaliy-sharandin/climate-global-temp-anomaly | 2023-09-24T13:50:13.000Z | [
"region:us"
] | vitaliy-sharandin | null | null | null | 0 | 91 | ---
dataset_info:
features:
- name: Entity
dtype: string
- name: Code
dtype: float64
- name: Global average temperature anomaly relative to 1961-1990
dtype: float64
- name: Upper bound (95% confidence interval) of the annual temperature anomaly
dtype: float64
- name: Lower bound (95% confidence interval) of the annual temperature anomaly
dtype: float64
- name: dt
dtype: timestamp[ns]
splits:
- name: train
num_bytes: 30513
num_examples: 519
download_size: 20408
dataset_size: 30513
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "climate-global-temp-anomaly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atsushi3110/sft-part-chosen-rejected-pairs | 2023-09-26T13:24:51.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | atsushi3110 | null | null | null | 0 | 91 | ---
license: creativeml-openrail-m
---
|
Y19Chip/english-to-hinglish | 2023-09-27T12:12:24.000Z | [
"license:agpl-3.0",
"region:us"
] | Y19Chip | null | null | null | 0 | 91 | ---
license: agpl-3.0
---
|
mtc/final_german_faithfulness_benchmark | 2023-10-07T12:01:00.000Z | [
"region:us"
] | mtc | null | null | null | 0 | 91 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: article_id
dtype: int64
- name: system
dtype: string
- name: sentence_ord
dtype: int64
- name: Comments
sequence: string
- name: is_gold_annotation
dtype: bool
- name: agreement_type
dtype: string
- name: pre_context
dtype: string
- name: post_context
dtype: string
- name: label
dtype: string
- name: lead_with_article
dtype: string
splits:
- name: train
num_bytes: 8953022
num_examples: 3193
- name: test
num_bytes: 3257690
num_examples: 1112
download_size: 1419447
dataset_size: 12210712
---
# Dataset Card for "final_german_faithfulness_benchmark"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ContextualAI/tiny-lambada | 2023-10-09T19:41:05.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 91 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
dtype: string
splits:
- name: dev
num_bytes: 34989
num_examples: 100
download_size: 26234
dataset_size: 34989
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
# Dataset Card for "tiny-lambada"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
grail_qa | 2022-11-18T20:04:54.000Z | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"knowledge-base-qa",
"arxiv:2011.07743",
"region:us"
] | null | Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot. | @misc{gu2020iid,
title={Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases},
author={Yu Gu and Sue Kase and Michelle Vanni and Brian Sadler and Percy Liang and Xifeng Yan and Yu Su},
year={2020},
eprint={2011.07743},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 2 | 90 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: null
pretty_name: Grail QA
tags:
- knowledge-base-qa
dataset_info:
features:
- name: qid
dtype: string
- name: question
dtype: string
- name: answer
sequence:
- name: answer_type
dtype: string
- name: answer_argument
dtype: string
- name: entity_name
dtype: string
- name: function
dtype: string
- name: num_node
dtype: int32
- name: num_edge
dtype: int32
- name: graph_query
struct:
- name: nodes
sequence:
- name: nid
dtype: int32
- name: node_type
dtype: string
- name: id
dtype: string
- name: class
dtype: string
- name: friendly_name
dtype: string
- name: question_node
dtype: int32
- name: function
dtype: string
- name: edges
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: relation
dtype: string
- name: friendly_name
dtype: string
- name: sparql_query
dtype: string
- name: domains
sequence: string
- name: level
dtype: string
- name: s_expression
dtype: string
splits:
- name: train
num_bytes: 69433121
num_examples: 44337
- name: validation
num_bytes: 9800544
num_examples: 6763
- name: test
num_bytes: 2167256
num_examples: 13231
download_size: 17636773
dataset_size: 81400921
---
# Dataset Card for Grail QA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Grail QA](https://dki-lab.github.io/GrailQA/)
- **Repository:**
- **Paper:** [GrailQA paper (Gu et al. '20)](https://arxiv.org/abs/2011.07743)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
#### What is GrailQA?
Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot.
#### Why GrailQA?
GrailQA is by far the largest crowdsourced KBQA dataset with questions of high diversity (i.e., questions in GrailQA can have up to 4 relations and optionally have a function from counting, superlatives and comparatives). It also has the highest coverage over Freebase; it widely covers 3,720 relations and 86 domains from Freebase. Last but not least, our meticulous data split allows GrailQA to test not only i.i.d. generalization, but also compositional generalization and zero-shot generalization, which are critical for practical KBQA systems.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English and Graph query
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `qid` (`str`)
- `question` (`str`)
- `answer` (`List`): Defaults to `[]` in test split.
- `answer_type` (`str`)
- `answer_argument` (`str`)
  - `entity_name` (`str`): Defaults to `""` if `answer_type` is not `Entity`.
- `function` (`string`): Defaults to `""` in test split.
- `num_node` (`int`): Defaults to `-1` in test split.
- `num_edge` (`int`): Defaults to `-1` in test split.
- `graph_query` (`Dict`)
- `nodes` (`List`): Defaults to `[]` in test split.
- `nid` (`int`)
- `node_type` (`str`)
- `id` (`str`)
- `class` (`str`)
- `friendly_name` (`str`)
- `question_node` (`int`)
- `function` (`str`)
- `edges` (`List`): Defaults to `[]` in test split.
- `start` (`int`)
- `end` (`int`)
- `relation` (`str`)
- `friendly_name` (`str`)
- `sparql_query` (`str`): Defaults to `""` in test split.
- `domains` (`List[str]`): Defaults to `[]` in test split.
- `level` (`str`): Only available in validation split. Defaults to `""` in others.
- `s_expression` (`str`): Defaults to `""` in test split.
**Notes:** Only `qid` and `question` are available in the test split.
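As a quick illustration of how the `answer` fields fit together, the hypothetical helper below extracts readable answers from one example. Depending on how the `sequence` feature is materialized by the loader, `answer` may arrive as a dict of lists rather than a list of dicts; this sketch assumes the latter, and the sample record is made up:

```python
# Hypothetical helper: prefer `entity_name` when the answer is an Entity,
# otherwise fall back to the raw `answer_argument`. The example record
# below is invented for illustration.

def readable_answers(example):
    out = []
    for ans in example["answer"]:
        if ans["answer_type"] == "Entity" and ans["entity_name"]:
            out.append(ans["entity_name"])
        else:
            out.append(ans["answer_argument"])
    return out

example = {
    "qid": "example-1",
    "question": "which country has the largest population?",
    "answer": [
        {"answer_type": "Entity", "answer_argument": "m.0d05w3", "entity_name": "China"},
    ],
}
print(readable_answers(example))  # -> ['China']
```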
### Data Splits
Dataset Split | Number of Instances in Split
--------------|--------------------------------------------
Train | 44,337
Validation | 6,763
Test | 13,231
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset. |
opus_wikipedia | 2023-06-01T14:59:51.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"language:bg",
"language:cs",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fa",
"language:fr",
"language:he",
"language:hu",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sl",
"language:tr",
"language:vi",
"license:unknown",
"region:us"
] | null | This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wołk and Krzysztof Marasek. Please cite the following publication if you use the data: Krzysztof Wołk and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18, Elsevier, p.126-132, 2014
20 languages, 36 bitexts
total number of files: 114
total number of tokens: 610.13M
total number of sentence fragments: 25.90M | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 3 | 90 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- bg
- cs
- de
- el
- en
- es
- fa
- fr
- he
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- tr
- vi
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusWikipedia
dataset_info:
- config_name: ar-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 45207715
num_examples: 151136
download_size: 16097997
dataset_size: 45207715
- config_name: ar-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- pl
splits:
- name: train
num_bytes: 304851676
num_examples: 823715
download_size: 104585718
dataset_size: 304851676
- config_name: en-sl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sl
splits:
- name: train
num_bytes: 30479739
num_examples: 140124
download_size: 11727538
dataset_size: 30479739
- config_name: en-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 167649057
num_examples: 572717
download_size: 57356138
dataset_size: 167649057
- config_name: en-vi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- vi
splits:
- name: train
num_bytes: 7571598
num_examples: 58116
download_size: 2422413
dataset_size: 7571598
config_names:
- ar-en
- ar-pl
- en-ru
- en-sl
- en-vi
---
# Dataset Card for OpusWikipedia
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Wikipedia.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wołk and Krzysztof Marasek.
The dataset contains 20 languages and 36 bitexts.
To load a language pair that isn't part of a predefined config, specify the two language codes as a pair,
e.g.
```python
dataset = load_dataset("opus_wikipedia", lang1="it", lang2="pl")
```
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Wikipedia.php
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- ar
- bg
- cs
- de
- el
- en
- es
- fa
- fr
- he
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- tr
- vi
## Dataset Structure
### Data Instances
```
{
'id': '0',
'translation': {
"ar": "* Encyclopaedia of Mathematics online encyclopaedia from Springer, Graduate-level reference work with over 8,000 entries, illuminating nearly 50,000 notions in mathematics.",
"en": "*Encyclopaedia of Mathematics online encyclopaedia from Springer, Graduate-level reference work with over 8,000 entries, illuminating nearly 50,000 notions in mathematics."
}
}
```
### Data Fields
- `id` (`str`): Unique identifier of the parallel sentence for the pair of languages.
- `translation` (`dict`): Parallel sentences for the pair of languages.
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{WOLK2014126,
title = {Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs},
journal = {Procedia Technology},
volume = {18},
pages = {126-132},
year = {2014},
note = {International workshop on Innovations in Information and Communication Science and Technology, IICST 2014, 3-5 September 2014, Warsaw, Poland},
issn = {2212-0173},
doi = {https://doi.org/10.1016/j.protcy.2014.11.024},
url = {https://www.sciencedirect.com/science/article/pii/S2212017314005453},
author = {Krzysztof Wołk and Krzysztof Marasek},
keywords = {Comparable corpora, machine translation, NLP},
}
```
```bibtex
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. |
NbAiLab/norwegian_parliament | 2022-07-01T19:51:13.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:no",
"license:cc-by-4.0",
"region:us"
] | NbAiLab | The Norwegian Parliament Speeches is a collection of text passages from
1998 to 2016 and pronounced at the Norwegian Parliament (Storting) by members
of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti. | @InProceedings{--,
author = {---},
title = {---},
booktitle = {---},
year = 2021,
address = "---"
} | null | 1 | 90 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- no
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
The Norwegian Parliament Speeches is a collection of text passages from 1998 to 2016 and pronounced at the Norwegian Parliament (Storting) by members of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti. The dataset is annotated with the party the speaker was associated with at the time (dates of speeches are also included).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in Norwegian.
## Dataset Structure
### Data Instances
Example of one instance in the dataset.
```
{'label': 0, 'text': 'Verre er det med slagsmålene .'}
```
### Data Fields
- `id`: index of the example
- `text`: Text of a speech
- `date`: Date (`YYYY-MM-DD`) the speech was produced
- `label`: Political party the speaker was associated with at the time
- 0 = Fremskrittspartiet
- 1 = Sosialistisk Venstreparti
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | Train | Valid | Test |
| ----- | ------ | ----- | ----- |
| Number of examples | 3600 | 1200 | 1200 |
The dataset is balanced on political party.
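The integer labels can be decoded back to party names with a small lookup table. A minimal sketch (the table and helper name are illustrative, built from the label list above, not part of the dataset itself):

```python
# Illustrative mapping from the `label` field to party names, per the card above.
PARTY_NAMES = {0: "Fremskrittspartiet", 1: "Sosialistisk Venstreparti"}

def label_to_party(label: int) -> str:
    """Return the party name for an integer label."""
    return PARTY_NAMES[label]

example = {"label": 0, "text": "Verre er det med slagsmålene ."}
print(label_to_party(example["label"]))  # Fremskrittspartiet
```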
## Dataset Creation
This dataset is based on the publicly available information by Norwegian Parliament (Storting) and created by the National Library of Norway AI-Lab to benchmark their language models.
## Additional Information
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
### Citation Information
```latex
@misc{--,
title={--},
author={--},
year={2021},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
c-s-ale/dolly-15k-instruction-alpaca-format | 2023-04-13T06:08:38.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-3.0",
"instruction",
"region:us"
] | c-s-ale | null | null | null | 20 | 90 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: category
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 12271354
num_examples: 15015
download_size: 7801648
dataset_size: 12271354
license: cc-by-3.0
language:
- en
tags:
- instruction
pretty_name: Databricks Dolly 15k (Alpaca format, citations removed)
size_categories:
- 10K<n<100K
---
# Dataset Description
- **Blog:** https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm
- **Repo:** https://github.com/databrickslabs/dolly
# Databricks Dolly 15k Dataset with citations removed and in Alpaca Format
**NOTE**
This is a reupload of the Databricks dataset found [here](https://github.com/databrickslabs/dolly/tree/master/data), but modified to be in Alpaca format, and with the citation numbers removed.
This work is not my own, and all credit goes to Databricks.
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
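One way to strip such citation markers before downstream use is a simple regular expression. A minimal sketch (the function name and sample text are illustrative):

```python
import re

def strip_citations(text: str) -> str:
    # Remove bracketed Wikipedia citation markers such as [42].
    return re.sub(r"\[\d+\]", "", text)

reference = "The Eiffel Tower is 330 metres tall.[7] It was completed in 1889.[12]"
print(strip_citations(reference))
# The Eiffel Tower is 330 metres tall. It was completed in 1889.
```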
# Intended Uses
While immediately valuable for instruction fine-tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization) contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. |
GATE-engine/omniglot | 2023-06-05T18:58:27.000Z | [
"region:us"
] | GATE-engine | null | null | null | 0 | 90 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: full
num_bytes: 11924141.5
num_examples: 32460
download_size: 10520482
dataset_size: 11924141.5
---
# Dataset Card for "omniglot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlfoundations/datacomp_1b | 2023-08-21T21:43:05.000Z | [
"license:cc-by-4.0",
"region:us"
] | mlfoundations | null | null | null | 5 | 90 | ---
license: cc-by-4.0
---
## DataComp-1B
This repository contains metadata files for DataComp-1B. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage. |
emozilla/proofpile-test-tokenized | 2023-08-09T15:29:52.000Z | [
"region:us"
] | emozilla | null | null | null | 0 | 90 | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: tokenized_len
dtype: int64
splits:
- name: test
num_bytes: 1644067664
num_examples: 46251
download_size: 552973486
dataset_size: 1644067664
---
# Dataset Card for "proofpile-test-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
martinsinnona/visdecode_web | 2023-10-10T15:30:42.000Z | [
"region:us"
] | martinsinnona | null | null | null | 0 | 90 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: test
num_bytes: 1170020.0
num_examples: 37
download_size: 0
dataset_size: 1170020.0
---
|
mtc/swisstext23-20min-gold_annotation_train_test_data | 2023-09-11T13:37:47.000Z | [
"region:us"
] | mtc | null | null | null | 0 | 90 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: article_id
dtype: int64
- name: system
dtype: string
- name: sentence_ord
dtype: int64
- name: Comments
sequence: string
- name: pre_context
dtype: string
- name: post_context
dtype: string
- name: article_with_lead
dtype: string
- name: label
dtype:
class_label:
names:
'0': Hallucination
'1': Faithful
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 624798.9594882729
num_examples: 234
- name: test
num_bytes: 627469.0405117271
num_examples: 235
download_size: 227521
dataset_size: 1252268.0
---
# Dataset Card for "swisstext23-20min-gold_annotation_train_test_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chrisgru/chat-v2 | 2023-09-27T19:15:24.000Z | [
"region:us"
] | chrisgru | null | null | null | 0 | 90 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7187480
num_examples: 4386
download_size: 3181614
dataset_size: 7187480
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chat-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gathnex/Gath_baize | 2023-10-03T12:50:23.000Z | [
"license:mit",
"region:us"
] | gathnex | null | null | null | 1 | 90 | ---
license: mit
---
|
ContextualAI/tiny-hellaswag | 2023-10-09T21:43:49.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 90 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold_generation
dtype: string
splits:
- name: dev
num_bytes: 46204
num_examples: 100
download_size: 30744
dataset_size: 46204
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
# Dataset Card for "tiny-hellaswag"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SocialGrep/one-million-reddit-jokes | 2022-07-01T18:48:46.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | null | null | null | 7 | 89 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for one-million-reddit-jokes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionjokes)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionjokes)
### Dataset Summary
This corpus contains a million posts from /r/jokes.
Posts are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] |
merve/poetry | 2022-10-25T09:50:55.000Z | [
"region:us"
] | merve | null | null | null | 14 | 89 | # Dataset Card for poetry
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** poetryfoundation.com
- **Repository:** https://www.kaggle.com/ishnoor/poetry-analysis-with-machine-learning
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
It contains poems on the subjects of Love, Nature, and Mythology & Folklore, belonging to two periods: Renaissance and Modern.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
Has 5 columns:
- Content
- Author
- Poem name
- Age
- Type
### Data Splits
Only training set
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: poetry
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
--- |
Tevatron/beir | 2022-07-08T00:17:30.000Z | [
"region:us"
] | Tevatron | null | null | null | 0 | 89 | Entry not found |
ScandEval/dane-mini | 2023-07-05T09:40:02.000Z | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:da",
"license:cc-by-sa-4.0",
"region:us"
] | ScandEval | null | null | null | 0 | 89 | ---
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 355712
num_examples: 1024
- name: test
num_bytes: 747809
num_examples: 2048
- name: val
num_bytes: 92001
num_examples: 256
download_size: 532720
dataset_size: 1195522
license: cc-by-sa-4.0
task_categories:
- token-classification
language:
- da
size_categories:
- 1K<n<10K
---
# Dataset Card for "dane-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
paulofinardi/OIG_small_chip2_portuguese_brasil | 2023-03-19T23:16:11.000Z | [
"task_categories:conversational",
"task_categories:text2text-generation",
"language:pt",
"region:us"
] | paulofinardi | null | null | null | 8 | 89 | ---
dataset_info:
features:
- name: user
dtype: string
- name: chip2
dtype: string
splits:
- name: train
num_examples: 210289
task_categories:
- conversational
- text2text-generation
language:
- pt
---
# Dataset Card for "OIG_small_chip2_portuguese_brasil"
This dataset was translated into Brazilian Portuguese from [here](https://huggingface.co/datasets/0-hero/OIG-small-chip2).
The data was translated with the *MarianMT* model, using the weights [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE).
The full details to replicate the translation are here: [translation_notebook](https://github.com/finardi/tutos/blob/master/translate_Laion_OIG.ipynb)
---
license: apache-2.0
--- |
LinhDuong/chatdoctor-200k | 2023-03-28T07:58:46.000Z | [
"license:apache-2.0",
"arxiv:2303.14070",
"region:us"
] | LinhDuong | null | null | null | 9 | 89 | ---
license: apache-2.0
---
The ChatDoctor-200K dataset was collected from this paper: https://arxiv.org/pdf/2303.14070.pdf
Alternatively, you can download the original dataset from this link https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing |
tarasabkar/IEMOCAP_Audio | 2023-04-08T12:21:44.000Z | [
"region:us"
] | tarasabkar | null | null | null | 1 | 89 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype:
class_label:
names:
'0': ang
'1': hap
'2': neu
'3': sad
splits:
- name: session1
num_bytes: 166986293.79
num_examples: 1085
- name: session2
num_bytes: 153330227.792
num_examples: 1023
- name: session3
num_bytes: 167233186.002
num_examples: 1151
- name: session4
num_bytes: 145475815.026
num_examples: 1031
- name: session5
num_bytes: 170322896.742
num_examples: 1241
download_size: 0
dataset_size: 803348419.352
---
# Dataset Card for "IEMOCAP_Audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
metaeval/implicit-hate-stg1 | 2023-05-31T08:52:07.000Z | [
"task_categories:text-classification",
"language:en",
"license:unknown",
"region:us"
] | metaeval | null | null | null | 0 | 89 | ---
license: unknown
task_categories:
- text-classification
language:
- en
---
https://github.com/SALT-NLP/implicit-hate
```
@inproceedings{elsherief-etal-2021-latent,
title = "Latent Hatred: A Benchmark for Understanding Implicit Hate Speech",
author = "ElSherief, Mai and
Ziems, Caleb and
Muchlinski, David and
Anupindi, Vaishnavi and
Seybolt, Jordyn and
De Choudhury, Munmun and
Yang, Diyi",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.29",
pages = "345--363"
}
``` |
abobster/pushkin_new | 2023-05-05T16:31:35.000Z | [
"region:us"
] | abobster | null | null | null | 0 | 89 | Entry not found |
FredZhang7/all-scam-spam | 2023-07-18T17:16:16.000Z | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:no",
"language:es",
"language:so",
"language:ca",
"language:af",
"language:it",
"language:nl",
"language:hi",
"language:cy",
"language:ar",
"language:sv",
"language:cs",
"language:pl",
"language:de",
"language:lt",
"language:sq",
"language:uk",
"language:tl",
"language:sl",
"language:hr",
"language:en",
"language:fi",
"language:vi",
"language:id",
"language:da",
"language:ko",
"language:bg",
"language:mr",
"language:ja",
"language:bn",
"language:ro",
"language:pt",
"language:fr",
"language:hu",
"language:tr",
"language:zh",
"language:mk",
"language:ur",
"language:sk",
"language:ne",
"language:et",
"language:sw",
"language:ru",
"language:multilingual",
"license:apache-2.0",
"nlp",
"moderation",
"region:us"
] | FredZhang7 | null | null | null | 2 | 89 | ---
license: apache-2.0
language:
- no
- es
- so
- ca
- af
- it
- nl
- hi
- cy
- ar
- sv
- cs
- pl
- de
- lt
- sq
- uk
- tl
- sl
- hr
- en
- fi
- vi
- id
- da
- ko
- bg
- mr
- ja
- bn
- ro
- pt
- fr
- hu
- tr
- zh
- mk
- ur
- sk
- ne
- et
- sw
- ru
- multilingual
task_categories:
- text-classification
- zero-shot-classification
tags:
- nlp
- moderation
size_categories:
- 10K<n<100K
---
This is a large corpus of 42,619 preprocessed text messages and emails sent by humans in 43 languages. `is_spam=1` means spam and `is_spam=0` means ham.
1040 rows of balanced data, consisting of casual conversations and scam emails in ≈10 languages, were manually collected and annotated by me, with some help from ChatGPT.
<br>
### Some preprocessing algorithms
- [spam_assassin.js](./spam_assassin.js), followed by [spam_assassin.py](./spam_assassin.py)
- [enron_spam.py](./enron_spam.py)
<br>
### Data composition

<br>
### Description
To make the text format between sms messages and emails consistent, email subjects and content are separated by two newlines:
```python
text = email.subject + "\n\n" + email.content
```
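Because the subject and body are joined by exactly two newlines, they can be recovered from the combined field. A minimal sketch (the sample text is illustrative):

```python
# Recover the two parts from a combined `text` field, assuming the
# two-newline separator described above.
text = "Quarterly results\n\nPlease find the figures attached."
subject, _, content = text.partition("\n\n")
print(subject)   # Quarterly results
print(content)   # Please find the figures attached.
```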
<br>
### Suggestions
- If you plan to train a model based on this dataset alone, I recommend adding **some** rows with `is_toxic=0` from `FredZhang7/toxi-text-3M`. Make sure the rows aren't spam.
<br>
### Other Sources
- https://huggingface.co/datasets/sms_spam
- https://github.com/MWiechmann/enron_spam_data
- https://github.com/stdlib-js/datasets-spam-assassin
- https://repository.ortolang.fr/api/content/comere/v3.3/cmr-simuligne.html |
Flmc/DISC-Med-SFT | 2023-08-29T12:54:14.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"medical",
"region:us"
] | Flmc | null | null | null | 29 | 89 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- zh
tags:
- medical
size_categories:
- 100K<n<1M
---
This is a repository containing a subset of the DISC-Med-SFT Dataset.
Check [DISC-MedLLM](https://github.com/FudanDISC/DISC-MedLLM) for more information. |
Kriyans/ner | 2023-10-09T12:44:11.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | Kriyans | null | null | null | 0 | 89 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: wnut-2017-emerging-and-rare-entity
pretty_name: WNUT 17
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-corporation
'2': I-corporation
'3': B-creative-work
'4': I-creative-work
'5': B-group
'6': I-group
'7': B-location
'8': I-location
'9': B-person
'10': I-person
'11': B-product
'12': I-product
config_name: wnut_17
splits:
- name: train
num_bytes: 1078379
num_examples: 3394
- name: validation
num_bytes: 259383
num_examples: 1009
- name: test
num_bytes: 405536
num_examples: 1287
download_size: 800955
dataset_size: 1743298
---
# Dataset Card for "wnut_17"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://noisy-text.github.io/2017/emerging-rare-entities.html](http://noisy-text.github.io/2017/emerging-rare-entities.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.80 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 2.55 MB
### Dataset Summary
WNUT 17: Emerging and Rare entity recognition
This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.
Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation),
but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms.
Take for example the tweet “so.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve.
This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text.
The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 0.80 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 2.55 MB
An example of 'train' looks as follows.
```
{
"id": "0",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
"tokens": ["@paulwalk", "It", "'s", "the", "view", "from", "where", "I", "'m", "living", "for", "two", "weeks", ".", "Empire", "State", "Building", "=", "ESB", ".", "Pretty", "bad", "storm", "here", "last", "evening", "."]
}
```
### Data Fields
The data fields are the same among all splits:
- `id` (`string`): ID of the example.
- `tokens` (`list` of `string`): Tokens of the example text.
- `ner_tags` (`list` of class labels): NER tags of the tokens (using IOB2 format), with possible values:
- 0: `O`
- 1: `B-corporation`
- 2: `I-corporation`
- 3: `B-creative-work`
- 4: `I-creative-work`
- 5: `B-group`
- 6: `I-group`
- 7: `B-location`
- 8: `I-location`
- 9: `B-person`
- 10: `I-person`
- 11: `B-product`
- 12: `I-product`
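As a quick illustration of this tagset, here is a minimal sketch (not part of the dataset itself) that decodes the integer `ner_tags` from the 'train' example above back into their IOB2 label strings:

```python
# Label list copied from the tagset above; list index == ClassLabel id.
WNUT_LABELS = [
    "O",
    "B-corporation", "I-corporation",
    "B-creative-work", "I-creative-work",
    "B-group", "I-group",
    "B-location", "I-location",
    "B-person", "I-person",
    "B-product", "I-product",
]

def decode_tags(tag_ids):
    """Map ClassLabel integer ids back to IOB2 label strings."""
    return [WNUT_LABELS[i] for i in tag_ids]

# Tags for "Empire State Building = ESB" from the 'train' example above.
print(decode_tags([7, 8, 8, 0, 7]))
# ['B-location', 'I-location', 'I-location', 'O', 'B-location']
```

The same mapping is exposed programmatically by the loaded dataset (via the `ner_tags` feature's class names in `datasets`), so hard-coding the list as above is only for illustration.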
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 3394| 1009|1287|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{derczynski-etal-2017-results,
title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition",
author = "Derczynski, Leon and
Nichols, Eric and
van Erp, Marieke and
Limsopatham, Nut",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W17-4418",
doi = "10.18653/v1/W17-4418",
pages = "140--147",
abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.
Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization),
but recall on them is a real problem in noisy text - even among annotators.
This drop tends to be due to novel entities and surface forms.
Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'}
hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities,
and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the
ability of participating entries to detect and classify novel and emerging named entities in noisy text.",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@stefan-it](https://github.com/stefan-it), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu) for adding this dataset. |
jason9693/APEACH | 2022-07-05T04:18:07.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:crowd-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:2202.12459",
"region:us"
] | jason9693 | null | null | null | 3 | 88 | ---
annotations_creators:
- crowdsourced
- crowd-generated
language_creators:
- found
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: apeach
pretty_name: 'APEACH'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- binary-classification
---
# Dataset for project: kor_hate_eval(APEACH)

## Sample Code
<a href="https://colab.research.google.com/drive/1djd0fuoMYIaf7VCHaLQIziJi4_yBJruP#scrollTo=VPR24ysr5Q7k"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="base"/></a>
## Dataset Description
Korean Hate Speech Evaluation Datasets: trained with [BEEP!](https://huggingface.co/datasets/kor_hate) and evaluated with [APEACH](https://github.com/jason9693/APEACH)
- **Repository: [Korean HateSpeech Evaluation Dataset](https://github.com/jason9693/APEACH)**
- **Paper: [APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets](https://arxiv.org/abs/2202.12459)**
- **Point of Contact: [Kichang Yang](mailto:ykcha9@gmail.com)**
### Languages
ko-KR
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
{'text': ['(현재 호텔주인 심정) 아18 난 마른하늘에 날벼락맞고 호텔망하게생겼는데 누군 계속 추모받네....',
'....한국적인 미인의 대표적인 분...너무나 곱고아름다운모습...그모습뒤의 슬픔을 미처 알지못했네요ㅠ'],
'class': ['Spoiled', 'Default']}
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"class": "ClassLabel(num_classes=2, names=['Default', 'Spoiled'], id=None)"
}
```
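As a small illustration (a sketch, not official APEACH code), the `class` strings in the sample above can be mapped to the integer ids implied by this `ClassLabel` schema:

```python
# Class names in ClassLabel order per the schema above: 0 = Default, 1 = Spoiled.
CLASS_NAMES = ["Default", "Spoiled"]

def encode_labels(labels):
    """Convert class-name strings to their integer ClassLabel ids."""
    return [CLASS_NAMES.index(label) for label in labels]

print(encode_labels(["Spoiled", "Default"]))  # [1, 0]
```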
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train (binarized BEEP!) | 7896 |
| valid (APEACH) | 3770 |
## Citation
```
@article{yang2022apeach,
title={APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets},
author={Yang, Kichang and Jang, Wonjun and Cho, Won Ik},
journal={arXiv preprint arXiv:2202.12459},
year={2022}
}
```
|
Gpaiva/NERDE | 2022-07-28T01:27:18.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pt",
"license:cc-by-4.0",
"ner",
"portuguese-ner",
"economic-defense",
"region:us"
] | Gpaiva | (pt) NERDE é um dataset para NER a partir de documentos jurídicos da defesa econômica em português do Brasil, foi criado em colaboração com o Cade e o laboratório LATITUDE/UnB.
(en) NERDE is a NER dataset from economic defense legal documents in Brazilian Portuguese, created in collaboration with Cade and the LATITUDE/UnB laboratory. | | null | 3 | 88 | ---
annotations_creators:
- expert-generated
language:
- pt
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: NERDE
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- ner
- portuguese-ner
- economic-defense
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for NERDE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [NERDE repository](https://github.com/guipaiva/NERDE)
- **Point of Contact:** [Guilherme P. Paiva](mailto:guipaivagpp@gmail.com)
### Dataset Summary
NERDE is a dataset for Named Entity Recognition for Economic Defense. It was created in collaboration with the LATITUDE/UnB Laboratory and the Administrative Council for Economic Defense (Cade).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language in the dataset is Brazilian Portuguese from legal documents. The BCP-47 code for Brazilian Portuguese is pt-BR.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@guipaiva](https://github.com/guipaiva) for adding this dataset.
|
Bingsu/openwebtext_20p | 2022-09-16T02:36:38.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|openwebtext",
"language:en",
"license:cc0-1.0",
"region:us"
] | Bingsu | null | null | null | 4 | 88 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
paperswithcode_id: openwebtext
pretty_name: openwebtext_20p
size_categories:
- 1M<n<10M
source_datasets:
- extended|openwebtext
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# openwebtext_20p
## Dataset Description
- **Origin:** [openwebtext](https://huggingface.co/datasets/openwebtext)
- **Download Size** 4.60 GiB
- **Generated Size** 7.48 GiB
- **Total Size** 12.08 GiB
first 20% of [openwebtext](https://huggingface.co/datasets/openwebtext) |
nbtpj/DUC2004 | 2023-01-09T10:56:59.000Z | [
"region:us"
] | nbtpj | null | null | null | 0 | 88 | Entry not found |
Multimodal-Fatima/VQAv2_sample_validation | 2023-06-09T00:06:10.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 0 | 88 | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: image
dtype: image
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_ViT_L_14
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: new_info_captions3
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence:
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_without_filtering
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: blip_caption_beam_5_Salesforce_blip2_flan_t5_xxl
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_clean
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: blip_caption_topk_50_Salesforce_blip_image_captioning_base_multiple
sequence: string
- name: DETA_detections_deta_swin_large_o365_clip_caption_all_patches_Salesforce_blip_image_captioning_large__ViT_L_14
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: blip_caption_Salesforce_blip_image_captioning_large_intensive
sequence: string
- name: blip_caption_Salesforce_blip_image_captioning_base_intensive
sequence: string
splits:
- name: validation
num_bytes: 511357022.0
num_examples: 1000
download_size: 293191811
dataset_size: 511357022.0
---
# Dataset Card for "VQAv2_sample_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
slvnwhrl/blurbs-clustering-p2p | 2023-04-24T11:42:06.000Z | [
"size_categories:10K<n<100K",
"language:de",
"license:cc-by-nc-4.0",
"embeddings",
"clustering",
"benchmark",
"region:us"
] | slvnwhrl | null | null | null | 0 | 88 | ---
license: cc-by-nc-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.
The dataset contains book titles and is based on the dataset from the [GermEval 2019 Shared Task on Hierarchical Classification of Blurbs](https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/germeval-2019-hmc.html). It contains 18'084 unique samples, 28 splits with 177 to 16'425 samples and 4 to 93 unique classes. Splits are built similarly to [MTEB](https://github.com/embeddings-benchmark/mteb)'s [ArxivClusteringP2P](https://huggingface.co/datasets/mteb/arxiv-clustering-p2p).
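For orientation, clustering quality on splits like these is scored by comparing predicted cluster ids against the gold classes (MTEB reports V-measure). Below is a minimal stdlib-only sketch of a related score, cluster purity, assuming parallel lists of gold labels and cluster assignments (illustrative only, not the benchmark's metric):

```python
from collections import Counter

def purity(gold_labels, cluster_ids):
    """Fraction of samples whose cluster's majority gold label matches their own."""
    clusters = {}
    for label, cluster in zip(gold_labels, cluster_ids):
        clusters.setdefault(cluster, []).append(label)
    majority_hits = sum(
        Counter(members).most_common(1)[0][1] for members in clusters.values()
    )
    return majority_hits / len(gold_labels)

print(purity(["novel", "novel", "crime", "crime"], [0, 0, 0, 1]))  # 0.75
```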
Have a look at the [German Text Embedding Clustering Benchmark](https://github.com/ClimSocAna/tecb-de) for more info, datasets, and evaluation results. |
tomaarsen/conll2003 | 2023-05-08T13:34:35.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-reuters-corpus",
"language:en",
"license:other",
"region:us"
] | tomaarsen | The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.
The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2
tagging scheme, whereas the original dataset uses IOB1.
For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419 | @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
} | null | 0 | 88 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: conll-2003
pretty_name: CoNLL-2003
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
- name: chunk_tags
sequence:
class_label:
names:
'0': O
'1': B-ADJP
'2': I-ADJP
'3': B-ADVP
'4': I-ADVP
'5': B-CONJP
'6': I-CONJP
'7': B-INTJ
'8': I-INTJ
'9': B-LST
'10': I-LST
'11': B-NP
'12': I-NP
'13': B-PP
'14': I-PP
'15': B-PRT
'16': I-PRT
'17': B-SBAR
'18': I-SBAR
'19': B-UCP
'20': I-UCP
'21': B-VP
'22': I-VP
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: conll2003
splits:
- name: train
num_bytes: 6931345
num_examples: 14041
- name: validation
num_bytes: 1739223
num_examples: 3250
- name: test
num_bytes: 1582054
num_examples: 3453
download_size: 982975
dataset_size: 10252622
train-eval-index:
- config: conll2003
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# Dataset Card for "conll2003"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
### Dataset Summary
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.
The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2
tagging scheme, whereas the original dataset uses IOB1.
For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### conll2003
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
An example of 'train' looks as follows.
```
{
"id": "0",
"document_id": 1,
"sentence_id": 3,
"tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."],
"pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
"ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
}
```
The original data files contain `-DOCSTART-` lines that act as boundaries between documents; these special lines are filtered out in this implementation.
### Data Fields
The data fields are the same among all splits.
#### conll2003
- `id`: a `string` feature.
- `document_id`: an `int32` feature tracking which document the sample is from.
- `sentence_id`: an `int32` feature tracking which sentence in this document the sample is from.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
'WP': 44, 'WP$': 45, 'WRB': 46}
```
- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```
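As a hedged illustration (not part of the dataset loader), the IOB2 `ner_tags` can be grouped back into entity spans with a few lines of Python:

```python
# List index == ClassLabel id, per the ner_tags mapping above.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def extract_entities(tokens, tag_ids):
    """Group IOB2-tagged tokens into (entity_text, entity_type) pairs."""
    entities, current, current_type = [], [], None
    for token, tag_id in zip(tokens, tag_ids):
        label = NER_LABELS[tag_id]
        if label.startswith("B-"):
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [token], label[2:]
        elif label.startswith("I-") and current:
            current.append(token)
        else:  # "O" (or a stray "I-" with no open span): close any open entity
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append((" ".join(current), current_type))
    return entities

tokens = ["The", "European", "Commission", "said", "German", "advice"]
tags   = [0, 3, 4, 0, 7, 0]
print(extract_entities(tokens, tags))
# [('European Commission', 'ORG'), ('German', 'MISC')]
```

This simple grouping is possible because the dataset uses the IOB2 scheme, in which every entity begins with a `B-` tag.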
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
### Citation Information
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
```
### Contributions
Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
griffin/ChemSum | 2023-06-01T17:25:14.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"chemistry",
"biology",
"medical",
"arxiv:2305.07615",
"region:us"
] | griffin | null | null | null | 5 | 88 | ---
task_categories:
- summarization
language:
- en
tags:
- chemistry
- biology
- medical
pretty_name: Generating Abstracts of Academic Chemistry Papers
size_categories:
- 100K<n<1M
---
# Dataset Card for ChemSum
## ChemSum Description
<!---- **Homepage:**
- **Leaderboard:**
----->
- **Paper:** [What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization ](https://arxiv.org/abs/2305.07615)
- **Journal:** ACL 2023
- **Point of Contact:** griffin.adams@columbia.edu
- **Repository:** https://github.com/griff4692/calibrating-summaries
### ChemSum Summary
We introduce a dataset with a pure chemistry focus by compiling a list of chemistry academic journals with Open-Access articles. For each journal, we downloaded full-text article PDFs from the Open-Access portion of the journal using available APIs, or by scraping the content with [Selenium Chrome WebDriver](https://www.selenium.dev/documentation/webdriver/).
Each PDF was processed with Grobid via a locally installed [client](https://pypi.org/project/grobid-client-python/) to extract free-text paragraphs with sections.
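The Grobid extraction step above can be sketched with the standard library: given a TEI document as produced by Grobid, pull out each section header together with its paragraphs. The TEI snippet below is a simplified toy example; real Grobid output contains much richer markup.

```python
# Hedged sketch: extract (header, paragraphs) pairs from a Grobid-style TEI
# document. The TEI structure shown here is a minimal assumption; consult the
# Grobid documentation for the full output schema.
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"
tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <div><head>Introduction</head><p>First paragraph.</p></div>
    <div><head>Methods</head><p>Second paragraph.</p></div>
  </body></text>
</TEI>"""

root = ET.fromstring(tei)
sections = []
for div in root.iter(f"{TEI_NS}div"):
    head = div.find(f"{TEI_NS}head")
    paras = [p.text for p in div.findall(f"{TEI_NS}p")]
    sections.append((head.text if head is not None else "", " ".join(paras)))
```

After running this on the toy input, `sections` holds one `(header, text)` tuple per `<div>`.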
The table below shows the journals from which Open Access articles were sourced, as well as the number of papers processed.
For all journals, we filtered for papers with the provided topic of Chemistry when papers from other disciplines were also available (e.g. PubMed).
| Source | # of Articles |
| ----------- | ----------- |
| Beilstein | 1,829 |
| Chem Cell | 546 |
| ChemRxiv | 12,231 |
| Chemistry Open | 398 |
| Nature Communications Chemistry | 572 |
| PubMed Author Manuscript | 57,680 |
| PubMed Open Access | 29,540 |
| Royal Society of Chemistry (RSC) | 9,334 |
| Scientific Reports - Nature | 6,826 |
<!---
### Supported Tasks and Leaderboards
[More Information Needed]
--->
### Languages
English
## Dataset Structure
<!--- ### Data Instances --->
### Data Fields
| Column | Description |
| ----------- | ----------- |
| `uuid` | Unique Identifier for the Example |
| `title` | Title of the Article |
| `article_source` | Open Source Journal (see above for list) |
| `abstract` | Abstract (summary reference) |
| `sections` | Full-text sections from the main body of the paper (`<!>` indicates section boundaries) |
| `headers` | Corresponding section headers for the `sections` field (`<!>`-delimited) |
| `source_toks` | Aggregate number of tokens across `sections` |
| `target_toks` | Number of tokens in the `abstract` |
| `compression` | Ratio of `source_toks` to `target_toks` |
Please refer to `load_chemistry()` in https://github.com/griff4692/calibrating-summaries/blob/master/preprocess/preprocess.py for pre-processing as a summarization dataset. The inputs are `sections` and `headers`, and the target is the `abstract`.
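Turning one record into a `(source, target)` pair can be sketched as follows. The `<!>` delimiter comes from the field descriptions above; the exact join format used by `load_chemistry()` in the linked repository may differ.

```python
# Hedged sketch: assemble a summarization example from one ChemSum record.
# The record below is a toy stand-in for a real dataset row.
record = {
    "headers": "Introduction<!>Results",
    "sections": "We study chemistry.<!>It worked.",
    "abstract": "A short summary.",
}

headers = record["headers"].split("<!>")
sections = record["sections"].split("<!>")

# Interleave each header with its section text to form the model input.
source = "\n\n".join(f"{h}\n{s}" for h, s in zip(headers, sections))
target = record["abstract"]
```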
### Data Splits
| Split | Count |
| ----------- | ----------- |
| `train` | 115,956 |
| `validation` | 1,000 |
| `test` | 2,000 |
### Citation Information
```
@article{adams2023desired,
title={What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization},
author={Adams, Griffin and Nguyen, Bichlien H and Smith, Jake and Xia, Yingce and Xie, Shufang and Ostropolets, Anna and Deb, Budhaditya and Chen, Yuan-Jyue and Naumann, Tristan and Elhadad, No{\'e}mie},
journal={arXiv preprint arXiv:2305.07615},
year={2023}
}
```
<!---
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Contributions
[More Information Needed]
--->
|
kunishou/hh-rlhf-49k-ja | 2023-05-19T04:36:37.000Z | [
"license:mit",
"region:us"
] | kunishou | null | null | null | 14 | 88 | ---
license: mit
---
This dataset was created by automatically translating part of "Anthropic/hh-rlhf" into Japanese.
This dataset is also included in "mosaicml/dolly_hhrlhf".
The `ng_translation` flag indicates whether the automatic translation failed: a value of `1` means the translation was unsuccessful.
For such rows, `instruction` and `instruction_en` contain the same text.
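Filtering out the failed translations can be sketched as below. The field names follow the description above; the actual dataset schema may differ slightly.

```python
# Hedged sketch: drop rows where the automatic translation failed
# (ng_translation == "1"). Toy rows stand in for real dataset records.
rows = [
    {"instruction": "日本語の質問", "instruction_en": "Question...", "ng_translation": "0"},
    {"instruction": "Same text", "instruction_en": "Same text", "ng_translation": "1"},
]

clean = [r for r in rows if r.get("ng_translation") != "1"]
# `clean` keeps only rows whose translation succeeded.
```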
- hh-rlhf repository: https://github.com/anthropics/hh-rlhf
- Anthropic/hh-rlhf: https://huggingface.co/datasets/Anthropic/hh-rlhf
- mosaicml/dolly_hhrlhf: https://huggingface.co/datasets/mosaicml/dolly_hhrlhf |