id | author | last_modified | downloads | likes | paperswithcode_id | tags | lastModified | createdAt | created | card
|---|---|---|---|---|---|---|---|---|---|---|
BAAI/COIG | BAAI | 2023-07-12T15:38:35Z | 97 | 344 | null | [
"language:zh",
"license:apache-2.0",
"arxiv:2204.07705",
"arxiv:2212.10560",
"arxiv:2212.09689",
"arxiv:2304.07987",
"region:us"
] | 2023-07-12T15:38:35Z | 2023-04-16T11:09:32.000Z | 2023-04-16T11:09:32 | ---
license: apache-2.0
arxiv: 2304.07987
language:
- zh
---
# This is the Chinese Open Instruction Generalist project
We propose the Chinese Open Instruction Generalist (**COIG**) project to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. We welcome all researchers in the community to contribute to the corpus set and collaborate with us. We release only the first chip of COIG to help the development of Chinese LLMs in the exploration stage, and we appeal to more researchers to join us in building COIG. We introduce a manually verified translated general instruction corpus, a manually annotated exam instruction corpus, a human value alignment instruction corpus, a multi-round counterfactual correction chat corpus, and a LeetCode instruction corpus. We provide these new instruction corpora to assist the community with instruction tuning on Chinese LLMs. These instruction corpora also serve as template workflows showing how new Chinese instruction corpora can be built and expanded effectively.
It is best to directly download the individual data files you wish to use instead of using HF `load_dataset`. All data files can be downloaded from: https://huggingface.co/datasets/BAAI/COIG/tree/main
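For example, a single file can be fetched with `huggingface_hub` (a minimal sketch; `exam_instructions.jsonl` is one of the files named in the update notes below):
```python
import json
from huggingface_hub import hf_hub_download

# Download one data file from the dataset repository instead of the full dataset.
path = hf_hub_download(
    repo_id="BAAI/COIG",
    filename="exam_instructions.jsonl",
    repo_type="dataset",
)

# Each line of the .jsonl file is one JSON record.
with open(path, encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
```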
This dataset card is modified from [OIG](https://huggingface.co/datasets/laion/OIG).
### Translated Instructions (66,858)
There are 66,858 instructions in total, composed of 1,616 task descriptions in [Super-NaturalInstructions](https://arxiv.org/abs/2204.07705) along with a single instance for each of them, 175 seed tasks in [Self-Instruct](https://arxiv.org/abs/2212.10560), and 66,007 instructions from [Unnatural Instructions](https://arxiv.org/abs/2212.09689). To reduce the cost and further improve the quality of the instruction corpus, we separate the translation procedure into three phases: automatic translation, manual verification, and manual correction. These strict quality verification procedures ensure the reliability of the translated corpus.
### Exam Instructions (63,532)
The Chinese National College Entrance Examination, Middle School Entrance Examinations, and Civil Servant Examination are the main Chinese commonsense tests. These exams contain various question formats and detailed analyses that can be used as a Chain-of-Thought (**CoT**) corpus. We extract six informative elements from the original exam questions: instruction, question context, question, answer, answer analysis, and coarse-grained subject. There are six main coarse-grained subjects: Chinese, English, Politics, Biology, History, and Geology. There are very few Math, Physics, and Chemistry questions in the corpus because these questions often contain complex symbols that are hard to annotate. For multiple-choice questions, we recommend that researchers further post-process this corpus using prompts, or convert it into fill-in-the-blank questions, to further increase the diversity of the instructions.
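Put together, a record plausibly carries the six elements above; the sketch below is hypothetical, and the exact field names are assumptions rather than the published schema:
```python
# Hypothetical shape of one exam-instruction record (names are illustrative):
exam_record = {
    "instruction": "Read the question and choose the correct answer.",
    "question_context": "<passage or background for the question>",
    "question": "<the exam question itself>",
    "answer": "B",
    "answer_analysis": "<step-by-step explanation, usable as CoT data>",
    "coarse_grained_subject": "History",
}
```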
### Human Value Alignment Instructions (34,471)
To respect and reflect the major differences caused by different cultural backgrounds, and unlike other tasks in COIG that leverage one unified collection of instruction-following samples, we categorize the value alignment data into two separate series:
- A set of samples that present shared human values in the Chinese-speaking world. In total, we choose 50 instructions as the augmentation seeds and produce 3k resulting instruction-following samples for general-purpose value alignment in the Chinese-speaking world.
- Some additional sets of samples that present regional-culture or country-specific human values.
### Counterfactual Correction Multi-round Chat (13,653)
The Counterfactual Correction Multi-round Chat dataset (CCMC) is constructed based on the [CN-DBpedia knowledge graph dataset](https://link.springer.com/chapter/10.1007/978-3-319-60045-1_44) with the aim of alleviating and resolving the pain points of hallucination and factual inconsistency in current LLMs. The CCMC dataset includes 5 rounds of role-playing chat between a student and a teacher, and the corresponding knowledge they refer to. The dataset contains ~13,000 dialogues with an average of 5 rounds per dialogue, resulting in ~65,000 rounds of chat.
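The structure described above suggests records of roughly the following shape; this is a hypothetical sketch, and all field names are assumptions:
```python
# Hypothetical shape of one CCMC dialogue (field names are illustrative):
dialogue = {
    "knowledge": "<CN-DBpedia facts the conversation refers to>",
    "rounds": [
        # ~5 rounds of student-teacher chat per dialogue on average
        {"student": "<question, possibly counterfactual>",
         "teacher": "<correction grounded in the knowledge>"},
    ],
}
```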
### Leetcode Instructions (11,737)
Given that code-related tasks potentially contribute to the emergence of abilities in LLMs, we argue that code-related tasks aligned with Chinese natural language should be considered in our datasets. Therefore, we build the LeetCode instructions from a **CC-BY-SA-4.0**-licensed [collection](https://github.com/doocs/leetcode) of 2,589 programming questions. The questions contain problem descriptions, solutions in multiple programming languages, and explanations (834 questions do not have explanations).
## Support this project
Your contributions and feedback support the open-source ecosystem, improve the bot, and provide datasets for future AI research. To participate, you can:
Submit GitHub issues, track issues, and help create datasets that need improvement: https://github.com/BAAI-Zlab/COIG
## Update: May 27, 2023
- v0.3: Update counterfactural_correction_multi_round_chat.tar.gz and make sure all round responses can be decoded as json.
- v0.2: Update exam_instructions.jsonl, translated_instructions.jsonl and human_value_alignment_instructions_part2.json.
- v0.1: Release the five datasets of COIG.
## Disclaimer
These datasets contain synthetic data and, in some cases, data that includes humans trying to get the language model to say toxic/offensive/trolling things. If you are concerned about the presence of this type of material in the dataset, please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible, and we are actively evaluating ways to reduce or eliminate undesirable content from the instruction-tuning datasets.
## License
The COIG dataset authored by BAAI is released under an Apache 2.0 license. However, the data also includes content under other permissive licenses, such as the Unnatural Instructions data, which is licensed under the MIT License, and web-crawled data, which is used under fair-use principles.
## BibTeX & Citation
```
@misc{zhang2023chinese,
title={Chinese Open Instruction Generalist: A Preliminary Release},
author={Ge Zhang and Yemin Shi and Ruibo Liu and Ruibin Yuan and Yizhi Li and Siwei Dong and Yu Shu and Zhaoqun Li and Zekun Wang and Chenghua Lin and Wenhao Huang and Jie Fu},
year={2023},
eprint={2304.07987},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
junelee/remon_without_nsfw | junelee | 2023-06-04T13:57:20Z | 97 | 8 | null | [
"region:us"
] | 2023-06-04T13:57:20Z | 2023-06-04T13:56:35.000Z | 2023-06-04T13:56:35 | Entry not found |
takaaki-inada/databricks-dolly-15k-ja-zundamon | takaaki-inada | 2023-06-17T10:41:52Z | 97 | 3 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | 2023-06-17T10:41:52Z | 2023-06-17T10:35:48.000Z | 2023-06-17T10:35:48 | ---
license: cc-by-sa-3.0
---
This dataset is based on "kunishou/databricks-dolly-15k-ja".
This dataset is licensed under CC BY-SA 3.0.
Last Update : 2023-05-11
databricks-dolly-15k-ja
https://github.com/kunishou/databricks-dolly-15k-ja
databricks-dolly-15k
https://github.com/databrickslabs/dolly/tree/master/data
yxchng/cc15m_yfcc15m | yxchng | 2023-06-27T01:54:21Z | 97 | 0 | null | [
"region:us"
] | 2023-06-27T01:54:21Z | 2023-06-26T07:52:11.000Z | 2023-06-26T07:52:11 | Entry not found |
Amani27/massive_translation_dataset | Amani27 | 2023-07-25T14:54:44Z | 97 | 3 | null | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:de",
"language:es",
"language:hi",
"language:fr",
"language:it",
"language:ar",
"language:nl",
"language:ja",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | 2023-07-25T14:54:44Z | 2023-07-20T16:09:42.000Z | 2023-07-20T16:09:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: "train.csv"
- split: validation
path: "validation.csv"
- split: test
path: "test.csv"
license: cc-by-4.0
task_categories:
- translation
language:
- en
- de
- es
- hi
- fr
- it
- ar
- nl
- ja
- pt
size_categories:
- 10K<n<100K
---
# Dataset Card for Massive Dataset for Translation
### Dataset Summary
This dataset is derived from the AmazonScience/MASSIVE dataset for translation tasks.
### Supported Tasks and Leaderboards
Translation
### Languages
1. English (en_US)
2. German (de_DE)
3. Hindi (hi_IN)
4. Spanish (es_ES)
5. French (fr_FR)
6. Italian (it_IT)
7. Arabic (ar_SA)
8. Dutch (nl_NL)
9. Japanese (ja_JP)
10. Portuguese (pt_PT)
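A minimal loading sketch (the split names come from the YAML header above; the column layout is an assumption — inspect one example to see it):
```python
from datasets import load_dataset

# Splits declared in the YAML header: train / validation / test.
dataset = load_dataset("Amani27/massive_translation_dataset", split="train")
print(dataset[0])  # inspect the available columns for one example
```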
NischayDnk/bertvsllm_demodatav2 | NischayDnk | 2023-07-23T19:40:44Z | 97 | 0 | null | [
"region:us"
] | 2023-07-23T19:40:44Z | 2023-07-23T19:40:42.000Z | 2023-07-23T19:40:42 | Entry not found |
PL-MTEB/sickr-pl-sts | PL-MTEB | 2023-08-10T13:16:52Z | 97 | 0 | null | [
"license:cc-by-nc-sa-3.0",
"region:us"
] | 2023-08-10T13:16:52Z | 2023-08-10T13:16:20.000Z | 2023-08-10T13:16:20 | ---
license: cc-by-nc-sa-3.0
---
PL-MTEB/cdscr-sts | PL-MTEB | 2023-08-11T11:53:53Z | 97 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-08-11T11:53:53Z | 2023-08-11T11:53:23.000Z | 2023-08-11T11:53:23 | ---
license: cc-by-nc-sa-4.0
---
cmaldona/Generalization-MultiClass-CLINC150-ROSTD | cmaldona | 2023-09-05T22:11:52Z | 97 | 0 | null | [
"task_categories:text-classification",
"language:en",
"license:openrail",
"region:us"
] | 2023-09-05T22:11:52Z | 2023-09-05T21:35:36.000Z | 2023-09-05T21:35:36 | ---
name: generalization-test
version: 1.0.0
description: Merge between 3 datasets.
configs:
- config_name: clinc150
default: true
data_files:
- split: train
path: "train_clinc150.csv"
- split: validation
path: "validation_clinc150.csv"
- split: test
path: "test_clinc150.csv"
- config_name: rostd+
data_files:
- split: train
path: "train_rostd+.csv"
- split: validation
path: "val_rostd+.csv"
- split: test
path: "test_rostd+.csv"
license: openrail
task_categories:
- text-classification
language:
- en
---
This dataset merges 3 datasets and provides two setups for experiments in generalization for a multi-class classification task.
* ID, near-OOD, covariate-shift: [CLINC150](https://github.com/clinc/oos-eval)
* ID, near-OOD, covariate-shift: [ROSTD+OOD](https://github.com/vgtomahawk/LR_GC_OOD) (fbreleasecoarse version)
* far-OOD: [News Category](https://www.kaggle.com/datasets/rmisra/news-category-dataset?resource=download) (v3) |
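Both configurations declared in the YAML header can be loaded by name (a minimal sketch):
```python
from datasets import load_dataset

# Config names come from the YAML header above: "clinc150" (default) and "rostd+".
clinc = load_dataset("cmaldona/Generalization-MultiClass-CLINC150-ROSTD", "clinc150")
rostd = load_dataset("cmaldona/Generalization-MultiClass-CLINC150-ROSTD", "rostd+")
print(clinc)  # DatasetDict with train / validation / test splits
```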
SEACrowd/local_id_abusive | SEACrowd | 2023-09-26T12:30:53Z | 97 | 0 | null | [
"language:jav",
"language:sun",
"license:unknown",
"aspect-based-sentiment-analysis",
"region:us"
] | 2023-09-26T12:30:53Z | 2023-09-26T11:15:02.000Z | 2023-09-26T11:15:02 | ---
license: unknown
tags:
- aspect-based-sentiment-analysis
language:
- jav
- sun
---
# local_id_abusive
This dataset is for abusive and hate speech detection, using Twitter text containing Javanese and Sundanese words.
(from the publication source)
The Indonesian local language dataset collection was conducted using the Twitter search API to collect the tweets, implemented with the Tweepy library. The tweets were collected using queries from a list of abusive words in Indonesian tweets. The abusive words were translated into local Indonesian languages, namely Javanese and Sundanese. The translated words were then used as queries to collect tweets containing Indonesian and local languages. The translation process involved native speakers for each local language. The crawling process collected a total of more than 5000 tweets. Then, the crawled data were filtered to keep tweets that contain local vocabulary and/or sentences in Javanese and Sundanese. Finally, after the filtering process, the data were labeled according to whether or not each tweet contains hate speech and abusive language.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
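A minimal sketch of that loading step (passing `trust_remote_code=True` is an assumption for datasets that ship a loading script):
```python
from datasets import load_dataset

# The card asks for `pip install nusacrowd` first; the dataset itself is then
# loaded through the standard Hugging Face API.
dataset = load_dataset("SEACrowd/local_id_abusive", trust_remote_code=True)
print(dataset)
```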
## Citation
```
@inproceedings{putri2021abusive,
title={Abusive language and hate speech detection for Javanese and Sundanese languages in tweets: Dataset and preliminary study},
author={Putri, Shofianina Dwi Ananda and Ibrohim, Muhammad Okky and Budi, Indra},
booktitle={2021 11th International Workshop on Computer Science and Engineering, WCSE 2021},
pages={461--465},
year={2021},
organization={International Workshop on Computer Science and Engineering (WCSE)},
abstract={Indonesia’s demography as an archipelago with lots of tribes and local languages added variances in their communication style. Every region in Indonesia has its own distinct culture, accents, and languages. The demographical condition can influence the characteristic of the language used in social media, such as Twitter. It can be found that Indonesian uses their own local language for communicating and expressing their mind in tweets. Nowadays, research about identifying hate speech and abusive language has become an attractive and developing topic. Moreover, the research related to Indonesian local languages still rarely encountered. This paper analyzes the use of machine learning approaches such as Naïve Bayes (NB), Support Vector Machine (SVM), and Random Forest Decision Tree (RFDT) in detecting hate speech and abusive language in Sundanese and Javanese as Indonesian local languages. The classifiers were used with the several term weightings features, such as word n-grams and char n-grams. The experiments are evaluated using the F-measure. It achieves over 60 % for both local languages.}
}
```
## License
Unknown
## Homepage
[https://github.com/Shofianina/local-indonesian-abusive-hate-speech-dataset](https://github.com/Shofianina/local-indonesian-abusive-hate-speech-dataset)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
SEACrowd/facqa | SEACrowd | 2023-09-26T12:33:40Z | 97 | 0 | null | [
"language:ind",
"question-answering",
"region:us"
] | 2023-09-26T12:33:40Z | 2023-09-26T11:18:01.000Z | 2023-09-26T11:18:01 | ---
tags:
- question-answering
language:
- ind
---
# facqa
FacQA: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article.
Each row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the
corresponding short passage. There are six categories of questions: date, location, name,
organization, person, and quantitative.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{purwarianti2007machine,
title={A Machine Learning Approach for Indonesian Question Answering System},
  author={Purwarianti, Ayu and Tsuchiya, Masatoshi and Nakagawa, Seiichi},
  booktitle={Proceedings of Artificial Intelligence and Applications},
pages={573--578},
year={2007}
}
```
## License
CC-BY-SA 4.0
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
flytech/llama-python-codes-30k | flytech | 2023-11-05T16:39:12Z | 97 | 9 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:llama2",
"code",
"python",
"instruct",
"llama",
"flytech",
"region:us"
] | 2023-11-05T16:39:12Z | 2023-10-08T16:10:50.000Z | 2023-10-08T16:10:50 | ---
author: FlyTech
license: llama2
task_categories:
- question-answering
- text-generation
- text2text-generation
language:
- en
tags:
- code
- python
- instruct
- llama
- flytech
pretty_name: Llama1/2 Python Codes 30k Tokenized
size_categories:
- 10M<n<100M
---
### <span style="color:#3560B0; font-weight: bold;">Python Codes - 30k examples, Llama1&2 tokenized dataset</span>



### <span style="color:#3560B0; font-weight: bold;">Author</span>
**<span style="color:#266090;">FlyTech</span>**
<span style="color:#3560B0"></br>For a general guide on how to create, quantize, merge, or run inference with the model, and more, visit:</span>
<a href="https://hackmd.io/@swearek/rJYVR_-7a" target="_blank">hackmd.io/my_first_ai</a>
### <span style="color:#3560B0; font-weight: bold;">Overview</span>
<span style="color:#266090">This dataset serves as a rich resource for various Natural Language Processing tasks such as:</span>
- <span style="color:#E91E63;">Question Answering</span>
- <span style="color:#8BC34A;">Text Generation</span>
- <span style="color:#FFC107;">Text-to-Text Generation</span>
<b><span style="color:#266090">It primarily focuses on instructional tasks in Python, tokenized specifically for the Llama architecture.
The dataset is a blend of GPT-4-generated content, custom code, behavioral approaches, and tasks extending beyond Python.</span></b>
<hr style="height:1px;border:none;color:#333;background-color:#136;" />
### <span style="color:#A45356; font-weight: bold;">IMPORTANT!</span>
<b><span style="color:#A8A8C9; background-color: #153055">
The llama-python-codes-30k dataset is not cleaned.
It has a very low number of unique input entries.</br>
For the fully cleaned version of the dataset, detokenized and with filtered-out input entries,
please refer to this link:
</span></b>
<a href="https://huggingface.co/datasets/flytech/python-codes-25k" style="color:#356090">flytech/python-codes-25k</a>
<hr style="height:1px;border:none;color:#333;background-color:#136;" />
### <span style="color:#3560B0; font-weight: bold;">Dataset Metrics</span>
**<span style="color:#3560B0;">Token Count (via LlamaTokenizer)</span>**
- **<span style="color:#4CAF50;">Maximum</span>: 508**
- **<span style="color:#2196F3;">Average</span>: 158.06**
- **<span style="color:#F44336;">Total</span>: 13,993,984**
**<span style="color:#006688;">Word Count</span>: 1,890,810**
**<span style="color:#006688;">Number of Examples</span>: 27,331**
### <b><span style="color:#3560B0; font-weight: bold;">Usage</span></b>
```python
from datasets import load_dataset
dataset = load_dataset('flytech/llama-python-codes-30k', split='train')
# One can map the dataset in any way; for the sake of example, concatenate
# the three fields into a single string per example. Note that indexing the
# mapped dataset with ['text'] returns a plain Python list of strings.
dataset = dataset.map(lambda example: {'text': example['instruction'] + ' ' + example['input'] + ' ' + example['output']})['text']
```
### <span style="color:#607D8B; font-weight: bold;">License</span>
This dataset is under the `llama2` license.
<hr style="height:1px;border:none;color:#333;background-color:#136;" />
### CONTRIBUTIONS
```python
# All contributions to the repository are welcome.
# Feel free to use the dataset for the Llama models,
# or visit:
```
<a href="https://huggingface.co/datasets/flytech/python-codes-25k" style="color:#356090">flytech/python-codes-25k</a>
```python
# To preprocess and tokenize the dataset as per your model requirements!
```
### <span style="color:#266090; font-weight: bold;">Tags</span>
- `code`
- `python`
- `instruct`
- `flytech` |
kyujinpy/OpenOrca-ko-v3 | kyujinpy | 2023-11-01T14:21:06Z | 97 | 1 | null | [
"license:cc-by-nc-4.0",
"arxiv:2306.02707",
"arxiv:2301.13688",
"region:us"
] | 2023-11-01T14:21:06Z | 2023-11-01T14:19:51.000Z | 2023-11-01T14:19:51 | ---
license: cc-by-nc-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 41612250
num_examples: 19473
download_size: 21614684
dataset_size: 41612250
---
## OpenOrca-Ko-v3
1. NIV // approx. 1,500 examples
2. FLAN // approx. 9,000 examples
3. T0 // approx. 6,000 examples
4. CoT // approx. 2,000 examples
> Dataset composition
## Translation
Translated using the DeepL Pro API. Thanks.
---
>Below is the original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
<p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p>

<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
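For example, a streaming load (a minimal sketch with the standard `datasets` API; the repository id is taken from the citation below):
```python
from datasets import load_dataset

# Streaming avoids downloading the full multi-million-row dataset up front.
dataset = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)
for example in dataset.take(3):
    print(example["question"], "->", example["response"][:80])
```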
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
      eprint={2307.09288},
      archivePrefix={arXiv}
}
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
lmqg/qg_ruquad | lmqg | 2022-12-02T18:55:01Z | 96 | 2 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:deepset/germanquad",
"language:ru",
"license:cc-by-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-02T18:55:01Z | 2022-06-02T23:44:54.000Z | 2022-06-02T23:44:54 | ---
license: cc-by-4.0
pretty_name: SberQuAD for question generation
language: ru
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: sberquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_ruquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [SberQuAD](https://huggingface.co/datasets/sberquad) for the question generation (QG) task.
Since the original dataset only contains training/validation sets, we manually sample a test set from the training set, which
has no overlap with the training set in terms of paragraphs.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Russian (ru)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'известковыми выделениями сине-зелёных водорослей',
'question': 'чем представлены органические остатки?',
'sentence': 'Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных.'
'paragraph': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены..."
'sentence_answer': "Они представлены <hl> известковыми выделениями сине-зелёных водорослей <hl> , ход...",
'paragraph_answer': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены <hl> известковыми выделениям...",
'paragraph_sentence': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. <hl> Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных. <hl> Кроме..."
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is the same as the paragraph but with the answer highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is the same as the paragraph but with the sentence containing the answer highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is the same as the sentence but with the answer highlighted by a special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is assumed to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, and the
`paragraph_sentence` feature is for sentence-aware question generation.
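A minimal sketch of pairing model input and output for answer-aware QG, using the fields documented above:
```python
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_ruquad", split="train")

example = dataset[0]
# Answer-aware QG: the paragraph with the answer highlighted by <hl> is the
# model input, and the question is the generation target.
model_input = example["paragraph_answer"]
target = example["question"]
```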
## Data Splits
|train|validation|test |
|----:|---------:|----:|
| 45327 | 5036 |23936 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
bigbio/genia_term_corpus | bigbio | 2022-12-22T15:44:41Z | 96 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:44:41Z | 2022-11-13T22:08:43.000Z | 2022-11-13T22:08:43 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: GENIA Term Corpus
homepage: http://www.geniaproject.org/genia-corpus/term-corpus
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for GENIA Term Corpus
## Dataset Description
- **Homepage:** http://www.geniaproject.org/genia-corpus/term-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The identification of linguistic expressions referring to entities of interest in molecular biology such as proteins,
genes and cells is a fundamental task in biomolecular text mining. The GENIA technical term annotation covers the
identification of physical biological entities as well as other important terms. The corpus annotation covers the full
1,999 abstracts of the primary GENIA corpus.
## Citation Information
```
@inproceedings{10.5555/1289189.1289260,
author = {Ohta, Tomoko and Tateisi, Yuka and Kim, Jin-Dong},
title = {The GENIA Corpus: An Annotated Research Abstract Corpus in Molecular Biology Domain},
year = {2002},
publisher = {Morgan Kaufmann Publishers Inc.},
address = {San Francisco, CA, USA},
booktitle = {Proceedings of the Second International Conference on Human Language Technology Research},
pages = {82–86},
numpages = {5},
location = {San Diego, California},
series = {HLT '02}
}
@article{Kim2003GENIAC,
title={GENIA corpus - a semantically annotated corpus for bio-textmining},
author={Jin-Dong Kim and Tomoko Ohta and Yuka Tateisi and Junichi Tsujii},
journal={Bioinformatics},
year={2003},
volume={19 Suppl 1},
  pages={i180-2}
}
@inproceedings{10.5555/1567594.1567610,
author = {Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel},
title = {Introduction to the Bio-Entity Recognition Task at JNLPBA},
year = {2004},
publisher = {Association for Computational Linguistics},
address = {USA},
booktitle = {Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its
Applications},
pages = {70–75},
numpages = {6},
location = {Geneva, Switzerland},
series = {JNLPBA '04}
}
```
neuclir/neuclir1 | neuclir | 2023-01-12T18:43:52Z | 96 | 1 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:extended|c4",
"language:fa",
"language:ru",
"language:zh",
"license:odc-by",
"region:us... | 2023-01-12T18:43:52Z | 2023-01-11T21:08:24.000Z | 2023-01-11T21:08:24 | ---
annotations_creators:
- no-annotation
language:
- fa
- ru
- zh
language_creators:
- found
license:
- odc-by
multilinguality:
- multilingual
pretty_name: NeuCLIR1
size_categories:
- 1M<n<10M
source_datasets:
- extended|c4
tags: []
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for NeuCLIR1
## Dataset Description
- **Website:** https://neuclir.github.io/
- **Repository:** https://github.com/NeuCLIR/download-collection
### Dataset Summary
This is the dataset created for the TREC 2022 NeuCLIR Track. The collection is designed to be similar to HC4, and a large portion of the documents from HC4 are ported to this collection.
The documents are Web pages from Common Crawl in Chinese, Persian, and Russian.
### Languages
- Chinese
- Persian
- Russian
## Dataset Structure
### Data Instances
| Split | Documents |
|-----------------|----------:|
| `fas` (Persian) | 2.2M |
| `rus` (Russian) | 4.6M |
| `zho` (Chinese) | 3.2M |
### Data Fields
- `id`: unique identifier for this document
- `cc_file`: source file from Common Crawl
- `time`: extracted date/time from article
- `title`: title extracted from article
- `text`: extracted article body
- `url`: source URL
## Dataset Usage
Using 🤗 Datasets:
```python
from datasets import load_dataset
dataset = load_dataset('neuclir/neuclir1')
dataset['fas'] # Persian documents
dataset['rus'] # Russian documents
dataset['zho'] # Chinese documents
```
Ssunbell/boostcamp-docvqa-v5 | Ssunbell | 2023-02-05T03:01:47Z | 96 | 1 | null | [
"region:us"
] | 2023-02-05T03:01:47Z | 2023-02-05T02:50:57.000Z | 2023-02-05T02:50:57 | ---
dataset_info:
features:
- name: questionId
dtype: int64
- name: question
dtype: string
- name: image
sequence:
sequence:
sequence: uint8
- name: docId
dtype: int64
- name: ucsf_document_id
dtype: string
- name: ucsf_document_page_no
dtype: string
- name: answers
sequence: string
- name: data_split
dtype: string
- name: words
sequence: string
- name: boxes
sequence:
sequence: int64
splits:
- name: train
num_bytes: 6381793673
num_examples: 39454
- name: val
num_bytes: 869361798
num_examples: 5349
download_size: 2578655464
dataset_size: 7251155471
---
# Dataset Card for "boostcamp-docvqa-v5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
c-s-ale/Product-Descriptions-and-Ads | c-s-ale | 2023-03-31T04:39:12Z | 96 | 9 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:openrail",
"art",
"region:us"
] | 2023-03-31T04:39:12Z | 2023-03-31T02:19:06.000Z | 2023-03-31T02:19:06 | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 27511.2
num_examples: 90
- name: test
num_bytes: 3056.8
num_examples: 10
download_size: 24914
dataset_size: 30568
license: openrail
task_categories:
- text-generation
language:
- en
tags:
- art
pretty_name: Product Descriptions and Ads
size_categories:
- n<1K
---
# Synthetic Dataset for Product Descriptions and Ads
The basic process was as follows:
1. Prompt GPT-4 to create a list of 100 sample clothing items and descriptions for those items.
2. Split the output into the desired format: `{"product" : "<PRODUCT NAME>", "description" : "<DESCRIPTION>"}`
3. Prompt GPT-4 to create adverts for each of the 100 samples based on their name and description.
This data was not cleaned or verified manually. |
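A resulting record carries the three fields declared in the YAML header (`product`, `description`, `ad`); the values below are illustrative, not taken from the dataset:
```python
# Hypothetical record (values are illustrative):
record = {
    "product": "Classic Denim Jacket",
    "description": "A timeless denim jacket with button-front closure and two chest pockets.",
    "ad": "Layer up in style! Our Classic Denim Jacket goes with everything in your closet.",
}
```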
pythainlp/final_training_set_v1_enth | pythainlp | 2023-04-29T07:05:42Z | 96 | 1 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"language:th",
"language:en",
"region:us"
] | 2023-04-29T07:05:42Z | 2023-04-22T08:56:14.000Z | 2023-04-22T08:56:14 | ---
dataset_info:
features:
- name: text
dtype: string
- name: nb_token
dtype: int64
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 665379914.0331497
num_examples: 379520
- name: test
num_bytes: 899398.9668502472
num_examples: 513
download_size: 258632318
dataset_size: 666279313
task_categories:
- text-generation
- conversational
language:
- th
- en
---
# Dataset Card for "final_training_set_v1_en_th"
Finetuning datasets for [WangChanGLM](https://github.com/pythainlp/wangchanglm) sourced from [LAION OIG chip2 and infill_dbpedia](https://huggingface.co/datasets/laion/OIG) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [DataBricks Dolly v2](https://github.com/databrickslabs/dolly) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [OpenAI TL;DR](https://github.com/openai/summarize-from-feedback) ([MIT](https://opensource.org/license/mit/)), and [Hello-SimpleAI HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) ([CC-BY SA](https://creativecommons.org/licenses/by-sa/4.0/)).
The dataset was translated using the Google Translate API by [Thu Ya Kyaw](https://github.com/iamthuya). |
mattymchen/celeba-hq | mattymchen | 2023-04-26T05:56:53Z | 96 | 1 | null | [
"region:us"
] | 2023-04-26T05:56:53Z | 2023-04-26T05:15:42.000Z | 2023-04-26T05:15:42 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': female
'1': male
splits:
- name: train
num_bytes: 2731627350.0
num_examples: 28000
- name: validation
num_bytes: 197550788.0
num_examples: 2000
download_size: 2762109745
dataset_size: 2929178138.0
---
# Dataset Card for "celeba-hq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
christinacdl/multiclass_depression | christinacdl | 2023-06-03T16:31:59Z | 96 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-06-03T16:31:59Z | 2023-06-03T15:47:34.000Z | 2023-06-03T15:47:34 | ---
license: apache-2.0
---
DataProvenanceInitiative/niv2_submix_original | DataProvenanceInitiative | 2023-10-16T17:35:49Z | 96 | 0 | null | [
"region:us"
] | 2023-10-16T17:35:49Z | 2023-10-16T17:32:45.000Z | 2023-10-16T17:32:45 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 13104211362
num_examples: 10066896
download_size: 7612945130
dataset_size: 13104211362
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "niv2_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gaia-benchmark/GAIA | gaia-benchmark | 2023-11-23T11:26:23Z | 96 | 50 | null | [
"arxiv:2311.12983",
"region:us"
] | 2023-11-23T11:26:23Z | 2023-10-20T07:06:54.000Z | 2023-10-20T07:06:54 | # GAIA dataset
GAIA is a benchmark which aims at evaluating next-generation LLMs (LLMs with augmented capabilities due to added tooling, efficient prompting, access to search, etc.).
## Data
GAIA is made of more than 450 non-trivial questions with unambiguous answers, requiring different levels of tooling and autonomy to solve. It is therefore divided into 3 levels, where level 1 should be breakable by very good LLMs and level 3 indicates a strong jump in model capabilities. Each level is divided into a fully public dev set for validation, and a test set with private answers and metadata.
The GAIA leaderboard can be found in [this space](https://huggingface.co/spaces/gaia-benchmark/leaderboard).
Questions are contained in `metadata.jsonl`. Some questions come with an additional file, which can be found in the same folder and whose id is given in the field `file_name`.
More details in [the paper](https://arxiv.org/abs/2311.12983) for now, and soon here as well. |
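A minimal sketch of reading the question file (assumes a local copy of one split folder; the path is hypothetical, and `file_name` is the only field name taken from the text above):
```python
import json
from pathlib import Path

split_dir = Path("validation")  # hypothetical local copy of one split folder
with open(split_dir / "metadata.jsonl", encoding="utf-8") as f:
    questions = [json.loads(line) for line in f]

for q in questions:
    # `file_name` points at an optional attachment in the same folder.
    if q.get("file_name"):
        attachment_path = split_dir / q["file_name"]
```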
CJWeiss/inabs | CJWeiss | 2023-10-26T20:42:33Z | 96 | 0 | null | [
"region:us"
] | 2023-10-26T20:42:33Z | 2023-10-26T20:42:23.000Z | 2023-10-26T20:42:23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 159441006
num_examples: 5346
- name: test
num_bytes: 32277886
num_examples: 1069
- name: valid
num_bytes: 21628228
num_examples: 713
download_size: 103927432
dataset_size: 213347120
---
# Dataset Card for "inabs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
orkidea/palabrero-guc-draft | orkidea | 2023-10-28T18:57:13Z | 96 | 0 | null | [
"region:us"
] | 2023-10-28T18:57:13Z | 2023-10-27T21:19:21.000Z | 2023-10-27T21:19:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 62556423.0
num_examples: 17
download_size: 60689485
dataset_size: 62556423.0
---
# Dataset Card for "palabrero-guc-draft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.796141505241394,
-0.22076769173145294,
0.09404627978801727,
0.36174869537353516,
-0.31609925627708435,
0.04203711450099945,
0.35524022579193115,
-0.10572756081819534,
0.8142213821411133,
0.5756260752677917,
-0.8782052993774414,
-0.7195166349411011,
-0.5826812982559204,
-0.22816094756126... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AntoineBlanot/snli-binary | AntoineBlanot | 2023-11-17T02:58:57Z | 96 | 0 | null | [
"region:us"
] | 2023-11-17T02:58:57Z | 2023-11-17T02:50:45.000Z | 2023-11-17T02:50:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label_name
dtype: string
splits:
- name: train
num_bytes: 70545630
num_examples: 549367
- name: test
num_bytes: 1326656
num_examples: 9842
download_size: 19925323
dataset_size: 71872286
---
# Dataset Card for "snli-binary"
This dataset is the [snli-3way](https://huggingface.co/datasets/AntoineBlanot/snli-3way) dataset where the `contradiction` and `neutral` classes has been merged together as a `non-entailment` class.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42324572801589966,
-0.3429953455924988,
0.17642532289028168,
0.10068881511688232,
-0.24060694873332977,
0.07166407257318497,
0.26051661372184753,
-0.540398359298706,
0.8358260989189148,
0.586108922958374,
-0.7604908347129822,
-0.3547649085521698,
-0.44877713918685913,
0.1227124556899070... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wza/finccf | wza | 2023-11-20T14:22:56Z | 96 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-20T14:22:56Z | 2023-11-20T14:20:14.000Z | 2023-11-20T14:20:14 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/mit_restaurant | tner | 2022-08-10T11:25:17Z | 95 | 2 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"region:us"
] | 2022-08-10T11:25:17Z | 2022-07-16T11:12:45.000Z | 2022-07-16T11:12:45 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MIT Restaurant
---
# Dataset Card for "tner/mit_restaurant"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Dataset:** MIT restaurant
- **Domain:** Restaurant
- **Number of Entity:** 8
### Dataset Summary
MIT Restaurant NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `Rating`, `Amenity`, `Location`, `Restaurant_Name`, `Price`, `Hours`, `Dish`, `Cuisine`.
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 0, 0, 0, 0, 0, 0, 0, 5, 3, 4, 0],
'tokens': ['can', 'you', 'find', 'the', 'phone', 'number', 'for', 'the', 'closest', 'family', 'style', 'restaurant']
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/mit_restaurant/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-Rating": 1,
"I-Rating": 2,
"B-Amenity": 3,
"I-Amenity": 4,
"B-Location": 5,
"I-Location": 6,
"B-Restaurant_Name": 7,
"I-Restaurant_Name": 8,
"B-Price": 9,
"B-Hours": 10,
"I-Hours": 11,
"B-Dish": 12,
"I-Dish": 13,
"B-Cuisine": 14,
"I-Price": 15,
"I-Cuisine": 16
}
```
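As a minimal sketch (assuming the dataset loads through the Hugging Face `datasets` library), the integer tags can be decoded back to label strings by inverting the dictionary above:
```python
from datasets import load_dataset

# label2id as shown above
label2id = {
    "O": 0, "B-Rating": 1, "I-Rating": 2, "B-Amenity": 3, "I-Amenity": 4,
    "B-Location": 5, "I-Location": 6, "B-Restaurant_Name": 7, "I-Restaurant_Name": 8,
    "B-Price": 9, "B-Hours": 10, "I-Hours": 11, "B-Dish": 12, "I-Dish": 13,
    "B-Cuisine": 14, "I-Price": 15, "I-Cuisine": 16,
}
id2label = {v: k for k, v in label2id.items()}

dataset = load_dataset("tner/mit_restaurant", split="train")
example = dataset[0]
print([(token, id2label[tag]) for token, tag in zip(example["tokens"], example["tags"])])
```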
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|mit_restaurant |6900 | 760| 1521|
| [
-0.39083370566368103,
-0.41739869117736816,
0.06914722919464111,
-0.06707298010587692,
-0.11269385367631912,
-0.1948620229959488,
-0.05531308054924011,
0.01958446204662323,
0.4490993022918701,
0.5069315433502197,
-0.3277028501033783,
-1.0674182176589966,
-0.50665682554245,
0.19215413928031... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/tweetner7 | tner | 2022-11-27T18:50:28Z | 95 | 2 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"arxiv:2210.03797",
"region:us"
] | 2022-11-27T18:50:28Z | 2022-07-18T10:39:50.000Z | 2022-07-18T10:39:50 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1k<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TweetNER7
---
# Dataset Card for "tner/tweetner7"
## Dataset Description
- **Repository:** [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper)
- **Paper:** [https://arxiv.org/abs/2210.03797](https://arxiv.org/abs/2210.03797)
- **Dataset:** TweetNER7
- **Domain:** Twitter
- **Number of Entity:** 7
### Dataset Summary
This is the official repository of TweetNER7 (["Named Entity Recognition in Twitter:
A Dataset and Analysis on Short-Term Temporal Shifts", AACL main conference 2022](https://arxiv.org/abs/2210.03797)), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp ranging from September 2019 to August 2021.
The tweet collection used in TweetNER7 is the same as that used in [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi).
The dataset is also integrated into [TweetNLP](https://tweetnlp.org/).
- Entity Types: `corporation`, `creative_work`, `event`, `group`, `location`, `product`, `person`
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
For verified usernames, we keep the account name but wrap it in the symbols `{@` and `@}`.
For example, a tweet
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```
is transformed into the following text.
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```
A simple function to format tweets follows below.
```python
import re
from urlextract import URLExtract
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
# 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
```
We ask annotators to ignore those special tokens but label the verified users' mentions.
### Data Split
| split | number of instances | description |
|:------------------|------:|------:|
| train_2020 | 4616 | training dataset from September 2019 to August 2020 |
| train_2021 | 2495 | training dataset from September 2020 to August 2021 |
| train_all | 7111 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020 | 576 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 310 | validation dataset from September 2020 to August 2021 |
| test_2020 | 576 | test dataset from September 2019 to August 2020 |
| test_2021 | 2807 | test dataset from September 2020 to August 2021 |
| train_random | 4616 | randomly sampled training dataset with the same size as `train_2020` from `train_all` |
| validation_random | 576 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` |
| extra_2020 | 87880 | extra tweets without annotations from September 2019 to August 2020 |
| extra_2021 | 93594 | extra tweets without annotations from September 2020 to August 2021 |
For the temporal-shift setting, models should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`.
In general, models would be trained on `train_all`, the most representative training set, with `validation_2021`, and evaluated on `test_2021`.
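A minimal sketch of loading the splits for the temporal-shift setting, assuming the dataset loads through the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Temporal-shift setting: train on 2020 data, evaluate on 2021 data.
train = load_dataset("tner/tweetner7", split="train_2020")
validation = load_dataset("tner/tweetner7", split="validation_2020")
test = load_dataset("tner/tweetner7", split="test_2021")

print(len(train), len(validation), len(test))  # expected: 4616 576 2807
```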
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['Morning', '5km', 'run', 'with', '{{USERNAME}}', 'for', 'breast', 'cancer', 'awareness', '#', 'pinkoctober', '#', 'breastcancerawareness', '#', 'zalorafit', '#', 'zalorafitxbnwrc', '@', 'The', 'Central', 'Park', ',', 'Desa', 'Parkcity', '{{URL}}'],
'tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 14, 2, 14, 14, 14, 14, 14, 14, 4, 11, 11, 11, 11, 14],
'id': '1183344337016381440',
'date': '2019-10-13'
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/tweetner7/raw/main/dataset/label.json).
```python
{
"B-corporation": 0,
"B-creative_work": 1,
"B-event": 2,
"B-group": 3,
"B-location": 4,
"B-person": 5,
"B-product": 6,
"I-corporation": 7,
"I-creative_work": 8,
"I-event": 9,
"I-group": 10,
"I-location": 11,
"I-person": 12,
"I-product": 13,
"O": 14
}
```
## Models
See full evaluation metrics [here](https://github.com/asahi417/tner/blob/master/MODEL_CARD.md#models-for-tweetner7).
### Main Models
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:--------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-all`](https://huggingface.co/tner/roberta-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.75 | 61.25 |
| [`tner/roberta-base-tweetner7-all`](https://huggingface.co/tner/roberta-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.16 | 60.81 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.68 | 61 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.26 | 60.7 |
| [`tner/bertweet-large-tweetner7-all`](https://huggingface.co/tner/bertweet-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 66.46 | 61.87 |
| [`tner/bertweet-base-tweetner7-all`](https://huggingface.co/tner/bertweet-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.36 | 60.52 |
| [`tner/bert-large-tweetner7-all`](https://huggingface.co/tner/bert-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.58 | 59 |
| [`tner/bert-base-tweetner7-all`](https://huggingface.co/tner/bert-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 62.3 | 57.59 |
| [`tner/roberta-large-tweetner7-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.02 | 60.9 |
| [`tner/roberta-base-tweetner7-continuous`](https://huggingface.co/tner/roberta-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.47 | 60.01 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.87 | 61.07 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.51 | 60.57 |
| [`tner/bertweet-large-tweetner7-continuous`](https://huggingface.co/tner/bertweet-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 66.41 | 61.66 |
| [`tner/bertweet-base-tweetner7-continuous`](https://huggingface.co/tner/bertweet-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.84 | 61.02 |
| [`tner/bert-large-tweetner7-continuous`](https://huggingface.co/tner/bert-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.2 | 57.67 |
| [`tner/roberta-large-tweetner7-2021`](https://huggingface.co/tner/roberta-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.05 | 59.11 |
| [`tner/roberta-base-tweetner7-2021`](https://huggingface.co/tner/roberta-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 61.76 | 57 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-2021`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 63.98 | 58.91 |
| [`tner/bertweet-large-tweetner7-2021`](https://huggingface.co/tner/bertweet-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 62.9 | 58.13 |
| [`tner/bertweet-base-tweetner7-2021`](https://huggingface.co/tner/bertweet-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 63.09 | 57.35 |
| [`tner/bert-large-tweetner7-2021`](https://huggingface.co/tner/bert-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 59.75 | 53.93 |
| [`tner/bert-base-tweetner7-2021`](https://huggingface.co/tner/bert-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.67 | 55.5 |
| [`tner/roberta-large-tweetner7-2020`](https://huggingface.co/tner/roberta-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.76 | 60 |
| [`tner/roberta-base-tweetner7-2020`](https://huggingface.co/tner/roberta-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.21 | 59.11 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 64.28 | 59.31 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 62.87 | 58.26 |
| [`tner/bertweet-large-tweetner7-2020`](https://huggingface.co/tner/bertweet-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 64.01 | 59.47 |
| [`tner/bertweet-base-tweetner7-2020`](https://huggingface.co/tner/bertweet-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 64.06 | 59.44 |
| [`tner/bert-large-tweetner7-2020`](https://huggingface.co/tner/bert-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 61.43 | 56.14 |
| [`tner/bert-base-tweetner7-2020`](https://huggingface.co/tner/bert-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.09 | 54.67 |
Model description follows below.
* Model with suffix `-all`: Model fine-tuned on `train_all` and validated on `validation_2021`.
* Model with suffix `-continuous`: Model fine-tuned on `train_2021` continuously after fine-tuning on `train_2020` and validated on `validation_2021`.
* Model with suffix `-2021`: Model fine-tuned only on `train_2021` and validated on `validation_2021`.
* Model with suffix `-2020`: Model fine-tuned only on `train_2020` and validated on `validation_2020`.
### Sub Models (used in ablation study)
- Model fine-tuned only on `train_random` and validated on `validation_2020`.
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-random`](https://huggingface.co/tner/roberta-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.33 | 60.96 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 63.29 | 58.5 |
| [`tner/roberta-base-tweetner7-random`](https://huggingface.co/tner/roberta-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.04 | 59.23 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 64.72 | 59.97 |
| [`tner/bertweet-large-tweetner7-random`](https://huggingface.co/tner/bertweet-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 64.86 | 60.49 |
| [`tner/bertweet-base-tweetner7-random`](https://huggingface.co/tner/bertweet-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.55 | 59.58 |
| [`tner/bert-large-tweetner7-random`](https://huggingface.co/tner/bert-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 62.39 | 57.54 |
| [`tner/bert-base-tweetner7-random`](https://huggingface.co/tner/bert-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.91 | 55.92 |
- Model fine-tuned on the self-labeled dataset on `extra_{2020,2021}` and validated on `validation_2020`.
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:----------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:--------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-selflabel2020`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.56 | 59.63 |
| [`tner/roberta-large-tweetner7-selflabel2021`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.6 | 59.45 |
| [`tner/roberta-large-tweetner7-2020-selflabel2020-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2020-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.46 | 60.39 |
| [`tner/roberta-large-tweetner7-2020-selflabel2021-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2021-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.52 | 59.45 |
| [`tner/roberta-large-tweetner7-selflabel2020-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.15 | 60.23 |
| [`tner/roberta-large-tweetner7-selflabel2021-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.48 | 59.41 |
Model description follows below.
* Model with suffix `-selflabel2020`: Fine-tuned on the self-annotated data of the `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-selflabel2021`: Fine-tuned on the self-annotated data of the `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-2020-selflabel2020-all`: Fine-tuned on the self-annotated data of the `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7), using the combined training dataset of `extra_2020` and `train_2020`.
* Model with suffix `-2020-selflabel2021-all`: Fine-tuned on the self-annotated data of the `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7), using the combined training dataset of `extra_2021` and `train_2020`.
* Model with suffix `-selflabel2020-continuous`: Fine-tuned on the self-annotated data of the `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7), first fine-tuning on `train_2020` and then continuing fine-tuning on `extra_2020`.
* Model with suffix `-selflabel2021-continuous`: Fine-tuned on the self-annotated data of the `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7), first fine-tuning on `train_2020` and then continuing fine-tuning on `extra_2021`.
### Reproduce Experimental Result
To reproduce the experimental results in our AACL paper, please see the repository
[https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper).
## Citation Information
```
@inproceedings{ushio-etal-2022-tweet,
title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
author = "Ushio, Asahi and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco and
Camacho-Collados, Jose",
booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
month = nov,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| [
-0.35750338435173035,
-0.4003314673900604,
0.2554589509963989,
0.23758748173713684,
-0.3405064046382904,
0.1416151225566864,
-0.27198949456214905,
-0.4094773232936859,
0.6084386706352234,
0.182488352060318,
-0.7523728609085083,
-0.8717273473739624,
-0.6857506632804871,
0.07655960321426392,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sepidmnorozy/Maltese_sentiment | sepidmnorozy | 2022-08-16T09:44:25Z | 95 | 0 | null | [
"region:us"
] | 2022-08-16T09:44:25Z | 2022-08-16T09:26:10.000Z | 2022-08-16T09:26:10 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atasoglu/databricks-dolly-15k-tr | atasoglu | 2023-05-01T10:30:39Z | 95 | 7 | null | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-05-01T10:30:39Z | 2023-05-01T10:22:31.000Z | 2023-05-01T10:22:31 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- tr
pretty_name: databricks-dolly-15k-tr
size_categories:
- 10K<n<100K
---
This dataset is a machine-translated version of [databricks-dolly-15k.jsonl](https://github.com/databrickslabs/dolly/tree/master/data) into Turkish, translated with `googletrans==3.1.0a0`.
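As a rough sketch of how such a translation pass might look (assuming the standard dolly-15k fields `instruction`, `context`, and `response`; the actual batching and retry logic used are not documented):
```python
from googletrans import Translator  # googletrans==3.1.0a0

translator = Translator()

def translate_record(record):
    # Translate each non-empty text field from English to Turkish.
    for field in ("instruction", "context", "response"):
        if record.get(field):
            record[field] = translator.translate(record[field], src="en", dest="tr").text
    return record
```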
| [
-0.10686516761779785,
-0.7661024928092957,
-0.16666623950004578,
0.3541727066040039,
-0.5878287553787231,
0.17815729975700378,
0.15874888002872467,
-0.15995685756206512,
0.32583823800086975,
0.9116023778915405,
-0.9360790252685547,
-0.7015257477760315,
-0.6694313883781433,
0.47886496782302... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VMware/open-instruct-v1-oasst-dolly-hhrlhf | VMware | 2023-07-13T14:21:14Z | 95 | 15 | null | [
"language:en",
"region:us"
] | 2023-07-13T14:21:14Z | 2023-05-10T23:36:12.000Z | 2023-05-10T23:36:12 | ---
language: en
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: alpaca_prompt
dtype: string
- name: response
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 60252132
num_examples: 62971
download_size: 33232110
dataset_size: 60252132
---
# Dataset Card for "open-instruct-v1-oasst-dolly-hhrlhf"
This dataset is a combination of:
1. Filtered subset of [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
2. train split of [Mosaic-dolly-hhrlhf](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) (which consists of the [Databricks dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and a filtered subset of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf))
## Dataset
The dataset consists of 3 columns:
1. instruction: The natural language instruction without any prompt templates (we extracted them out of the Alpaca format in Mosaic-dolly-hhrlhf)
2. alpaca_prompt: The Alpaca prompt-template version of the instruction
3. response: The response to the instruction
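A minimal sketch of loading the dataset and inspecting these columns (column names taken from the `dataset_info` above), assuming it loads through the Hugging Face `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("VMware/open-instruct-v1-oasst-dolly-hhrlhf", split="train")
print(ds.column_names)  # ['Unnamed: 0', 'alpaca_prompt', 'response', 'instruction']

example = ds[0]
print(example["instruction"])    # bare instruction, no prompt template
print(example["alpaca_prompt"])  # same instruction wrapped in the Alpaca template
```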
## License
- It is usable for commercial purposes so long as you follow the terms of the license.
- Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
- Wikipedia (various pages) - https://www.wikipedia.org/
- Copyright © Wikipedia editors and contributors.
- Databricks (https://www.databricks.com)
- Copyright © Databricks
- Mosaic ML (https://www.mosaicml.com/)
- Copyright © Mosaic ML
- VMware
- Copyright © VMware
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6550427675247192,
-0.5384896397590637,
0.018383709713816643,
0.5148310661315918,
-0.48905694484710693,
-0.2690867483615875,
0.16667282581329346,
-0.28891798853874207,
0.4366326332092285,
0.7825206518173218,
-0.9659609198570251,
-0.6268871426582336,
-0.5552725195884705,
0.115888640284538... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HK83/Anime_Faces | HK83 | 2023-05-15T20:52:40Z | 95 | 1 | null | [
"license:afl-3.0",
"region:us"
] | 2023-05-15T20:52:40Z | 2023-05-15T20:51:30.000Z | 2023-05-15T20:51:30 | ---
license: afl-3.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edarchimbaud/news-stocks | edarchimbaud | 2023-11-21T05:06:42Z | 95 | 3 | null | [
"region:us"
] | 2023-11-21T05:06:42Z | 2023-05-17T17:23:09.000Z | 2023-05-17T17:23:09 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: body
dtype: string
- name: publisher
dtype: string
- name: publish_time
dtype: timestamp[ns, tz=GMT]
- name: title
dtype: string
- name: url
dtype: string
- name: uuid
dtype: string
splits:
- name: train
num_bytes: 112563283
num_examples: 22025
download_size: 55028670
dataset_size: 112563283
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "news-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The news-sp500 dataset provides news articles related to companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used for various natural language processing tasks such as text classification, sentiment analysis, information extraction, etc. It does not have a specific leaderboard associated with it.
### Languages
The dataset contains news articles in multiple languages.
## Dataset Structure
### Data Instances
The dataset consists of 22,025 data instances (see `num_examples` above).
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- body (string): The main content of the news article.
- publisher (string): The name of the publisher or news agency.
- publish_time (timestamp[ns, tz=GMT]): A timestamp indicating the publication time of the news article in GMT timezone.
- title (string): The title or headline of the news article.
- url (string): The URL or link to the original news article.
- uuid (string): A unique identifier for the news article.
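As a minimal sketch of working with these fields (assuming the dataset loads through the Hugging Face `datasets` library; the ticker `AAPL` is only an illustrative value):
```python
from datasets import load_dataset

ds = load_dataset("edarchimbaud/news-stocks", split="train")

# Filter articles for a single ticker; "AAPL" is an illustrative symbol.
aapl_news = ds.filter(lambda row: row["symbol"] == "AAPL")
for article in aapl_news.select(range(min(3, len(aapl_news)))):
    print(article["publish_time"], article["title"])
```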
### Data Splits
The dataset consists of a single split called train.
## Dataset Creation
### Curation Rationale
The news-sp500 dataset was created to provide a collection of news articles related to companies in the S&P 500 index for research and analysis purposes.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from various online news sources and normalized for consistency.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The news-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The news-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, news-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. | [
-0.6194251775741577,
-0.45288902521133423,
0.028927797451615334,
0.482173889875412,
-0.30541762709617615,
0.15561272203922272,
-0.17639581859111786,
-0.23610344529151917,
0.752509355545044,
0.2913880944252014,
-1.0647727251052856,
-0.8162748217582703,
-0.5033472776412964,
0.171132385730743... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
glaiveai/glaive-function-calling | glaiveai | 2023-09-27T18:04:36Z | 95 | 33 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-09-27T18:04:36Z | 2023-08-07T17:51:48.000Z | 2023-08-07T17:51:48 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
This dataset consists of 52k samples generated through [Glaive](https://glaive.ai) for the task of function calling, in the following format-
```
SYSTEM: You are an helpful assistant who has access to the following functions to help the user, you can use the functions if needed-
{
JSON function definition
}
USER: user message
ASSISTANT: assistant message
Function call invocations are formatted as-
ASSISTANT: <functioncall> {json function call}
Response to the function call is formatted as-
FUNCTION RESPONSE: {json function response}
```
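A minimal sketch of pulling the function-call JSON out of an assistant turn in this format (a hypothetical helper, not part of the dataset; the example payload is illustrative only):
```python
import json

def extract_function_call(assistant_message: str):
    """Return the parsed function call from an ASSISTANT turn, or None."""
    marker = "<functioncall>"
    if marker not in assistant_message:
        return None
    payload = assistant_message.split(marker, 1)[1].strip()
    return json.loads(payload)

call = extract_function_call('<functioncall> {"name": "get_weather", "arguments": {"city": "Paris"}}')
print(call["name"])  # get_weather
```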
To keep the data balanced, there are also samples without any function invocations, samples with multiple invocations, and samples where no functions are presented or invoked. | [
0.049240995198488235,
-0.5931277871131897,
0.291451632976532,
0.1363804191350937,
-0.27380603551864624,
-0.14190958440303802,
0.2595185935497284,
-0.30981215834617615,
0.29726719856262207,
0.9930610656738281,
-0.9984862804412842,
-0.5528068542480469,
-0.31953108310699463,
0.250212162733078... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sibozhu/paddington_en | sibozhu | 2023-10-04T03:08:51Z | 95 | 0 | null | [
"region:us"
] | 2023-10-04T03:08:51Z | 2023-10-04T03:08:00.000Z | 2023-10-04T03:08:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhallee/uniref_small | lhallee | 2023-10-04T03:12:15Z | 95 | 0 | null | [
"region:us"
] | 2023-10-04T03:12:15Z | 2023-10-04T03:12:13.000Z | 2023-10-04T03:12:13 | ---
dataset_info:
features:
- name: uniref
dtype: string
splits:
- name: train
num_bytes: 20739509
num_examples: 100000
download_size: 20824692
dataset_size: 20739509
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uniref_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5748090147972107,
-0.2515539824962616,
0.19697493314743042,
-0.016134200617671013,
-0.4469129741191864,
-0.1343957930803299,
-0.0523848682641983,
0.015314065851271152,
0.8359997868537903,
0.5702998638153076,
-0.8449512720108032,
-0.643405556678772,
-0.4730835258960724,
-0.16262193024158... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yimingzhang/lichess-2022 | yimingzhang | 2023-11-03T21:37:44Z | 95 | 0 | null | [
"region:us"
] | 2023-11-03T21:37:44Z | 2023-10-04T05:08:38.000Z | 2023-10-04T05:08:38 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Santp98/Secop2_documents | Santp98 | 2023-10-29T03:15:59Z | 95 | 0 | null | [
"language:es",
"license:mit",
"legal",
"region:us"
] | 2023-10-29T03:15:59Z | 2023-10-16T23:47:17.000Z | 2023-10-16T23:47:17 | ---
language:
- es
license: mit
pretty_name: Secop2 documents
dataset_info:
features:
- name: id_doc
dtype: string
- name: doc_text
dtype: string
splits:
- name: train
num_bytes: 303997310.5045912
num_examples: 13460
- name: validation
num_bytes: 101339965.24770437
num_examples: 4487
- name: test
num_bytes: 101339965.24770437
num_examples: 4487
download_size: 232995741
dataset_size: 506677241.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
tags:
- legal
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ostapeno/qa-openai_batched_icl5_clen512_maxD-1_maxC2500_0_cleaned | ostapeno | 2023-10-25T19:00:26Z | 95 | 0 | null | [
"region:us"
] | 2023-10-25T19:00:26Z | 2023-10-25T16:41:26.000Z | 2023-10-25T16:41:26 | Config:
{
"type": "QATransformConfig",
"model_setting": "openai_batched",
"icl_examples": 0,
"icl_dataset": "lukaemon/mmlu",
"icl_split": "validation",
"icl_use_options": true,
"num_iterations": 1,
"max_context_length": 512,
"max_tokens_instruction": 2048,
"max_tokens_response": 1024,
"max_contexts_per_subject": 2500
}
Cleaning involved removing ",space" at the end of instructions. | [
-0.6035023331642151,
-0.46495896577835083,
0.03438909724354744,
0.23712535202503204,
-0.654067873954773,
0.10483886301517487,
-0.16897152364253998,
0.1326049119234085,
-0.17361606657505035,
0.6531315445899963,
-0.8933594822883606,
-0.48312509059906006,
-0.3984625041484833,
-0.1304548978805... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ryan20/qa_hotel_dataset_2 | Ryan20 | 2023-11-01T08:58:21Z | 95 | 0 | null | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"language:pt",
"license:openrail",
"region:us"
] | 2023-11-01T08:58:21Z | 2023-10-31T11:34:01.000Z | 2023-10-31T11:34:01 | ---
license: openrail
task_categories:
- question-answering
language:
- en
- pt
size_categories:
- n<1K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Santp98/ranking_options_processes | Santp98 | 2023-11-05T14:37:48Z | 95 | 0 | null | [
"region:us"
] | 2023-11-05T14:37:48Z | 2023-11-05T14:37:45.000Z | 2023-11-05T14:37:45 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: process_id
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 5619635
num_examples: 23323
download_size: 3091438
dataset_size: 5619635
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ranking_options_processes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5649228096008301,
-0.3730047345161438,
0.4175609350204468,
-0.06286057829856873,
-0.05692614987492561,
0.07808880507946014,
0.07545121759176254,
0.004647455643862486,
0.8561877608299255,
0.7693300843238831,
-0.8453918099403381,
-0.6665786504745483,
-0.7258608341217041,
-0.41524514555931... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ktam204/ZaloAI | ktam204 | 2023-11-14T07:46:20Z | 95 | 0 | null | [
"region:us"
] | 2023-11-14T07:46:20Z | 2023-11-12T09:26:53.000Z | 2023-11-12T09:26:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 86452073.13
num_examples: 1362
download_size: 83935670
dataset_size: 86452073.13
---
# Dataset Card for "ZaloAI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5103752017021179,
-0.13451388478279114,
0.12796135246753693,
0.1993235945701599,
-0.2020452469587326,
-0.1659727841615677,
0.32722702622413635,
-0.2684105634689331,
0.9808024168014526,
0.38436344265937805,
-0.9534568786621094,
-0.6521893739700317,
-0.49176260828971863,
-0.27530241012573... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kpriyanshu256/semeval-task-8-a-mono-v2-mistral-7b | kpriyanshu256 | 2023-11-13T02:02:06Z | 95 | 0 | null | [
"region:us"
] | 2023-11-13T02:02:06Z | 2023-11-13T02:01:58.000Z | 2023-11-13T02:01:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: model
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
- name: mistral-7b_estimated_loss
dtype: float64
- name: mistral-7b_mean_lowest25
dtype: float64
- name: mistral-7b_mean_highest25
dtype: float64
- name: mistral-7b_max
dtype: float64
- name: mistral-7b_min
dtype: float64
- name: mistral-7b_range
dtype: float64
- name: mistral-7b_mean
dtype: float64
- name: mistral-7b_std
dtype: float64
- name: mistral-7b_entropy
dtype: float64
- name: mistral-7b_kurtosis
dtype: float64
- name: mistral-7b_skewness
dtype: float64
- name: mistral-7b_perplexity
dtype: float64
splits:
- name: train
num_bytes: 281584304
num_examples: 95805
- name: val
num_bytes: 69152233
num_examples: 23952
- name: test
num_bytes: 11023757
num_examples: 5000
download_size: 215512867
dataset_size: 361760294
---
# Dataset Card for "semeval-task-8-a-mono-v2-mistral-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44176095724105835,
-0.236724853515625,
0.14266511797904968,
0.3883940875530243,
-0.5067896246910095,
-0.21544672548770905,
0.39706242084503174,
-0.11868303269147873,
0.9214075207710266,
0.5354251861572266,
-0.8308866024017334,
-0.5598409175872803,
-0.8133713006973267,
-0.216944113373756... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
neoneye/histogram-comparisons-v1 | neoneye | 2023-11-14T19:15:58Z | 95 | 0 | null | [
"task_categories:image-to-text",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"region:us"
] | 2023-11-14T19:15:58Z | 2023-11-13T19:54:56.000Z | 2023-11-13T19:54:56 | ---
license: mit
task_categories:
- image-to-text
language:
- en
size_categories:
- 1M<n<10M
---
This dataset contains 3,000,000 items in total. There are 3 curricula, each containing 1,000,000 items.
Each item is a markdown document.
Each item contains between 2 and 6 image comparisons, with a `Summary` at the bottom.
The images are between 3x3 and 14x14.
The markdown document contains a `## Response` heading, which separates the prompt from the answer.
The structure of a markdown document with 3 comparisons (A, B, C):
```
# Histogram comparisons with summary
## Data A
### Data left
### Data right
## Data B
### Data left
### Data right
## Data C
### Data left
### Data right
## Response
## Compare A
## Compare B
## Compare C
## Summary
``` | [
-0.6757329106330872,
-0.3078233003616333,
0.4851321578025818,
0.2718077600002289,
-0.1093302071094513,
-0.021797755733132362,
0.06711433082818985,
-0.19843949377536774,
0.11377312988042831,
0.6694201827049255,
-0.4001398980617523,
-0.802555501461029,
-0.7175018191337585,
0.7131202220916748... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Bhunakit/paraphrasethai | Bhunakit | 2023-11-17T11:53:40Z | 95 | 0 | null | [
"region:us"
] | 2023-11-17T11:53:40Z | 2023-11-15T16:35:25.000Z | 2023-11-15T16:35:25 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
birgermoell/open_assistant_dataset | birgermoell | 2023-02-28T10:29:02Z | 94 | 0 | null | [
"region:us"
] | 2023-02-28T10:29:02Z | 2023-02-28T10:25:21.000Z | 2023-02-28T10:25:21 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
climatebert/climate_sentiment | climatebert | 2023-04-18T14:37:00Z | 94 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-04-18T14:37:00Z | 2023-04-11T13:11:01.000Z | 2023-04-11T13:11:01 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ClimateSentiment
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': risk
'1': neutral
'2': opportunity
splits:
- name: train
num_bytes: 492077
num_examples: 1000
- name: test
num_bytes: 174265
num_examples: 320
download_size: 373638
dataset_size: 666342
---
# Dataset Card for climate_sentiment
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for classifying the sentiment of climate-related paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a ternary sentiment classification task: deciding whether a given climate-related paragraph expresses an opportunity, neutral, or risk sentiment.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a climate-related paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> risk, 1 -> neutral, 2 -> opportunity)
### Data Splits
The dataset is split into:
- train: 1,000
- test: 320
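A minimal sketch of loading the dataset and decoding the integer labels via the `ClassLabel` feature (names taken from the `dataset_info` above), assuming it loads through the Hugging Face `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("climatebert/climate_sentiment")
label_names = ds["train"].features["label"].names  # ['risk', 'neutral', 'opportunity']

example = ds["train"][0]
print(label_names[example["label"]], "->", example["text"][:80])
```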
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal or sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. | [
-0.29399019479751587,
-0.299540638923645,
0.18881964683532715,
0.15101358294487,
-0.3941670060157776,
0.038445428013801575,
-0.2959413230419159,
-0.5333259105682373,
0.2882000803947449,
0.3519437611103058,
-0.5216134786605835,
-0.8354388475418091,
-0.5235773921012878,
-0.04693308100104332,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/quora-pl | clarin-knext | 2023-06-07T08:16:00Z | 94 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:16:00Z | 2023-06-06T22:16:05.000Z | 2023-06-06T22:16:05 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209920734167099,
-0.9029767513275146,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.49629199504852295,
-0.0189602542668581,
0.41122621297836304,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175425052643,
-0.048304721713066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Symato/c4_vi-filtered_200GB | Symato | 2023-07-03T11:53:47Z | 94 | 0 | null | [
"region:us"
] | 2023-07-03T11:53:47Z | 2023-07-03T08:35:42.000Z | 2023-07-03T08:35:42 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mlabonne/CodeLlama-2-20k | mlabonne | 2023-07-30T10:45:33Z | 94 | 9 | null | [
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"code",
"region:us"
] | 2023-07-30T10:45:33Z | 2023-07-20T11:13:42.000Z | 2023-07-20T11:13:42 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9551210
num_examples: 20022
download_size: 3551225
dataset_size: 9551210
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
tags:
- code
---
# CodeLlama-2-20k: A Llama 2 Version of CodeAlpaca
This dataset is the [`sahil2801/CodeAlpaca-20k`](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) dataset with the Llama 2 prompt format [described here](https://huggingface.co/blog/llama2#how-to-prompt-llama-2).
Here is the code I used to format it:
``` python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset('sahil2801/CodeAlpaca-20k')
# Define a function to merge the three columns into one
def merge_columns(example):
if example['input']:
merged = f"<s>[INST] <<SYS>>\nBelow is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n<</SYS>>\n\n{example['instruction']} Input: {example['input']} [/INST] {example['output']} </s>"
else:
merged = f"<s>[INST] <<SYS>>\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n<</SYS>>\n\n{example['instruction']} [/INST] {example['output']} </s>"
return {"text": merged}
# Apply the function to all elements in the dataset
dataset = dataset.map(merge_columns, remove_columns=['instruction', 'input', 'output'])
``` | [
-0.13276022672653198,
-0.40491020679473877,
0.23789221048355103,
0.8126977682113647,
-0.5014024972915649,
-0.1766771525144577,
-0.3025420904159546,
-0.10324720293283463,
0.44081035256385803,
0.483306884765625,
-0.6994547247886658,
-0.5725877285003662,
-0.6097028255462646,
0.312854021787643... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imoxto/prompt_injection_cleaned_dataset-v2 | imoxto | 2023-08-08T09:30:19Z | 94 | 1 | null | [
"region:us"
] | 2023-08-08T09:30:19Z | 2023-08-08T09:30:03.000Z | 2023-08-08T09:30:03 | ---
dataset_info:
features:
- name: model
dtype: string
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 670958021
num_examples: 535105
download_size: 79246765
dataset_size: 670958021
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "prompt_injection_cleaned_dataset-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2707875072956085,
-0.5045444965362549,
0.3496396243572235,
0.05716274678707123,
-0.2948366701602936,
-0.08674519509077072,
0.46488285064697266,
-0.08993923664093018,
0.5533013343811035,
0.6749019622802734,
-0.7411206364631653,
-0.6768680810928345,
-0.4722592830657959,
-0.170732870697975... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rasgaard/20_newsgroups | rasgaard | 2023-09-13T07:25:05Z | 94 | 0 | null | [
"region:us"
] | 2023-09-13T07:25:05Z | 2023-09-13T07:23:58.000Z | 2023-09-13T07:23:58 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 12724811.858405516
num_examples: 10182
- name: val
num_bytes: 1414701.1415944847
num_examples: 1132
- name: test
num_bytes: 8499585
num_examples: 7532
download_size: 0
dataset_size: 22639098.0
---
# Dataset Card for "20_newsgroups"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8506332039833069,
-0.23145705461502075,
0.28461676836013794,
0.39881351590156555,
-0.18200227618217468,
0.08902502059936523,
0.24511903524398804,
-0.12407898157835007,
0.8124581575393677,
0.5689127445220947,
-0.9742553234100342,
-0.9899473786354065,
-0.6330271363258362,
-0.1367803812026... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
logikon/oasst1-delib | logikon | 2023-09-27T14:23:02Z | 94 | 0 | null | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-09-27T14:23:02Z | 2023-09-21T09:42:05.000Z | 2023-09-21T09:42:05 | ---
language:
- en
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: history
dtype: string
splits:
- name: train
num_bytes: 278875
num_examples: 90
- name: validation
num_bytes: 18290
num_examples: 6
download_size: 208227
dataset_size: 297165
---
# Dataset Card for "oasst1-delib"
Subset of `OpenAssistant/oasst1` with English chat messages that (are supposed to) contain reasoning:
* filtered by keyword "pros"
* includes chat history as extra feature
Dataset creation is documented in https://github.com/logikon-ai/deliberation-datasets/blob/main/notebooks/create_oasst1_delib.ipynb
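A minimal inspection sketch (feature names taken from the `dataset_info` above; loading directly from the Hub with the declared splits is assumed to work):
```python
from datasets import load_dataset

ds = load_dataset("logikon/oasst1-delib", split="train")

# Look at one message together with the chat history added as an extra feature.
example = ds[0]
print(example["role"], "-", example["text"][:200])
print("history:", example["history"][:200])
```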
| [
-0.5028636455535889,
-0.4666137099266052,
0.09581371396780014,
-0.12123581022024155,
-0.5683980584144592,
-0.0012464653700590134,
0.1634369045495987,
-0.07825163006782532,
0.5014564990997314,
0.6073128581047058,
-1.1339024305343628,
-0.8178122043609619,
-0.3512793183326721,
-0.273491114377... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DataProvenanceInitiative/t0_submix_original | DataProvenanceInitiative | 2023-10-16T17:40:22Z | 94 | 0 | null | [
"region:us"
] | 2023-10-16T17:40:22Z | 2023-10-16T17:39:08.000Z | 2023-10-16T17:39:08 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 4602180562
num_examples: 1650308
download_size: 2734694485
dataset_size: 4602180562
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "t0_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5640814900398254,
-0.07413360476493835,
0.003134246915578842,
0.2246992588043213,
-0.49949246644973755,
0.13039688766002655,
0.3841192424297333,
0.1710127592086792,
1.1293323040008545,
0.516777515411377,
-1.0326160192489624,
-0.6421570777893066,
-0.7074587345123291,
-0.19930100440979004... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ostapeno/qa-openai_batched_icl5_clen512_maxD-1_maxC2500_0_cleaned_5000 | ostapeno | 2023-11-03T00:10:57Z | 94 | 0 | null | [
"region:us"
] | 2023-11-03T00:10:57Z | 2023-11-03T00:10:47.000Z | 2023-11-03T00:10:47 | ---
configs:
- config_name: default
data_files:
- split: abstract_algebra
path: data/abstract_algebra-*
- split: college_biology
path: data/college_biology-*
- split: formal_logic
path: data/formal_logic-*
- split: global_facts
path: data/global_facts-*
- split: high_school_government_and_politics
path: data/high_school_government_and_politics-*
- split: high_school_physics
path: data/high_school_physics-*
- split: machine_learning
path: data/machine_learning-*
- split: prehistory
path: data/prehistory-*
- split: security_studies
path: data/security_studies-*
- split: sociology
path: data/sociology-*
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: docno
dtype: string
- name: subject
dtype: string
- name: icl_examples
sequence: string
- name: author_instr
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: abstract_algebra
num_bytes: 18511519
num_examples: 5000
- name: college_biology
num_bytes: 21908371
num_examples: 5000
- name: formal_logic
num_bytes: 26566641
num_examples: 5000
- name: global_facts
num_bytes: 18875609
num_examples: 5000
- name: high_school_government_and_politics
num_bytes: 22884039
num_examples: 5000
- name: high_school_physics
num_bytes: 25246951
num_examples: 5000
- name: machine_learning
num_bytes: 22057964
num_examples: 5000
- name: prehistory
num_bytes: 22831838
num_examples: 5000
- name: security_studies
num_bytes: 36761034
num_examples: 5000
- name: sociology
num_bytes: 22205675
num_examples: 5000
download_size: 21810553
dataset_size: 237849641
---
# Dataset Card for "qa-openai_batched_icl5_clen512_maxD-1_maxC2500_0_cleaned_5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6162436604499817,
0.1303737908601761,
0.07830624282360077,
0.2699902057647705,
-0.40719473361968994,
-0.29486989974975586,
0.20402973890304565,
-0.05444306880235672,
0.5245890021324158,
0.6203612685203552,
-0.7404240965843201,
-0.8157623410224915,
-0.33534741401672363,
0.052949957549571... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pranjali97/ha-en_RL-grow1_train | pranjali97 | 2023-11-04T03:29:55Z | 94 | 0 | null | [
"region:us"
] | 2023-11-04T03:29:55Z | 2023-11-04T03:29:53.000Z | 2023-11-04T03:29:53 | ---
dataset_info:
features:
- name: src
dtype: string
- name: ref
dtype: string
- name: mt
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 13578997
num_examples: 29454
download_size: 3191264
dataset_size: 13578997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ha-en_RL-grow1_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5108657479286194,
-0.48662859201431274,
0.028126057237386703,
0.35854262113571167,
-0.11527719348669052,
0.019726676866412163,
0.3276674151420593,
-0.29244574904441833,
1.0934836864471436,
0.4312608540058136,
-1.0507385730743408,
-0.5859335064888,
-0.5534858107566833,
-0.242245733737945... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thepurpleowl/codequeries | thepurpleowl | 2023-06-03T12:50:46Z | 93 | 5 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:code",
"license:apache-2.0",
"neural modeling of code",
"code ques... | 2023-06-03T12:50:46Z | 2022-08-24T09:27:43.000Z | 2022-08-24T09:27:43 | ---
annotations_creators:
- expert-generated
language:
- code
language_creators:
- found
multilinguality:
- monolingual
pretty_name: codequeries
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- neural modeling of code
- code question answering
- code semantic understanding
task_categories:
- question-answering
task_ids:
- extractive-qa
license:
- apache-2.0
---
# Dataset Card for CodeQueries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [How to use](#how-to-use)
- [Data Splits and Data Fields](#data-splits-and-data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Data](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code](https://github.com/thepurpleowl/codequeries-benchmark)
- **Paper:**
### Dataset Summary
CodeQueries is a dataset to evaluate the ability of neural networks to answer semantic queries over code. Given a query and code, a model is expected to identify answer and supporting-fact spans in the code for the query. This is extractive question-answering over code, for questions with a large scope (entire files) and complexity including both single- and multi-hop reasoning.
### Supported Tasks and Leaderboards
Extractive question answering for code, semantic understanding of code.
### Languages
The dataset contains code context from `python` files.
## Dataset Structure
### How to Use
The dataset can be directly used with the huggingface datasets package. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:
```python
import datasets
# in addition to `twostep`, the other supported settings are <ideal/file_ideal/prefix>.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))
#OUTPUT:
{'query_name': 'Unused import',
'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py',
'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...',
'metadata': 'root',
'header': "['module', '___EOS___']",
'index': 0},
'answer_spans': [{'span': 'from glance.common import context',
'start_line': 19,
'start_column': 0,
'end_line': 19,
'end_column': 33}
],
'supporting_fact_spans': [],
'example_type': 1,
'single_hop': False,
'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...],
'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...],
'relevance_label': 1
}
```
### Data Splits and Data Fields
Detailed information on the data splits for proposed settings can be found in the paper.
In general, data splits in all the proposed settings have examples with the following fields -
```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- context_blocks (code blocks as context with metadata) [`prefix` setting doesn't have this field and `twostep` has `context_block`]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
- example_type (example type: 1 (positive) or 0 (negative))
- single_hop (True or False - for query type)
- subtokenized_input_sequence (example subtokens) [`prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (0 (not relevant) or 1 (relevant) - relevance label of a block) [only `twostep` setting has this field]
```
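As an illustration of these fields, a short sketch that prints answer spans for relevant, positive examples in the `twostep` setting (field names as listed above; the filtering itself is just an example):
```python
import datasets

ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)

# Keep relevant blocks from positive examples and print their answer spans.
for example in ds:
    if example["relevance_label"] == 1 and example["example_type"] == 1:
        for span in example["answer_spans"]:
            print(example["query_name"], "->", span["span"])
        break  # one example is enough for a quick look
```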
## Dataset Creation
The dataset is created using [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) as source for code contexts. To get semantic queries and corresponding answer/supporting-fact spans in ETH Py150 Open corpus files, CodeQL was used.
## Additional Information
### Licensing Information
The source code repositories used for preparing CodeQueries are based on the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) and are redistributable under the respective licenses. A Huggingface dataset for ETH Py150 Open is available [here](https://huggingface.co/datasets/eth_py150_open). The labeling prepared and provided by us as part of CodeQueries is released under the Apache-2.0 license.
| [
-0.5519275665283203,
-0.8475027680397034,
0.1991005539894104,
0.3420720100402832,
-0.16885249316692352,
0.003728813724592328,
-0.09906762838363647,
-0.18253645300865173,
0.6485257744789124,
0.5098587274551392,
-0.6447515487670898,
-0.7813000082969666,
-0.31328824162483215,
0.15240074694156... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SandipPalit/Movie_Dataset | SandipPalit | 2023-01-14T15:41:07Z | 93 | 2 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:en",
"Movie",
"Cinema",
"Film",
"region:us"
] | 2023-01-14T15:41:07Z | 2023-01-14T15:20:44.000Z | 2023-01-14T15:20:44 | ---
task_categories:
- text-classification
- text-generation
- summarization
- sentence-similarity
language:
- en
tags:
- Movie
- Cinema
- Film
pretty_name: Movie Dataset
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aeala/ShareGPT_Vicuna_unfiltered | Aeala | 2023-06-01T07:03:50Z | 93 | 14 | null | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-06-01T07:03:50Z | 2023-06-01T06:54:32.000Z | 2023-06-01T06:54:32 | ---
license: apache-2.0
language:
- en
---
## Dataset Card
This is a reupload of [this dataset](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) that was further cleaned by gozfarb. | [
-0.37370172142982483,
-0.4293087124824524,
0.18885615468025208,
0.10838343948125839,
-0.7668966054916382,
-0.1968853920698166,
0.26227006316185,
-0.24214845895767212,
0.9121332168579102,
1.1706984043121338,
-0.9604146480560303,
-0.6197649836540222,
-0.46426936984062195,
-0.2247784286737442... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuvalkirstain/task_prediction_train3 | yuvalkirstain | 2023-10-31T19:33:36Z | 93 | 0 | null | [
"region:us"
] | 2023-10-31T19:33:36Z | 2023-10-31T19:33:13.000Z | 2023-10-31T19:33:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: path
dtype: string
- name: text
dtype: string
- name: task_name
dtype: string
splits:
- name: train
num_bytes: 659890949
num_examples: 5663600
- name: validation
num_bytes: 7823929
num_examples: 60002
- name: test
num_bytes: 153998
num_examples: 2057
download_size: 148209849
dataset_size: 667868876
---
# Dataset Card for "task_prediction_train3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.32703691720962524,
-0.00874365959316492,
0.33155515789985657,
0.34987688064575195,
0.0023820647038519382,
-0.23599009215831757,
0.23231033980846405,
-0.2985883355140686,
0.518020510673523,
0.4012604355812073,
-0.8887760639190674,
-0.5908003449440002,
-0.7676456570625305,
-0.339119672775... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zouharvi/pwesuite-eval | zouharvi | 2023-10-11T17:14:09Z | 92 | 0 | null | [
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"language:en",
"language:am",
"language:bn",
"language:sw",
"language:uz",
"language:es",
"language:pl",
"language:fr",
"language:de",
"license:apache-2.0",
"words",
"word",
"embedding",
"phonetic",
"phonological",
"cogna... | 2023-10-11T17:14:09Z | 2023-02-04T22:04:58.000Z | 2023-02-04T22:04:58 | ---
language:
- en
- am
- bn
- sw
- uz
- es
- pl
- fr
- de
multilinguality:
- multilingual
tags:
- words
- word
- embedding
- phonetic
- phonological
- cognates
- rhyme
- analogy
pretty_name: PWESuite Evaluation v1
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: token_ort
dtype: string
- name: token_ipa
dtype: string
- name: token_arp
dtype: string
- name: lang
dtype: string
- name: purpose
dtype: string
splits:
- name: train
num_examples: 1738008
license: apache-2.0
---
# PWESuite-Eval
A dataset composed of several smaller datasets, used for the evaluation of phonetic word embeddings.
See the evaluation code [here](https://github.com/zouharvi/pwesuite).
Used datasets:
- [CMU Pronunciation dictionary](http://www.speech.cs.cmu.edu/cgi-bin/cmudict)
- [CC-100](https://data.statmt.org/cc-100/)
- [CogNet v0](https://aclanthology.org/P19-1302/)
- [Vitz and Winkler (1973)](https://www.sciencedirect.com/science/article/pii/S0022537173800167)
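A minimal loading sketch (feature names come from the `dataset_info` above; the English-only filter is just an illustration):
```python
from datasets import load_dataset

ds = load_dataset("zouharvi/pwesuite-eval", split="train")

# Keep only English entries and peek at the orthographic, IPA and ARPAbet forms.
english = ds.filter(lambda row: row["lang"] == "en")
print(english[0]["token_ort"], english[0]["token_ipa"], english[0]["token_arp"])
```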
Authors:
- Vilém Zouhar (ETH Zürich, [contact](mailto:vzouhar@ethz.ch))
- Kalvin Chang (CMU LTI, [contact](mailto:kalvinc@cs.cmu.edu))
- Chenxuan Cui (CMU LTI, [contact](mailto:cxcui@cs.cmu.edu))
- Nathaniel Robinson (CMU LTI, [contact](mailto:nrrobins@cs.cmu.edu))
- Nathaniel Carlson (BYU, [contact](mailto:natec18@byu.edu))
- David Mortensen (CMU LTI, [contact](mailto:dmortens@cs.cmu.edu))
If you use this dataset/evaluation, please cite:
```
@article{zouhar2023pwesuite,
title={{PWESuite}: {P}honetic Word Embeddings and Tasks They Facilitate},
author={Zouhar, Vil{\'e}m and Chang, Kalvin and Cui, Chenxuan and Carlson, Nathaniel and Robinson, Nathaniel and Sachan, Mrinmaya and Mortensen, David},
journal={arXiv preprint arXiv:2304.02541},
year={2023},
url={https://arxiv.org/abs/2304.02541}
}
``` | [
-0.06262808293104172,
-0.5668449401855469,
0.43095093965530396,
0.20272736251354218,
-0.10296078026294708,
-0.14122751355171204,
-0.6600680947303772,
0.1244143694639206,
0.19008812308311462,
-0.04571458697319031,
-0.4299676716327667,
-0.7869740128517151,
-0.4757719337940216,
0.013669107109... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LazarusNLP/stsb_mt_id | LazarusNLP | 2023-05-30T13:35:25Z | 92 | 0 | null | [
"region:us"
] | 2023-05-30T13:35:25Z | 2023-05-27T09:14:38.000Z | 2023-05-27T09:14:38 | ---
dataset_info:
features:
- name: domain
dtype: string
- name: data
dtype: string
- name: type
dtype: string
- name: score
dtype: string
- name: correlation
dtype: string
- name: text_1
dtype: string
- name: text_2
dtype: string
splits:
- name: test
num_bytes: 253093
num_examples: 1379
- name: validation
num_bytes: 305450
num_examples: 1500
download_size: 268625
dataset_size: 558543
---
# Machine Translated Indonesian STS-B
We believe that a synthetic baseline is better than no baseline. Therefore, we followed the approach used in the [Thai Sentence Vector Benchmark](https://github.com/mrpeerat/Thai-Sentence-Vector-Benchmark) project and translated the [STS-B](https://github.com/facebookresearch/SentEval) test set to Indonesian via the Google Translate API. This dataset is used to evaluate a model's Spearman correlation score on the translated test set.
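As a sketch of that evaluation loop (note that `score` is stored as a string per the `dataset_info` above; the similarity function below is a stand-in for a real sentence-embedding model):
```python
from datasets import load_dataset
from scipy.stats import spearmanr

ds = load_dataset("LazarusNLP/stsb_mt_id", split="test")

def similarity(a: str, b: str) -> float:
    # Placeholder: token overlap. Swap in your model's cosine similarity.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

gold = [float(row["score"]) for row in ds]
pred = [similarity(row["text_1"], row["text_2"]) for row in ds]
print(spearmanr(gold, pred).correlation)
```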
You can find the latest STS results that we achieved on this dataset in [Indonesian Sentence Embeddings](https://github.com/LazarusNLP/indo-sentence-embeddings). | [
-0.2104654759168625,
-0.9607947468757629,
0.2938525676727295,
0.42540305852890015,
-0.6662728190422058,
0.10063374787569046,
-0.291450560092926,
-0.5649706721305847,
0.30971598625183105,
0.5713058710098267,
-0.2618025541305542,
-0.5391119718551636,
-0.6355783939361572,
0.477385014295578,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
markytools/goorealv3 | markytools | 2023-06-25T01:20:11Z | 92 | 0 | null | [
"region:us"
] | 2023-06-25T01:20:11Z | 2023-06-23T14:15:54.000Z | 2023-06-23T14:15:54 | ---
dataset_info:
features:
- name: image
dtype: image
- name: split
dtype: string
- name: width
dtype: int64
- name: height
dtype: int64
- name: bboxes
dtype: string
- name: labels
dtype: string
- name: gaze_item
dtype: int64
- name: gazeIdx
dtype: int64
- name: gaze_cx
dtype: int64
- name: gaze_cy
dtype: int64
- name: hx
dtype: int64
- name: hy
dtype: int64
- name: seg
dtype: string
- name: occluded
dtype: bool
- name: person_num
dtype: int64
- name: cam_num
dtype: int64
splits:
- name: test
num_bytes: 6289998121.391
num_examples: 7391
download_size: 6286282416
dataset_size: 6289998121.391
---
The dataset features/columns here closely follow the original GitHub instructions (please read the GitHub documentation first to understand the dataset): https://github.com/upeee/GOO-GAZE2021/blob/main/dataset/gooreal-download.txt
To download GOO-Real from the Hugging Face Hub, run the code below (https://huggingface.co/docs/datasets/v1.10.0/loading_datasets.html#from-the-huggingface-hub):
```python
from datasets import load_dataset

dataset = load_dataset("markytools/goorealv3")
```
The image data will be cached in "~/.cache/huggingface", so delete the files there if you want to free up space.
The "bboxes" and "labels" features are stored as strings; use the code below to convert a string back into a list:
```python
import ast

listOfBboxes = ast.literal_eval(dataset["test"]["bboxes"][0])
```
The "seg" feature is also stored as a string instead of a NumPy ndarray. It is optional; you can manually download the files from https://huggingface.co/datasets/markytools/goosegmv3 using the wget command line. The files are in .npy format, so load them using np.load (https://numpy.org/doc/stable/reference/generated/numpy.load.html). | [
-0.5536161661148071,
-0.4507479965686798,
0.08613401651382446,
0.04180782660841942,
-0.13947854936122894,
-0.1330210417509079,
-0.22753414511680603,
-0.4632101058959961,
0.47508618235588074,
0.44576168060302734,
-0.46350157260894775,
-0.6251109838485718,
-0.45346906781196594,
0.01463075540... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
razhan/asosoft-speech | razhan | 2023-08-30T14:40:10Z | 92 | 1 | null | [
"region:us"
] | 2023-08-30T14:40:10Z | 2023-07-15T08:49:25.000Z | 2023-07-15T08:49:25 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 621160243.56
num_examples: 3240
- name: test
num_bytes: 113413557.0
num_examples: 600
download_size: 702412597
dataset_size: 734573800.56
---
# Dataset Card for "asosoft-speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5564827919006348,
-0.4338291585445404,
-0.016100270673632622,
0.17611481249332428,
-0.15852060914039612,
0.09627693146467209,
-0.30305805802345276,
-0.38328614830970764,
0.9582211971282959,
0.6067544221878052,
-1.0225694179534912,
-0.8249595165252686,
-0.614852249622345,
-0.364827543497... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sngsfydy/aptos_test | sngsfydy | 2023-07-19T19:19:46Z | 92 | 0 | null | [
"region:us"
] | 2023-07-19T19:19:46Z | 2023-07-19T19:18:30.000Z | 2023-07-19T19:18:30 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
splits:
- name: train
num_bytes: 1802932566.6624794
num_examples: 733
download_size: 1800938316
dataset_size: 1802932566.6624794
---
# Dataset Card for "aptos_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4564606845378876,
0.022400226444005966,
0.10383474081754684,
0.18205109238624573,
-0.49858999252319336,
-0.10720392316579819,
0.4606417119503021,
-0.2321566343307495,
0.797559916973114,
0.4830380380153656,
-0.5581004023551941,
-0.544163167476654,
-0.7657907605171204,
-0.1176192164421081... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhoestq/squad | lhoestq | 2023-08-18T10:52:41Z | 92 | 1 | squad | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
... | 2023-08-18T10:52:41Z | 2023-08-18T10:52:20.000Z | 2023-08-18T10:52:20 | ---
pretty_name: SQuAD
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: squad
train-eval-index:
- config: plain_text
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: plain_text
splits:
- name: train
num_bytes: 79317110
num_examples: 87599
- name: validation
num_bytes: 10472653
num_examples: 10570
download_size: 35142551
dataset_size: 89789763
---
# Dataset Card for "squad"
## Table of Contents
- [Dataset Card for "squad"](#dataset-card-for-squad)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [plain_text](#plain_text)
- [Data Fields](#data-fields)
- [plain_text](#plain_text-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 35.14 MB
- **Size of the generated dataset:** 89.92 MB
- **Total amount of disk used:** 125.06 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 35.14 MB
- **Size of the generated dataset:** 89.92 MB
- **Total amount of disk used:** 125.06 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
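A short access sketch based on these fields (assuming this mirror loads like the canonical `squad` dataset):
```python
from datasets import load_dataset

ds = load_dataset("lhoestq/squad", split="train")

# Each example stores parallel lists of answer texts and character offsets.
example = ds[0]
print(example["question"])
print(example["answers"]["text"][0], "at char", example["answers"]["answer_start"][0])
```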
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|87599| 10570|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. | [
-0.6450862884521484,
-0.6308832764625549,
0.09647855162620544,
0.198385551571846,
-0.10528775304555893,
0.08359632641077042,
-0.289564311504364,
-0.3650423288345337,
0.5495766401290894,
0.39507049322128296,
-1.0176292657852173,
-0.8757438659667969,
-0.39779752492904663,
0.22953400015830994... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mapama247/wikihow_es | mapama247 | 2023-09-19T12:48:50Z | 92 | 0 | null | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:es",
"license:cc-by-nc-sa-3.0",
"Spanish",
"WikiHow",
"Wiki Articles",
"Tutorials... | 2023-09-19T12:48:50Z | 2023-09-18T08:39:33.000Z | 2023-09-18T08:39:33 | ---
pretty_name: WikiHow-ES
license: cc-by-nc-sa-3.0
size_categories: 1K<n<10K
language: es
multilinguality: monolingual
task_categories:
- text-classification
- question-answering
- conversational
- summarization
tags:
- Spanish
- WikiHow
- Wiki Articles
- Tutorials
- Step-By-Step
- Instruction Tuning
---
### Dataset Summary
Articles retrieved from the [Spanish WikiHow website](https://es.wikihow.com) in September 2023.
Each article contains a tutorial about a specific topic. The format is always a "How to" question
followed by a detailed step-by-step explanation. In some cases, the response includes several methods.
The main idea is to use this data for instruction tuning of Spanish LLMs, but given its nature it
could also be used for other tasks such as text classification or summarization.
### Languages
- Spanish (ES)
### Usage
To load the full dataset:
```python
from datasets import load_dataset
all_articles = load_dataset("mapama247/wikihow_es")
print(all_articles.num_rows) # output: {'train': 7380}
```
To load only examples from a specific category:
```python
from datasets import load_dataset
sports_articles = load_dataset("mapama247/wikihow_es", "deportes")
print(sports_articles.num_rows) # output: {'train': 201}
```
List of available categories, with the respective number of examples:
```
computadoras-y-electrónica 821
salud 804
pasatiempos 729
cuidado-y-estilo-personal 724
carreras-y-educación 564
en-la-casa-y-el-jardín 496
finanzas-y-negocios 459
comida-y-diversión 454
relaciones 388
mascotas-y-animales 338
filosofía-y-religión 264
arte-y-entretenimiento 254
en-el-trabajo 211
adolescentes 201
deportes 201
vida-familiar 147
viajes 139
automóviles-y-otros-vehículos 100
días-de-fiesta-y-tradiciones 86
```
### Supported Tasks
This dataset can be used to train a model for...
- `instruction-tuning`
- `text-classification`
- `question-answering`
- `conversational`
- `summarization`
## Dataset Structure
### Data Instances
```python
{
'category': str,
'question': str,
'introduction': str,
'answers': List[str],
'short_answers': List[str],
'url': str,
'num_answers': int,
'num_refs': int,
'expert_author': bool,
}
```
### Data Fields
- `category`: The category (from [this list](https://es.wikihow.com/Especial:CategoryListing)) to which the example belongs.
- `label`: Numerical representation of the category, for text classification purposes.
- `question`: The article's title, which always starts with "¿Cómo ...".
- `introduction`: Introductory text that precedes the step-by-step explanation.
- `answers`: List of complete answers, with the full explanation of each step.
- `short_answers`: List of shorter answers that only contain one-sentence steps.
- `num_answers`: The number of alternative answers provided (i.e. the length of `answers`).
- `num_refs`: Number of references provided in the article.
- `expert_author`: Whether the article's author claims to be an expert on the topic or not.
- `url`: The URL address of the original article.
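As a sketch of how these fields could be assembled into instruction-tuning pairs (the pairing below is illustrative; the dataset itself prescribes no prompt template):
```python
from datasets import load_dataset

ds = load_dataset("mapama247/wikihow_es", split="train")

def to_instruction_pair(example):
    # Pair the "How to" question with its first full answer
    # (the answers list is assumed to be non-empty).
    return {"prompt": example["question"], "completion": example["answers"][0]}

pairs = ds.map(to_instruction_pair)
print(pairs[0]["prompt"])
```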
### Data Splits
There is only one split (`train`) that contains a total of 7,380 examples.
## Dataset Creation
### Curation Rationale
This dataset was created to align language models with end tasks and user preferences.
### Source Data
How-To questions with detailed step-by-step answers, retrieved from the WikiHow website.
#### Data Collection and Normalization
All articles available in September 2023 were extracted, with no filters applied.
Along with the article's content, some metadata was retrieved as well.
#### Source language producers
WikiHow users. All the content is human-generated.
### Personal and Sensitive Information
The data does not include personal or sensitive information.
## Considerations
### Social Impact
The Spanish community can benefit from the high-quality data provided by this dataset.
### Bias
No post-processing steps have been applied to mitigate potential social biases.
## Additional Information
### Curators
Marc Pàmes @ Barcelona Supercomputing Center.
### License
This dataset is licensed under a **Creative Commons CC BY-NC-SA 3.0** license.
Quote from [WikiHow's Terms of Use](https://www.wikihow.com/wikiHow:Terms-of-Use):
> All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as
> provided herein. The Creative Commons license allows such user generated text content to be used freely for personal,
> non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of
> the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction
> on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants
> each User of the Service a license to all text content that Users contribute to the Service under the terms and
> conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully.
> You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as
> you wish, whether for commercial or non-commercial purposes.
| [
-0.4655197858810425,
-0.7247970700263977,
0.10539765655994415,
0.22254535555839539,
-0.26273196935653687,
-0.040566761046648026,
-0.26924559473991394,
-0.19604848325252533,
0.35453760623931885,
0.37841108441352844,
-0.8109675049781799,
-0.834254264831543,
-0.4577919542789459,
0.44999399781... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
feedback-to-code/cwa-server-task-instances | feedback-to-code | 2023-11-09T13:22:12Z | 92 | 0 | null | [
"region:us"
] | 2023-11-09T13:22:12Z | 2023-11-09T13:20:59.000Z | 2023-11-09T13:20:59 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kpriyanshu256/semeval-task-8-a-mono-v2 | kpriyanshu256 | 2023-11-10T14:40:40Z | 92 | 0 | null | [
"region:us"
] | 2023-11-10T14:40:40Z | 2023-11-10T14:40:26.000Z | 2023-11-10T14:40:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: model
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 272387024
num_examples: 95805
- name: val
num_bytes: 66852841
num_examples: 23952
- name: test
num_bytes: 10543757
num_examples: 5000
download_size: 201715990
dataset_size: 349783622
---
# Dataset Card for "semeval-task-8-a-mono-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.454600065946579,
-0.18965323269367218,
0.20525364577770233,
0.2527109682559967,
-0.4680454730987549,
-0.12922631204128265,
0.3785131573677063,
-0.18516916036605835,
1.0205299854278564,
0.553954541683197,
-0.865014374256134,
-0.545525074005127,
-0.7758206129074097,
-0.25526896119117737,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ka4on/radiology | Ka4on | 2023-11-11T19:11:44Z | 92 | 0 | null | [
"region:us"
] | 2023-11-11T19:11:44Z | 2023-11-11T18:51:32.000Z | 2023-11-11T18:51:32 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jinaai/code_search_net_clean | jinaai | 2023-11-17T08:51:49Z | 92 | 0 | null | [
"region:us"
] | 2023-11-17T08:51:49Z | 2023-11-15T17:58:52.000Z | 2023-11-15T17:58:52 | ---
dataset_info:
features:
- name: code
dtype: string
- name: docs
dtype: string
- name: queries
dtype: string
splits:
- name: test
num_bytes: 97395014
num_examples: 92561
- name: train
num_bytes: 2762806177
num_examples: 1743105
download_size: 1016995616
dataset_size: 2860201191
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
# Dataset Card for "code_search_net_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5527632236480713,
-0.11828812211751938,
-0.11595176160335541,
-0.13353216648101807,
-0.05047369748353958,
-0.09075517952442169,
0.12098716199398041,
-0.0042124586179852486,
0.8957154750823975,
0.6300410032272339,
-0.5941290855407715,
-0.7824115753173828,
-0.19948112964630127,
-0.1331038... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ylacombe/english_dialects | ylacombe | 2023-11-27T10:32:58Z | 92 | 0 | null | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-11-27T10:32:58Z | 2023-11-25T12:40:07.000Z | 2023-11-25T12:40:07 | ---
dataset_info:
- config_name: irish_male
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 247383069
num_examples: 450
download_size: 202720287
dataset_size: 247383069
- config_name: midlands_female
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 162542037
num_examples: 246
download_size: 132978651
dataset_size: 162542037
- config_name: midlands_male
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 253069802
num_examples: 450
download_size: 206197835
dataset_size: 253069802
- config_name: northern_female
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 473568497
num_examples: 750
download_size: 394563149
dataset_size: 473568497
- config_name: northern_male
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 1248889021.568
num_examples: 2097
download_size: 1018089994
dataset_size: 1248889021.568
- config_name: scottish_female
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 547825387
num_examples: 894
download_size: 444335278
dataset_size: 547825387
- config_name: scottish_male
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 957274572.368
num_examples: 1649
download_size: 771585437
dataset_size: 957274572.368
- config_name: southern_female
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 2500285879.784
num_examples: 4161
download_size: 2043363777
dataset_size: 2500285879.784
- config_name: southern_male
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 2566139827.568
num_examples: 4331
download_size: 2105363890
dataset_size: 2566139827.568
- config_name: welsh_female
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 852961200.976
num_examples: 1199
download_size: 737774228
dataset_size: 852961200.976
- config_name: welsh_male
features:
- name: line_id
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 1026953293.4
num_examples: 1650
download_size: 926205900
dataset_size: 1026953293.4
configs:
- config_name: irish_male
data_files:
- split: train
path: irish_male/train-*
- config_name: midlands_female
data_files:
- split: train
path: midlands_female/train-*
- config_name: midlands_male
data_files:
- split: train
path: midlands_male/train-*
- config_name: northern_female
data_files:
- split: train
path: northern_female/train-*
- config_name: northern_male
data_files:
- split: train
path: northern_male/train-*
- config_name: scottish_female
data_files:
- split: train
path: scottish_female/train-*
- config_name: scottish_male
data_files:
- split: train
path: scottish_male/train-*
- config_name: southern_female
data_files:
- split: train
path: southern_female/train-*
- config_name: southern_male
data_files:
- split: train
path: southern_male/train-*
- config_name: welsh_female
data_files:
- split: train
path: welsh_female/train-*
- config_name: welsh_male
data_files:
- split: train
path: welsh_male/train-*
license: cc-by-sa-4.0
task_categories:
- text-to-speech
- text-to-audio
language:
- en
pretty_name: Google English Dialects
---
# Dataset Card for "english_dialects"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Crowdsourced high-quality UK and Ireland English Dialect speech data set.](https://www.openslr.org/83/)
- **Repository:** [Google Language Resources and Tools](https://github.com/google/language-resources)
- **Paper:** [Open-source Multi-speaker Corpora of the English Accents in the British Isles](https://aclanthology.org/2020.lrec-1.804/)
### Dataset Summary
This dataset consists of 31 hours of transcribed high-quality audio of English sentences recorded by 120 volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The speakers self-identified as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.
The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage.
The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words.
Overlapping lines for all speakers were included for idiolect elicitation; these include the same or similar lines as other existing resources such as the [CSTR VCTK corpus](https://huggingface.co/datasets/vctk) and the Speech Accent Archive to allow for easy comparison of personal and regional accents.
The data archives were restructured from the original ones on [OpenSLR](http://www.openslr.org/83) to make the dataset easier to stream.
### Supported Tasks
- `text-to-speech`, `text-to-audio`: The dataset can be used to train a model for Text-To-Speech (TTS).
- `automatic-speech-recognition`, `speaker-identification`: The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
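As a sketch of the WER computation (using the third-party `jiwer` package, which is a convenience assumption rather than something this card prescribes):
```python
import jiwer  # pip install jiwer

reference = "it is thirteen degrees with drizzle in exeter"
hypothesis = "it is thirteen degrees with drizzle in exerter"
print(jiwer.wer(reference, hypothesis))  # fraction of word-level errors
```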
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Irish male config, simply specify the corresponding language config name (i.e., "irish_male" for Irish male speakers):
```python
from datasets import load_dataset
dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train", streaming=True)
print(next(iter(dataset)))
```
#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
**Local:**
```python
from datasets import load_dataset
from torch.utils.data import BatchSampler, DataLoader, RandomSampler
dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```
**Streaming:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file called `audio` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'line_id': 'BI0057', 'audio': {'path': 'irm_02484_00388340153.wav', 'array': array([-1.22070312e-04, -1.52587891e-04, -1.22070312e-04, ...,
1.52587891e-04, 9.15527344e-05, 1.83105469e-04]), 'sampling_rate': 48000}, 'text': 'It is thirteen degrees with drizzle in Exeter', 'speaker_id': 2484}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- line_id: unique id of the transcription. The same line id can be found for multiple speakers.
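For example, to decode the `audio` column at a different sampling rate on the fly (a standard `datasets` feature cast; the 16 kHz target is only an illustration):
```python
from datasets import Audio, load_dataset

dataset = load_dataset("ylacombe/english_dialects", "irish_male", split="train")

# Decode audio at 16 kHz instead of the native 48 kHz.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print(dataset[0]["audio"]["sampling_rate"])  # 16000
```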
### Data Statistics

## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
License: ([CC BY-SA 4.0 DEED](https://creativecommons.org/licenses/by-sa/4.0/deed.en))
### Citation Information
```
@inproceedings{demirsahin-etal-2020-open,
title = "Open-source Multi-speaker Corpora of the {E}nglish Accents in the {B}ritish Isles",
author = "Demirsahin, Isin and
Kjartansson, Oddur and
Gutkin, Alexander and
Rivera, Clara",
editor = "Calzolari, Nicoletta and
B{\'e}chet, Fr{\'e}d{\'e}ric and
Blache, Philippe and
Choukri, Khalid and
Cieri, Christopher and
Declerck, Thierry and
Goggi, Sara and
Isahara, Hitoshi and
Maegaard, Bente and
Mariani, Joseph and
Mazo, H{\'e}l{\`e}ne and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.804",
pages = "6532--6541",
abstract = "This paper presents a dataset of transcribed high-quality audio of English sentences recorded by volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage. The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. Overlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. The resulting corpora include over 31 hours of recordings from 120 volunteers who self-identify as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
### Contributions
Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.
| [
-0.42740046977996826,
-0.5305050611495972,
-0.06705445796251297,
0.3074851632118225,
-0.12616589665412903,
-0.06667990237474442,
-0.6430954933166504,
-0.373857319355011,
0.6484925746917725,
0.5325883030891418,
-0.49724382162094116,
-0.796329915523529,
-0.4204637408256531,
0.445543915033340... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrm8488/ImageNet1K-val | mrm8488 | 2022-04-27T19:16:51Z | 91 | 0 | null | [
"region:us"
] | 2022-04-27T19:16:51Z | 2022-04-27T19:05:28.000Z | 2022-04-27T19:05:28 | mapping:
```
n01440764 tench, Tinca tinca
n01443537 goldfish, Carassius auratus
n01484850 great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
n01491361 tiger shark, Galeocerdo cuvieri
n01494475 hammerhead, hammerhead shark
n01496331 electric ray, crampfish, numbfish, torpedo
n01498041 stingray
n01514668 cock
n01514859 hen
n01518878 ostrich, Struthio camelus
n01530575 brambling, Fringilla montifringilla
n01531178 goldfinch, Carduelis carduelis
n01532829 house finch, linnet, Carpodacus mexicanus
n01534433 junco, snowbird
n01537544 indigo bunting, indigo finch, indigo bird, Passerina cyanea
n01558993 robin, American robin, Turdus migratorius
n01560419 bulbul
n01580077 jay
n01582220 magpie
n01592084 chickadee
n01601694 water ouzel, dipper
n01608432 kite
n01614925 bald eagle, American eagle, Haliaeetus leucocephalus
n01616318 vulture
n01622779 great grey owl, great gray owl, Strix nebulosa
n01629819 European fire salamander, Salamandra salamandra
n01630670 common newt, Triturus vulgaris
n01631663 eft
n01632458 spotted salamander, Ambystoma maculatum
n01632777 axolotl, mud puppy, Ambystoma mexicanum
n01641577 bullfrog, Rana catesbeiana
n01644373 tree frog, tree-frog
n01644900 tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
n01664065 loggerhead, loggerhead turtle, Caretta caretta
n01665541 leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
n01667114 mud turtle
n01667778 terrapin
n01669191 box turtle, box tortoise
n01675722 banded gecko
n01677366 common iguana, iguana, Iguana iguana
n01682714 American chameleon, anole, Anolis carolinensis
n01685808 whiptail, whiptail lizard
n01687978 agama
n01688243 frilled lizard, Chlamydosaurus kingi
n01689811 alligator lizard
n01692333 Gila monster, Heloderma suspectum
n01693334 green lizard, Lacerta viridis
n01694178 African chameleon, Chamaeleo chamaeleon
n01695060 Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis
n01697457 African crocodile, Nile crocodile, Crocodylus niloticus
n01698640 American alligator, Alligator mississipiensis
n01704323 triceratops
n01728572 thunder snake, worm snake, Carphophis amoenus
n01728920 ringneck snake, ring-necked snake, ring snake
n01729322 hognose snake, puff adder, sand viper
n01729977 green snake, grass snake
n01734418 king snake, kingsnake
n01735189 garter snake, grass snake
n01737021 water snake
n01739381 vine snake
n01740131 night snake, Hypsiglena torquata
n01742172 boa constrictor, Constrictor constrictor
n01744401 rock python, rock snake, Python sebae
n01748264 Indian cobra, Naja naja
n01749939 green mamba
n01751748 sea snake
n01753488 horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
n01755581 diamondback, diamondback rattlesnake, Crotalus adamanteus
n01756291 sidewinder, horned rattlesnake, Crotalus cerastes
n01768244 trilobite
n01770081 harvestman, daddy longlegs, Phalangium opilio
n01770393 scorpion
n01773157 black and gold garden spider, Argiope aurantia
n01773549 barn spider, Araneus cavaticus
n01773797 garden spider, Aranea diademata
n01774384 black widow, Latrodectus mactans
n01774750 tarantula
n01775062 wolf spider, hunting spider
n01776313 tick
n01784675 centipede
n01795545 black grouse
n01796340 ptarmigan
n01797886 ruffed grouse, partridge, Bonasa umbellus
n01798484 prairie chicken, prairie grouse, prairie fowl
n01806143 peacock
n01806567 quail
n01807496 partridge
n01817953 African grey, African gray, Psittacus erithacus
n01818515 macaw
n01819313 sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
n01820546 lorikeet
n01824575 coucal
n01828970 bee eater
n01829413 hornbill
n01833805 hummingbird
n01843065 jacamar
n01843383 toucan
n01847000 drake
n01855032 red-breasted merganser, Mergus serrator
n01855672 goose
n01860187 black swan, Cygnus atratus
n01871265 tusker
n01872401 echidna, spiny anteater, anteater
n01873310 platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus
n01877812 wallaby, brush kangaroo
n01882714 koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
n01883070 wombat
n01910747 jellyfish
n01914609 sea anemone, anemone
n01917289 brain coral
n01924916 flatworm, platyhelminth
n01930112 nematode, nematode worm, roundworm
n01943899 conch
n01944390 snail
n01945685 slug
n01950731 sea slug, nudibranch
n01955084 chiton, coat-of-mail shell, sea cradle, polyplacophore
n01968897 chambered nautilus, pearly nautilus, nautilus
n01978287 Dungeness crab, Cancer magister
n01978455 rock crab, Cancer irroratus
n01980166 fiddler crab
n01981276 king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica
n01983481 American lobster, Northern lobster, Maine lobster, Homarus americanus
n01984695 spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
n01985128 crayfish, crawfish, crawdad, crawdaddy
n01986214 hermit crab
n01990800 isopod
n02002556 white stork, Ciconia ciconia
n02002724 black stork, Ciconia nigra
n02006656 spoonbill
n02007558 flamingo
n02009229 little blue heron, Egretta caerulea
n02009912 American egret, great white heron, Egretta albus
n02011460 bittern
n02012849 crane
n02013706 limpkin, Aramus pictus
n02017213 European gallinule, Porphyrio porphyrio
n02018207 American coot, marsh hen, mud hen, water hen, Fulica americana
n02018795 bustard
n02025239 ruddy turnstone, Arenaria interpres
n02027492 red-backed sandpiper, dunlin, Erolia alpina
n02028035 redshank, Tringa totanus
n02033041 dowitcher
n02037110 oystercatcher, oyster catcher
n02051845 pelican
n02056570 king penguin, Aptenodytes patagonica
n02058221 albatross, mollymawk
n02066245 grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus
n02071294 killer whale, killer, orca, grampus, sea wolf, Orcinus orca
n02074367 dugong, Dugong dugon
n02077923 sea lion
n02085620 Chihuahua
n02085782 Japanese spaniel
n02085936 Maltese dog, Maltese terrier, Maltese
n02086079 Pekinese, Pekingese, Peke
n02086240 Shih-Tzu
n02086646 Blenheim spaniel
n02086910 papillon
n02087046 toy terrier
n02087394 Rhodesian ridgeback
n02088094 Afghan hound, Afghan
n02088238 basset, basset hound
n02088364 beagle
n02088466 bloodhound, sleuthhound
n02088632 bluetick
n02089078 black-and-tan coonhound
n02089867 Walker hound, Walker foxhound
n02089973 English foxhound
n02090379 redbone
n02090622 borzoi, Russian wolfhound
n02090721 Irish wolfhound
n02091032 Italian greyhound
n02091134 whippet
n02091244 Ibizan hound, Ibizan Podenco
n02091467 Norwegian elkhound, elkhound
n02091635 otterhound, otter hound
n02091831 Saluki, gazelle hound
n02092002 Scottish deerhound, deerhound
n02092339 Weimaraner
n02093256 Staffordshire bullterrier, Staffordshire bull terrier
n02093428 American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier
n02093647 Bedlington terrier
n02093754 Border terrier
n02093859 Kerry blue terrier
n02093991 Irish terrier
n02094114 Norfolk terrier
n02094258 Norwich terrier
n02094433 Yorkshire terrier
n02095314 wire-haired fox terrier
n02095570 Lakeland terrier
n02095889 Sealyham terrier, Sealyham
n02096051 Airedale, Airedale terrier
n02096177 cairn, cairn terrier
n02096294 Australian terrier
n02096437 Dandie Dinmont, Dandie Dinmont terrier
n02096585 Boston bull, Boston terrier
n02097047 miniature schnauzer
n02097130 giant schnauzer
n02097209 standard schnauzer
n02097298 Scotch terrier, Scottish terrier, Scottie
n02097474 Tibetan terrier, chrysanthemum dog
n02097658 silky terrier, Sydney silky
n02098105 soft-coated wheaten terrier
n02098286 West Highland white terrier
n02098413 Lhasa, Lhasa apso
n02099267 flat-coated retriever
n02099429 curly-coated retriever
n02099601 golden retriever
n02099712 Labrador retriever
n02099849 Chesapeake Bay retriever
n02100236 German short-haired pointer
n02100583 vizsla, Hungarian pointer
n02100735 English setter
n02100877 Irish setter, red setter
n02101006 Gordon setter
n02101388 Brittany spaniel
n02101556 clumber, clumber spaniel
n02102040 English springer, English springer spaniel
n02102177 Welsh springer spaniel
n02102318 cocker spaniel, English cocker spaniel, cocker
n02102480 Sussex spaniel
n02102973 Irish water spaniel
n02104029 kuvasz
n02104365 schipperke
n02105056 groenendael
n02105162 malinois
n02105251 briard
n02105412 kelpie
n02105505 komondor
n02105641 Old English sheepdog, bobtail
n02105855 Shetland sheepdog, Shetland sheep dog, Shetland
n02106030 collie
n02106166 Border collie
n02106382 Bouvier des Flandres, Bouviers des Flandres
n02106550 Rottweiler
n02106662 German shepherd, German shepherd dog, German police dog, alsatian
n02107142 Doberman, Doberman pinscher
n02107312 miniature pinscher
n02107574 Greater Swiss Mountain dog
n02107683 Bernese mountain dog
n02107908 Appenzeller
n02108000 EntleBucher
n02108089 boxer
n02108422 bull mastiff
n02108551 Tibetan mastiff
n02108915 French bulldog
n02109047 Great Dane
n02109525 Saint Bernard, St Bernard
n02109961 Eskimo dog, husky
n02110063 malamute, malemute, Alaskan malamute
n02110185 Siberian husky
n02110341 dalmatian, coach dog, carriage dog
n02110627 affenpinscher, monkey pinscher, monkey dog
n02110806 basenji
n02110958 pug, pug-dog
n02111129 Leonberg
n02111277 Newfoundland, Newfoundland dog
n02111500 Great Pyrenees
n02111889 Samoyed, Samoyede
n02112018 Pomeranian
n02112137 chow, chow chow
n02112350 keeshond
n02112706 Brabancon griffon
n02113023 Pembroke, Pembroke Welsh corgi
n02113186 Cardigan, Cardigan Welsh corgi
n02113624 toy poodle
n02113712 miniature poodle
n02113799 standard poodle
n02113978 Mexican hairless
n02114367 timber wolf, grey wolf, gray wolf, Canis lupus
n02114548 white wolf, Arctic wolf, Canis lupus tundrarum
n02114712 red wolf, maned wolf, Canis rufus, Canis niger
n02114855 coyote, prairie wolf, brush wolf, Canis latrans
n02115641 dingo, warrigal, warragal, Canis dingo
n02115913 dhole, Cuon alpinus
n02116738 African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
n02117135 hyena, hyaena
n02119022 red fox, Vulpes vulpes
n02119789 kit fox, Vulpes macrotis
n02120079 Arctic fox, white fox, Alopex lagopus
n02120505 grey fox, gray fox, Urocyon cinereoargenteus
n02123045 tabby, tabby cat
n02123159 tiger cat
n02123394 Persian cat
n02123597 Siamese cat, Siamese
n02124075 Egyptian cat
n02125311 cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
n02127052 lynx, catamount
n02128385 leopard, Panthera pardus
n02128757 snow leopard, ounce, Panthera uncia
n02128925 jaguar, panther, Panthera onca, Felis onca
n02129165 lion, king of beasts, Panthera leo
n02129604 tiger, Panthera tigris
n02130308 cheetah, chetah, Acinonyx jubatus
n02132136 brown bear, bruin, Ursus arctos
n02133161 American black bear, black bear, Ursus americanus, Euarctos americanus
n02134084 ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
n02134418 sloth bear, Melursus ursinus, Ursus ursinus
n02137549 mongoose
n02138441 meerkat, mierkat
n02165105 tiger beetle
n02165456 ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
n02167151 ground beetle, carabid beetle
n02168699 long-horned beetle, longicorn, longicorn beetle
n02169497 leaf beetle, chrysomelid
n02172182 dung beetle
n02174001 rhinoceros beetle
n02177972 weevil
n02190166 fly
n02206856 bee
n02219486 ant, emmet, pismire
n02226429 grasshopper, hopper
n02229544 cricket
n02231487 walking stick, walkingstick, stick insect
n02233338 cockroach, roach
n02236044 mantis, mantid
n02256656 cicada, cicala
n02259212 leafhopper
n02264363 lacewing, lacewing fly
n02268443 dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk
n02268853 damselfly
n02276258 admiral
n02277742 ringlet, ringlet butterfly
n02279972 monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
n02280649 cabbage butterfly
n02281406 sulphur butterfly, sulfur butterfly
n02281787 lycaenid, lycaenid butterfly
n02317335 starfish, sea star
n02319095 sea urchin
n02321529 sea cucumber, holothurian
n02325366 wood rabbit, cottontail, cottontail rabbit
n02326432 hare
n02328150 Angora, Angora rabbit
n02342885 hamster
n02346627 porcupine, hedgehog
n02356798 fox squirrel, eastern fox squirrel, Sciurus niger
n02361337 marmot
n02363005 beaver
n02364673 guinea pig, Cavia cobaya
n02389026 sorrel
n02391049 zebra
n02395406 hog, pig, grunter, squealer, Sus scrofa
n02396427 wild boar, boar, Sus scrofa
n02397096 warthog
n02398521 hippopotamus, hippo, river horse, Hippopotamus amphibius
n02403003 ox
n02408429 water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
n02410509 bison
n02412080 ram, tup
n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis
n02417914 ibex, Capra ibex
n02422106 hartebeest
n02422699 impala, Aepyceros melampus
n02423022 gazelle
n02437312 Arabian camel, dromedary, Camelus dromedarius
n02437616 llama
n02441942 weasel
n02442845 mink
n02443114 polecat, fitch, foulmart, foumart, Mustela putorius
n02443484 black-footed ferret, ferret, Mustela nigripes
n02444819 otter
n02445715 skunk, polecat, wood pussy
n02447366 badger
n02454379 armadillo
n02457408 three-toed sloth, ai, Bradypus tridactylus
n02480495 orangutan, orang, orangutang, Pongo pygmaeus
n02480855 gorilla, Gorilla gorilla
n02481823 chimpanzee, chimp, Pan troglodytes
n02483362 gibbon, Hylobates lar
n02483708 siamang, Hylobates syndactylus, Symphalangus syndactylus
n02484975 guenon, guenon monkey
n02486261 patas, hussar monkey, Erythrocebus patas
n02486410 baboon
n02487347 macaque
n02488291 langur
n02488702 colobus, colobus monkey
n02489166 proboscis monkey, Nasalis larvatus
n02490219 marmoset
n02492035 capuchin, ringtail, Cebus capucinus
n02492660 howler monkey, howler
n02493509 titi, titi monkey
n02493793 spider monkey, Ateles geoffroyi
n02494079 squirrel monkey, Saimiri sciureus
n02497673 Madagascar cat, ring-tailed lemur, Lemur catta
n02500267 indri, indris, Indri indri, Indri brevicaudatus
n02504013 Indian elephant, Elephas maximus
n02504458 African elephant, Loxodonta africana
n02509815 lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
n02510455 giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
n02514041 barracouta, snoek
n02526121 eel
n02536864 coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch
n02606052 rock beauty, Holocanthus tricolor
n02607072 anemone fish
n02640242 sturgeon
n02641379 gar, garfish, garpike, billfish, Lepisosteus osseus
n02643566 lionfish
n02655020 puffer, pufferfish, blowfish, globefish
n02666196 abacus
n02667093 abaya
n02669723 academic gown, academic robe, judge's robe
n02672831 accordion, piano accordion, squeeze box
n02676566 acoustic guitar
n02687172 aircraft carrier, carrier, flattop, attack aircraft carrier
n02690373 airliner
n02692877 airship, dirigible
n02699494 altar
n02701002 ambulance
n02704792 amphibian, amphibious vehicle
n02708093 analog clock
n02727426 apiary, bee house
n02730930 apron
n02747177 ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin
n02749479 assault rifle, assault gun
n02769748 backpack, back pack, knapsack, packsack, rucksack, haversack
n02776631 bakery, bakeshop, bakehouse
n02777292 balance beam, beam
n02782093 balloon
n02783161 ballpoint, ballpoint pen, ballpen, Biro
n02786058 Band Aid
n02787622 banjo
n02788148 bannister, banister, balustrade, balusters, handrail
n02790996 barbell
n02791124 barber chair
n02791270 barbershop
n02793495 barn
n02794156 barometer
n02795169 barrel, cask
n02797295 barrow, garden cart, lawn cart, wheelbarrow
n02799071 baseball
n02802426 basketball
n02804414 bassinet
n02804610 bassoon
n02807133 bathing cap, swimming cap
n02808304 bath towel
n02808440 bathtub, bathing tub, bath, tub
n02814533 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
n02814860 beacon, lighthouse, beacon light, pharos
n02815834 beaker
n02817516 bearskin, busby, shako
n02823428 beer bottle
n02823750 beer glass
n02825657 bell cote, bell cot
n02834397 bib
n02835271 bicycle-built-for-two, tandem bicycle, tandem
n02837789 bikini, two-piece
n02840245 binder, ring-binder
n02841315 binoculars, field glasses, opera glasses
n02843684 birdhouse
n02859443 boathouse
n02860847 bobsled, bobsleigh, bob
n02865351 bolo tie, bolo, bola tie, bola
n02869837 bonnet, poke bonnet
n02870880 bookcase
n02871525 bookshop, bookstore, bookstall
n02877765 bottlecap
n02879718 bow
n02883205 bow tie, bow-tie, bowtie
n02892201 brass, memorial tablet, plaque
n02892767 brassiere, bra, bandeau
n02894605 breakwater, groin, groyne, mole, bulwark, seawall, jetty
n02895154 breastplate, aegis, egis
n02906734 broom
n02909870 bucket, pail
n02910353 buckle
n02916936 bulletproof vest
n02917067 bullet train, bullet
n02927161 butcher shop, meat market
n02930766 cab, hack, taxi, taxicab
n02939185 caldron, cauldron
n02948072 candle, taper, wax light
n02950826 cannon
n02951358 canoe
n02951585 can opener, tin opener
n02963159 cardigan
n02965783 car mirror
n02966193 carousel, carrousel, merry-go-round, roundabout, whirligig
n02966687 carpenter's kit, tool kit
n02971356 carton
n02974003 car wheel
n02977058 cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM
n02978881 cassette
n02979186 cassette player
n02980441 castle
n02981792 catamaran
n02988304 CD player
n02992211 cello, violoncello
n02992529 cellular telephone, cellular phone, cellphone, cell, mobile phone
n02999410 chain
n03000134 chainlink fence
n03000247 chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour
n03000684 chain saw, chainsaw
n03014705 chest
n03016953 chiffonier, commode
n03017168 chime, bell, gong
n03018349 china cabinet, china closet
n03026506 Christmas stocking
n03028079 church, church building
n03032252 cinema, movie theater, movie theatre, movie house, picture palace
n03041632 cleaver, meat cleaver, chopper
n03042490 cliff dwelling
n03045698 cloak
n03047690 clog, geta, patten, sabot
n03062245 cocktail shaker
n03063599 coffee mug
n03063689 coffeepot
n03065424 coil, spiral, volute, whorl, helix
n03075370 combination lock
n03085013 computer keyboard, keypad
n03089624 confectionery, confectionary, candy store
n03095699 container ship, containership, container vessel
n03100240 convertible
n03109150 corkscrew, bottle screw
n03110669 cornet, horn, trumpet, trump
n03124043 cowboy boot
n03124170 cowboy hat, ten-gallon hat
n03125729 cradle
n03126707 crane
n03127747 crash helmet
n03127925 crate
n03131574 crib, cot
n03133878 Crock Pot
n03134739 croquet ball
n03141823 crutch
n03146219 cuirass
n03160309 dam, dike, dyke
n03179701 desk
n03180011 desktop computer
n03187595 dial telephone, dial phone
n03188531 diaper, nappy, napkin
n03196217 digital clock
n03197337 digital watch
n03201208 dining table, board
n03207743 dishrag, dishcloth
n03207941 dishwasher, dish washer, dishwashing machine
n03208938 disk brake, disc brake
n03216828 dock, dockage, docking facility
n03218198 dogsled, dog sled, dog sleigh
n03220513 dome
n03223299 doormat, welcome mat
n03240683 drilling platform, offshore rig
n03249569 drum, membranophone, tympan
n03250847 drumstick
n03255030 dumbbell
n03259280 Dutch oven
n03271574 electric fan, blower
n03272010 electric guitar
n03272562 electric locomotive
n03290653 entertainment center
n03291819 envelope
n03297495 espresso maker
n03314780 face powder
n03325584 feather boa, boa
n03337140 file, file cabinet, filing cabinet
n03344393 fireboat
n03345487 fire engine, fire truck
n03347037 fire screen, fireguard
n03355925 flagpole, flagstaff
n03372029 flute, transverse flute
n03376595 folding chair
n03379051 football helmet
n03384352 forklift
n03388043 fountain
n03388183 fountain pen
n03388549 four-poster
n03393912 freight car
n03394916 French horn, horn
n03400231 frying pan, frypan, skillet
n03404251 fur coat
n03417042 garbage truck, dustcart
n03424325 gasmask, respirator, gas helmet
n03425413 gas pump, gasoline pump, petrol pump, island dispenser
n03443371 goblet
n03444034 go-kart
n03445777 golf ball
n03445924 golfcart, golf cart
n03447447 gondola
n03447721 gong, tam-tam
n03450230 gown
n03452741 grand piano, grand
n03457902 greenhouse, nursery, glasshouse
n03459775 grille, radiator grille
n03461385 grocery store, grocery, food market, market
n03467068 guillotine
n03476684 hair slide
n03476991 hair spray
n03478589 half track
n03481172 hammer
n03482405 hamper
n03483316 hand blower, blow dryer, blow drier, hair dryer, hair drier
n03485407 hand-held computer, hand-held microcomputer
n03485794 handkerchief, hankie, hanky, hankey
n03492542 hard disc, hard disk, fixed disk
n03494278 harmonica, mouth organ, harp, mouth harp
n03495258 harp
n03496892 harvester, reaper
n03498962 hatchet
n03527444 holster
n03529860 home theater, home theatre
n03530642 honeycomb
n03532672 hook, claw
n03534580 hoopskirt, crinoline
n03535780 horizontal bar, high bar
n03538406 horse cart, horse-cart
n03544143 hourglass
n03584254 iPod
n03584829 iron, smoothing iron
n03590841 jack-o'-lantern
n03594734 jean, blue jean, denim
n03594945 jeep, landrover
n03595614 jersey, T-shirt, tee shirt
n03598930 jigsaw puzzle
n03599486 jinrikisha, ricksha, rickshaw
n03602883 joystick
n03617480 kimono
n03623198 knee pad
n03627232 knot
n03630383 lab coat, laboratory coat
n03633091 ladle
n03637318 lampshade, lamp shade
n03642806 laptop, laptop computer
n03649909 lawn mower, mower
n03657121 lens cap, lens cover
n03658185 letter opener, paper knife, paperknife
n03661043 library
n03662601 lifeboat
n03666591 lighter, light, igniter, ignitor
n03670208 limousine, limo
n03673027 liner, ocean liner
n03676483 lipstick, lip rouge
n03680355 Loafer
n03690938 lotion
n03691459 loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
n03692522 loupe, jeweler's loupe
n03697007 lumbermill, sawmill
n03706229 magnetic compass
n03709823 mailbag, postbag
n03710193 mailbox, letter box
n03710637 maillot
n03710721 maillot, tank suit
n03717622 manhole cover
n03720891 maraca
n03721384 marimba, xylophone
n03724870 mask
n03729826 matchstick
n03733131 maypole
n03733281 maze, labyrinth
n03733805 measuring cup
n03742115 medicine chest, medicine cabinet
n03743016 megalith, megalithic structure
n03759954 microphone, mike
n03761084 microwave, microwave oven
n03763968 military uniform
n03764736 milk can
n03769881 minibus
n03770439 miniskirt, mini
n03770679 minivan
n03773504 missile
n03775071 mitten
n03775546 mixing bowl
n03776460 mobile home, manufactured home
n03777568 Model T
n03777754 modem
n03781244 monastery
n03782006 monitor
n03785016 moped
n03786901 mortar
n03787032 mortarboard
n03788195 mosque
n03788365 mosquito net
n03791053 motor scooter, scooter
n03792782 mountain bike, all-terrain bike, off-roader
n03792972 mountain tent
n03793489 mouse, computer mouse
n03794056 mousetrap
n03796401 moving van
n03803284 muzzle
n03804744 nail
n03814639 neck brace
n03814906 necklace
n03825788 nipple
n03832673 notebook, notebook computer
n03837869 obelisk
n03838899 oboe, hautboy, hautbois
n03840681 ocarina, sweet potato
n03841143 odometer, hodometer, mileometer, milometer
n03843555 oil filter
n03854065 organ, pipe organ
n03857828 oscilloscope, scope, cathode-ray oscilloscope, CRO
n03866082 overskirt
n03868242 oxcart
n03868863 oxygen mask
n03871628 packet
n03873416 paddle, boat paddle
n03874293 paddlewheel, paddle wheel
n03874599 padlock
n03876231 paintbrush
n03877472 pajama, pyjama, pj's, jammies
n03877845 palace
n03884397 panpipe, pandean pipe, syrinx
n03887697 paper towel
n03888257 parachute, chute
n03888605 parallel bars, bars
n03891251 park bench
n03891332 parking meter
n03895866 passenger car, coach, carriage
n03899768 patio, terrace
n03902125 pay-phone, pay-station
n03903868 pedestal, plinth, footstall
n03908618 pencil box, pencil case
n03908714 pencil sharpener
n03916031 perfume, essence
n03920288 Petri dish
n03924679 photocopier
n03929660 pick, plectrum, plectron
n03929855 pickelhaube
n03930313 picket fence, paling
n03930630 pickup, pickup truck
n03933933 pier
n03935335 piggy bank, penny bank
n03937543 pill bottle
n03938244 pillow
n03942813 ping-pong ball
n03944341 pinwheel
n03947888 pirate, pirate ship
n03950228 pitcher, ewer
n03954731 plane, carpenter's plane, woodworking plane
n03956157 planetarium
n03958227 plastic bag
n03961711 plate rack
n03967562 plow, plough
n03970156 plunger, plumber's helper
n03976467 Polaroid camera, Polaroid Land camera
n03976657 pole
n03977966 police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria
n03980874 poncho
n03982430 pool table, billiard table, snooker table
n03983396 pop bottle, soda bottle
n03991062 pot, flowerpot
n03992509 potter's wheel
n03995372 power drill
n03998194 prayer rug, prayer mat
n04004767 printer
n04005630 prison, prison house
n04008634 projectile, missile
n04009552 projector
n04019541 puck, hockey puck
n04023962 punching bag, punch bag, punching ball, punchball
n04026417 purse
n04033901 quill, quill pen
n04033995 quilt, comforter, comfort, puff
n04037443 racer, race car, racing car
n04039381 racket, racquet
n04040759 radiator
n04041544 radio, wireless
n04044716 radio telescope, radio reflector
n04049303 rain barrel
n04065272 recreational vehicle, RV, R.V.
n04067472 reel
n04069434 reflex camera
n04070727 refrigerator, icebox
n04074963 remote control, remote
n04081281 restaurant, eating house, eating place, eatery
n04086273 revolver, six-gun, six-shooter
n04090263 rifle
n04099969 rocking chair, rocker
n04111531 rotisserie
n04116512 rubber eraser, rubber, pencil eraser
n04118538 rugby ball
n04118776 rule, ruler
n04120489 running shoe
n04125021 safe
n04127249 safety pin
n04131690 saltshaker, salt shaker
n04133789 sandal
n04136333 sarong
n04141076 sax, saxophone
n04141327 scabbard
n04141975 scale, weighing machine
n04146614 school bus
n04147183 schooner
n04149813 scoreboard
n04152593 screen, CRT screen
n04153751 screw
n04154565 screwdriver
n04162706 seat belt, seatbelt
n04179913 sewing machine
n04192698 shield, buckler
n04200800 shoe shop, shoe-shop, shoe store
n04201297 shoji
n04204238 shopping basket
n04204347 shopping cart
n04208210 shovel
n04209133 shower cap
n04209239 shower curtain
n04228054 ski
n04229816 ski mask
n04235860 sleeping bag
n04238763 slide rule, slipstick
n04239074 sliding door
n04243546 slot, one-armed bandit
n04251144 snorkel
n04252077 snowmobile
n04252225 snowplow, snowplough
n04254120 soap dispenser
n04254680 soccer ball
n04254777 sock
n04258138 solar dish, solar collector, solar furnace
n04259630 sombrero
n04263257 soup bowl
n04264628 space bar
n04265275 space heater
n04266014 space shuttle
n04270147 spatula
n04273569 speedboat
n04275548 spider web, spider's web
n04277352 spindle
n04285008 sports car, sport car
n04286575 spotlight, spot
n04296562 stage
n04310018 steam locomotive
n04311004 steel arch bridge
n04311174 steel drum
n04317175 stethoscope
n04325704 stole
n04326547 stone wall
n04328186 stopwatch, stop watch
n04330267 stove
n04332243 strainer
n04335435 streetcar, tram, tramcar, trolley, trolley car
n04336792 stretcher
n04344873 studio couch, day bed
n04346328 stupa, tope
n04347754 submarine, pigboat, sub, U-boat
n04350905 suit, suit of clothes
n04355338 sundial
n04355933 sunglass
n04356056 sunglasses, dark glasses, shades
n04357314 sunscreen, sunblock, sun blocker
n04366367 suspension bridge
n04367480 swab, swob, mop
n04370456 sweatshirt
n04371430 swimming trunks, bathing trunks
n04371774 swing
n04372370 switch, electric switch, electrical switch
n04376876 syringe
n04380533 table lamp
n04389033 tank, army tank, armored combat vehicle, armoured combat vehicle
n04392985 tape player
n04398044 teapot
n04399382 teddy, teddy bear
n04404412 television, television system
n04409515 tennis ball
n04417672 thatch, thatched roof
n04418357 theater curtain, theatre curtain
n04423845 thimble
n04428191 thresher, thrasher, threshing machine
n04429376 throne
n04435653 tile roof
n04442312 toaster
n04443257 tobacco shop, tobacconist shop, tobacconist
n04447861 toilet seat
n04456115 torch
n04458633 totem pole
n04461696 tow truck, tow car, wrecker
n04462240 toyshop
n04465501 tractor
n04467665 trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi
n04476259 tray
n04479046 trench coat
n04482393 tricycle, trike, velocipede
n04483307 trimaran
n04485082 tripod
n04486054 triumphal arch
n04487081 trolleybus, trolley coach, trackless trolley
n04487394 trombone
n04493381 tub, vat
n04501370 turnstile
n04505470 typewriter keyboard
n04507155 umbrella
n04509417 unicycle, monocycle
n04515003 upright, upright piano
n04517823 vacuum, vacuum cleaner
n04522168 vase
n04523525 vault
n04525038 velvet
n04525305 vending machine
n04532106 vestment
n04532670 viaduct
n04536866 violin, fiddle
n04540053 volleyball
n04542943 waffle iron
n04548280 wall clock
n04548362 wallet, billfold, notecase, pocketbook
n04550184 wardrobe, closet, press
n04552348 warplane, military plane
n04553703 washbasin, handbasin, washbowl, lavabo, wash-hand basin
n04554684 washer, automatic washer, washing machine
n04557648 water bottle
n04560804 water jug
n04562935 water tower
n04579145 whiskey jug
n04579432 whistle
n04584207 wig
n04589890 window screen
n04590129 window shade
n04591157 Windsor tie
n04591713 wine bottle
n04592741 wing
n04596742 wok
n04597913 wooden spoon
n04599235 wool, woolen, woollen
n04604644 worm fence, snake fence, snake-rail fence, Virginia fence
n04606251 wreck
n04612504 yawl
n04613696 yurt
n06359193 web site, website, internet site, site
n06596364 comic book
n06785654 crossword puzzle, crossword
n06794110 street sign
n06874185 traffic light, traffic signal, stoplight
n07248320 book jacket, dust cover, dust jacket, dust wrapper
n07565083 menu
n07579787 plate
n07583066 guacamole
n07584110 consomme
n07590611 hot pot, hotpot
n07613480 trifle
n07614500 ice cream, icecream
n07615774 ice lolly, lolly, lollipop, popsicle
n07684084 French loaf
n07693725 bagel, beigel
n07695742 pretzel
n07697313 cheeseburger
n07697537 hotdog, hot dog, red hot
n07711569 mashed potato
n07714571 head cabbage
n07714990 broccoli
n07715103 cauliflower
n07716358 zucchini, courgette
n07716906 spaghetti squash
n07717410 acorn squash
n07717556 butternut squash
n07718472 cucumber, cuke
n07718747 artichoke, globe artichoke
n07720875 bell pepper
n07730033 cardoon
n07734744 mushroom
n07742313 Granny Smith
n07745940 strawberry
n07747607 orange
n07749582 lemon
n07753113 fig
n07753275 pineapple, ananas
n07753592 banana
n07754684 jackfruit, jak, jack
n07760859 custard apple
n07768694 pomegranate
n07802026 hay
n07831146 carbonara
n07836838 chocolate sauce, chocolate syrup
n07860988 dough
n07871810 meat loaf, meatloaf
n07873807 pizza, pizza pie
n07875152 potpie
n07880968 burrito
n07892512 red wine
n07920052 espresso
n07930864 cup
n07932039 eggnog
n09193705 alp
n09229709 bubble
n09246464 cliff, drop, drop-off
n09256479 coral reef
n09288635 geyser
n09332890 lakeside, lakeshore
n09399592 promontory, headland, head, foreland
n09421951 sandbar, sand bar
n09428293 seashore, coast, seacoast, sea-coast
n09468604 valley, vale
n09472597 volcano
n09835506 ballplayer, baseball player
n10148035 groom, bridegroom
n10565667 scuba diver
n11879895 rapeseed
n11939491 daisy
n12057211 yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum
n12144580 corn
n12267677 acorn
n12620546 hip, rose hip, rosehip
n12768682 buckeye, horse chestnut, conker
n12985857 coral fungus
n12998815 agaric
n13037406 gyromitra
n13040303 stinkhorn, carrion fungus
n13044778 earthstar
n13052670 hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa
n13054560 bolete
n13133613 ear, spike, capitulum
n15075141 toilet tissue, toilet paper, bathroom tissue
``` | [
-1.0844253301620483,
-0.2456229031085968,
0.3232777416706085,
0.41045206785202026,
-0.14507359266281128,
0.3055281341075897,
0.16771180927753448,
-0.4816921055316925,
0.8262014985084534,
-0.2975088357925415,
-0.24258370697498322,
-0.4879564940929413,
-0.952782154083252,
0.585330605506897,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qg_dequad | lmqg | 2022-12-02T18:53:57Z | 91 | 1 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:deepset/germanquad",
"language:de",
"license:cc-by-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-02T18:53:57Z | 2022-06-02T23:45:30.000Z | 2022-06-02T23:45:30 | ---
license: cc-by-4.0
pretty_name: GermanQuAD for question generation
language: de
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: deepset/germanquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_dequad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [GermanQuAD](https://huggingface.co/datasets/deepset/germanquad) for the question generation (QG) task.
Since the original dataset contains only training and validation sets, we manually sample a test set from the training set;
the sampled test set shares no paragraphs with the remaining training set.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
German (de)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'elektromagnetischer Linearführungen',
'question': 'Was kann den Verschleiß des seillosen Aufzuges minimieren?',
'sentence': 'Im Rahmen der Forschungen an dem seillosen Aufzug wird ebenfalls an der Entwicklung elektromagnetischer Linearführungen gearbeitet, um den Verschleiß der seillosen Aufzugsanlage bei hohem Fahrkomfort zu minimieren.',
'paragraph': "Aufzugsanlage\n\n=== Seilloser Aufzug ===\nAn der RWTH Aachen im Institut für Elektrische Maschinen wurde ein seilloser Aufzug entwickelt und ein Prototyp aufgebaut. Die Kabine wird hierbei durch z..."
'sentence_answer': "Im Rahmen der Forschungen an dem seillosen Aufzug wird ebenfalls an der Entwicklung <hl> elektromagnetischer Linearführungen <hl> gearbeitet, um den Verschleiß der seillosen Aufzugsanlage bei...",
'paragraph_answer': "Aufzugsanlage === Seilloser Aufzug === An der RWTH Aachen im Institut für Elektrische Maschinen wurde ein seilloser Aufzug entwickelt und ein Prototyp aufgebaut. Die Kabine wird hierbei durc...",
'paragraph_sentence': "Aufzugsanlage === Seilloser Aufzug === An der RWTH Aachen im Institut für Elektrische Maschinen wurde ein seilloser Aufzug entwickelt und ein Prototyp aufgebaut. Die Kabine wird hierbei du..."
}
```
## Data Fields
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is the same as the paragraph but with the answer highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is the same as the paragraph but with the sentence containing the answer highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is the same as the sentence but with the answer highlighted by a special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is intended to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, and the
`paragraph_sentence` feature is for sentence-aware question generation (see the sketch below).
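As an illustration, here is a minimal sketch of how such a highlighted field can be derived, assuming the answer occurs verbatim in the text (the dataset ships these fields precomputed, so this is only for intuition):

```python
def highlight(text: str, answer: str, hl: str = "<hl>") -> str:
    """Wrap the first verbatim occurrence of `answer` in highlight tokens."""
    start = text.find(answer)
    if start == -1:
        raise ValueError("answer not found in text")
    end = start + len(answer)
    return f"{text[:start]}{hl} {answer} {hl}{text[end:]}"

# e.g., sentence_answer corresponds to highlight(example["sentence"], example["answer"])
```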
### Data Splits
|train|validation|test |
|----:|---------:|----:|
|9314 | 2204 | 2204|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.6907064914703369,
-1.2082648277282715,
0.5023570656776428,
0.10188350826501846,
-0.18780483305454254,
-0.21335285902023315,
-0.13505619764328003,
0.06752564013004303,
0.02768789976835251,
0.3304932415485382,
-0.7931577563285828,
-0.6685301065444946,
-0.16707582771778107,
0.3044598698616... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nbardy/Fractal-photos | Nbardy | 2022-09-07T07:56:15Z | 91 | 2 | null | [
"region:us"
] | 2022-09-07T07:56:15Z | 2022-09-07T07:40:44.000Z | 2022-09-07T07:40:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Censius-AI/ECommerce-Women-Clothing-Reviews | Censius-AI | 2023-04-03T12:09:24Z | 91 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-04-03T12:09:24Z | 2023-04-03T12:04:42.000Z | 2023-04-03T12:04:42 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
benlipkin/folio | benlipkin | 2023-05-02T16:44:40Z | 91 | 0 | null | [
"task_categories:text-classification",
"language:en",
"license:cc",
"arxiv:2209.00840",
"region:us"
] | 2023-05-02T16:44:40Z | 2023-05-02T16:37:18.000Z | 2023-05-02T16:37:18 | ---
license: cc
task_categories:
- text-classification
language:
- en
---
```
@article{han2022folio,
title={FOLIO: Natural Language Reasoning with First-Order Logic},
author = {Han, Simeng and Schoelkopf, Hailey and Zhao, Yilun and Qi, Zhenting and Riddell, Martin and Benson, Luke and Sun, Lucy and Zubova, Ekaterina and Qiao, Yujie and Burtell, Matthew and Peng, David and Fan, Jonathan and Liu, Yixin and Wong, Brian and Sailor, Malcolm and Ni, Ansong and Nan, Linyong and Kasai, Jungo and Yu, Tao and Zhang, Rui and Joty, Shafiq and Fabbri, Alexander R. and Kryscinski, Wojciech and Lin, Xi Victoria and Xiong, Caiming and Radev, Dragomir},
journal={arXiv preprint arXiv:2209.00840},
url = {https://arxiv.org/abs/2209.00840},
year={2022}
}
``` | [
-0.3575775921344757,
-0.5874157547950745,
0.6129775047302246,
0.23736032843589783,
-0.244614377617836,
-0.3011130690574646,
-0.008038729429244995,
-0.4843578040599823,
0.0743543729186058,
0.601092517375946,
-0.6521694660186768,
-0.611065685749054,
-0.6592299342155457,
0.2978588938713074,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pankajmathur/orca_mini_v1_dataset | pankajmathur | 2023-08-15T20:26:46Z | 91 | 8 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-15T20:26:46Z | 2023-07-30T22:15:20.000Z | 2023-07-30T22:15:20 | ---
license: apache-2.0
---
An Orca-style dataset that can be used to fine-tune base models with the following prompt format.
```
### System:
<system>
### User:
<instruction>
### Assistant:
<output>
```
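As a rough sketch, a hypothetical helper (assuming single-turn usage) can assemble this prompt; leave `output` empty at inference time so the model completes it:

```python
def build_prompt(system: str, instruction: str, output: str = "") -> str:
    # Assemble the Orca-style prompt from its three sections.
    return (
        f"### System:\n{system}\n"
        f"### User:\n{instruction}\n"
        f"### Assistant:\n{output}"
    )
```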
More details coming soon. | [
-0.3820759356021881,
-0.6980570554733276,
0.12663228809833527,
-0.05778760835528374,
-0.5174102783203125,
-0.20035766065120697,
0.17145806550979614,
0.04143187031149864,
0.38238781690597534,
0.8976393938064575,
-1.1077059507369995,
-0.7861436009407043,
-0.26337793469429016,
0.0363743640482... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
manu/project_gutenberg | manu | 2023-09-07T15:33:32Z | 91 | 2 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:fr",
"language:en",
"language:zh",
"language:pt",
"language:pl",
"language:nl",
"language:ru",
"language:sv",
"language:it",
"language:de",
"language:es",
"region:us"
] | 2023-09-07T15:33:32Z | 2023-09-07T14:14:10.000Z | 2023-09-07T14:14:10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: de
num_bytes: 1070196924
num_examples: 3131
- name: en
num_bytes: 25616345280
num_examples: 61340
- name: es
num_bytes: 496728508
num_examples: 1202
- name: fr
num_bytes: 2338871137
num_examples: 5493
- name: it
num_bytes: 383733486
num_examples: 1008
- name: nl
num_bytes: 504939551
num_examples: 1420
- name: pl
num_bytes: 4864460
num_examples: 34
- name: pt
num_bytes: 204058452
num_examples: 1111
- name: ru
num_bytes: 943593
num_examples: 6
- name: sv
num_bytes: 116664385
num_examples: 388
- name: zh
num_bytes: 174238359
num_examples: 437
download_size: 14399256761
dataset_size: 30911584135
task_categories:
- text-generation
language:
- fr
- en
- zh
- pt
- pl
- nl
- ru
- sv
- it
- de
- es
pretty_name: Project Gutenberg
size_categories:
- 10K<n<100K
---
# Dataset Card for "Project Gutenberg"
Project Gutenberg is a library of over 70,000 free eBooks, hosted at https://www.gutenberg.org/.
Each example corresponds to a single book and contains a header and a footer of a few lines (delimited by *** Start of *** and *** End of *** tags); a stripping sketch follows the usage example below.
### Usage
```python
from datasets import load_dataset
ds = load_dataset("manu/project_gutenberg", split="fr", streaming=True)
print(next(iter(ds)))
```
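Since each example carries the header and footer mentioned above, here is a rough stripping sketch; the exact marker wording varies from book to book, so treat this as an approximation:

```python
import re

def strip_gutenberg_markers(text: str) -> str:
    # Keep only the body between the '*** START OF ... ***' and '*** END OF ... ***' lines.
    start = re.search(r"\*\*\*\s*START OF .*?\*\*\*", text, re.IGNORECASE)
    end = re.search(r"\*\*\*\s*END OF .*?\*\*\*", text, re.IGNORECASE)
    begin = start.end() if start else 0
    finish = end.start() if end else len(text)
    return text[begin:finish].strip()
```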
### License
Full license is available here:
https://www.gutenberg.org/policy/license.html
#### Summary
For nearly all uses, in nearly all parts of the world, the opening words of all of our eBooks apply: “This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at [www.gutenberg.org]. If you are not located in the United States, you’ll have to check the laws of the country where you are located before using this ebook.”
##### Using the Project Gutenberg Trademark
If you want to use the name Project Gutenberg anywhere in the ebooks you distribute or on the distribution medium or in advertising you have to obey these rules:
- you may only distribute verbatim copies of the ebooks. No changes are allowed to the ebook contents. (Though reformatting the ebook to a different file format is considered okay).
- If you charge money for the copies you distribute, you have to pay royalties to Project Gutenberg.
- You must refund your clients for defective copies or if they don’t agree with the Project Gutenberg license.
If you don’t agree with any of the above-mentioned restrictions, you may not use the Project Gutenberg trademark. You may still distribute the ebooks if you strip the Project Gutenberg license and all references to Project Gutenberg. | [
-0.2946113049983978,
-0.02072177641093731,
-0.0493883341550827,
0.1732170432806015,
-0.5243780612945557,
-0.1264386773109436,
0.11769505590200424,
-0.2932426333427429,
0.008512555621564388,
0.9622313380241394,
-0.32964828610420227,
-0.778357207775116,
-0.4340980052947998,
0.179092600941658... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zelalt/content-papers-withprompt | zelalt | 2023-10-27T00:27:54Z | 91 | 0 | null | [
"region:us"
] | 2023-10-27T00:27:54Z | 2023-10-27T00:27:53.000Z | 2023-10-27T00:27:53 | ---
dataset_info:
features:
- name: id
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1283997
num_examples: 992
download_size: 797519
dataset_size: 1283997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "content-papers-withprompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5591332912445068,
-0.1824425756931305,
0.33674052357673645,
0.2984931170940399,
-0.39420825242996216,
-0.01674121804535389,
0.1226874440908432,
-0.04005368426442146,
1.0519465208053589,
0.47375696897506714,
-0.7940784692764282,
-0.8995761871337891,
-0.9179630279541016,
-0.34265190362930... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cideon00/villm | cideon00 | 2023-10-29T12:35:53Z | 91 | 0 | null | [
"region:us"
] | 2023-10-29T12:35:53Z | 2023-10-29T12:35:29.000Z | 2023-10-29T12:35:29 | ---
dataset_info:
features:
- name: text
dtype: string
- name: tok_len
dtype: int64
splits:
- name: train
num_bytes: 1411182336.1899912
num_examples: 512774
download_size: 328694427
dataset_size: 1411182336.1899912
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "villm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5227006077766418,
-0.3992828130722046,
0.2753736078739166,
0.1689254492521286,
-0.12676426768302917,
0.10427006334066391,
0.10044264793395996,
-0.12166883796453476,
0.7546578645706177,
0.6368127465248108,
-0.8370891809463501,
-0.9337100982666016,
-0.5396032929420471,
-0.3284442126750946... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maywell/ko_wikidata_QA | maywell | 2023-11-25T00:28:52Z | 91 | 10 | null | [
"region:us"
] | 2023-11-25T00:28:52Z | 2023-10-31T02:09:29.000Z | 2023-10-31T02:09:29 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 144606911
num_examples: 137505
configs:
- config_name: default
data_files:
- split: train
path: data/train.csv
---
## Update log
- 2023-11-03: Applied MarkrAI's dedup.
# Korean Wikipedia QA dataset
This data is a QA set created using the Synatra-7B-Instruct model and ChatGPT.
Direct commercial use of the data itself is not permitted, but commercial use of models trained on this data is permitted.
The data has not been fully cleaned yet; please open a PR for any errors or corrections.
| [
-0.7763694524765015,
-0.6654467582702637,
0.3832390606403351,
0.25623610615730286,
-0.8260776400566101,
0.03274489566683769,
0.19746585190296173,
-0.2682468593120575,
0.500443696975708,
0.466574490070343,
-0.6215474009513855,
-0.4933170974254608,
-0.7990584969520569,
0.06891611963510513,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kyujinpy/KOR-OpenOrca-Platypus-v3 | kyujinpy | 2023-11-18T20:22:23Z | 91 | 0 | null | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | 2023-11-18T20:22:23Z | 2023-11-08T18:56:08.000Z | 2023-11-08T18:56:08 | ---
language:
- ko
license: cc-by-nc-4.0
size_categories:
- 10K<n<50K
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_examples: 34214
---
# KOR-OpenOrca-Platypus-v3
- A dataset in which more than 200 translation errors from the KOR-OpenOrca-Platypus dataset were fixed by hand.
- If you build a model or dataset using this dataset, a brief attribution would greatly help our research 😭😭
## KOpen-platypus
Repo: [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
- A high-quality Korean dataset
1. Code and comments are kept as-is; only the explanatory text is translated into Korean
2. In addition to 1, outputs such as Python, Java, Cpp, and xml are all preserved in their original data form as much as possible
3. Single numbers and English terms are taken over unchanged from the original output
4. Incomplete DeepL Pro translations were fixed by hand (for example, outputs containing '[...]')
5. If a DeepL Pro translation was less than 50% of the original text's length, the translation was corrected
6. Texts longer than 1,500 characters were translated via the API instead
7. Proper nouns are preserved as much as possible
> Post-processing notes
- Added post-processing (v2)
+) Removed short-answer tasks.
## OpenOrca-Ko-v2
1. NIV // about 1,500 examples
2. FLAN // about 9,000 examples
3. T0 // about 6,000 examples
4. CoT // about 2,000 examples
> Dataset composition
- Fixed by hand (v2)
1. Fixed answers left in English (e.g., Nick -> 닉, Lucky -> 운이 좋음, ...)
2. Removed the KoCoT dataset.
3. Fixed some answers such as Yes, True, and False
> Post-processing notes
## Translation
Using DeepL Pro API. Thanks.
---
>Below is original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
<p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p>

<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data and contain fewer than the required entries for the flan2021 and t0 submixes, by ~1.25M and ~200k respectively.
Combined, this gave us ~1.5M fewer datapoints than the original Orca paper used. Completing the set is ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be loaded directly via the Hugging Face `datasets` library.
We recommend using streaming due to the large size of the files; a minimal loading sketch follows.
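A minimal loading sketch, assuming the standard `datasets` streaming API and a single `train` split (the repository id matches the OpenOrca links above):
```python
from datasets import load_dataset

# Stream records instead of downloading the full files up front.
ds = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)

for record in ds.take(3):  # peek at a few datapoints
    print(record["id"], record["question"][:80])
```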
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
  eprint={2307.09288},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` | [
-0.5971404314041138,
-0.6534334421157837,
0.21117918193340302,
0.09118090569972992,
-0.1874001920223236,
-0.17410899698734283,
-0.2356676608324051,
-0.7159631252288818,
0.397068589925766,
0.5256991386413574,
-0.36181315779685974,
-0.7705175280570984,
-0.4267400801181793,
0.1694150269031524... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/small_addition_whole | jlbaker361 | 2023-11-17T05:53:45Z | 91 | 0 | null | [
"region:us"
] | 2023-11-17T05:53:45Z | 2023-11-17T04:47:37.000Z | 2023-11-17T04:47:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1337.7777777777778
num_examples: 40
- name: test
num_bytes: 167.22222222222223
num_examples: 5
download_size: 4158
dataset_size: 1505.0
---
# Dataset Card for "small_addition_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6454857587814331,
-0.35208526253700256,
0.30925020575523376,
0.2744390070438385,
-0.38430410623550415,
-0.3768545687198639,
0.10002651810646057,
-0.1776512861251831,
1.121294379234314,
0.5477714538574219,
-0.749376118183136,
-0.5624014139175415,
-0.5673038363456726,
-0.33147671818733215... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/small_addition_decimal | jlbaker361 | 2023-11-17T05:54:00Z | 91 | 0 | null | [
"region:us"
] | 2023-11-17T05:54:00Z | 2023-11-17T04:47:46.000Z | 2023-11-17T04:47:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1827.5555555555557
num_examples: 40
- name: test
num_bytes: 228.44444444444446
num_examples: 5
download_size: 4479
dataset_size: 2056.0
---
# Dataset Card for "small_addition_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6757172346115112,
-0.30854618549346924,
0.16778725385665894,
0.2791062295436859,
-0.23579789698123932,
-0.41373351216316223,
-0.05570192635059357,
-0.1583985835313797,
0.8703786730766296,
0.3532482981681824,
-0.6262192130088806,
-0.5910395383834839,
-0.5479209423065186,
-0.2490555793046... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
indonesian-nlp/mc4-id | indonesian-nlp | 2022-10-25T11:52:34Z | 90 | 3 | mc4 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended",
"language:id",
"license:odc-by",
"arxiv:1910.10683",
"region:us"
] | 2022-10-25T11:52:34Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
license:
- odc-by
multilinguality:
- monolingual
size_categories:
tiny:
- 1M<n<10M
small:
- 10M<n<100M
medium:
- 10M<n<100M
large:
- 10M<n<100M
full:
- 100M<n<1B
source_datasets:
- extended
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4-id
---
# Dataset Card for Clean(maybe) Indonesia mC4
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A thoroughly cleaned version of the Indonesian split of the multilingual colossal, cleaned version of Common Crawl's web crawl corpus (mC4), based on the [Common Crawl dataset](https://commoncrawl.org). The original version was prepared by [AllenAI](https://allenai.org/) and is hosted at [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
### Data Splits
You can load any subset like this:
```python
from datasets import load_dataset
mc4_id_tiny = load_dataset("munggok/mc4-id", "tiny")
```
Since splits are quite large, you may want to traverse them using the streaming mode available starting from 🤗 Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_id_full_stream = load_dataset("munggok/mc4-id", "full", split='train', streaming=True)
print(next(iter(mc4_id_full_stream)))  # Prints the first example
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.
## Additional Information
### Dataset Curators
Authors at AllenAI are the original curators for the `mc4` corpus.
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
If you use this dataset in your work, please cite us and the original mC4 authors as:
```
@inproceedings{xue-etal-2021-mt5,
title = "m{T}5: A Massively Multilingual Pre-trained Text-to-Text Transformer",
author = "Xue, Linting and
Constant, Noah and
Roberts, Adam and
Kale, Mihir and
Al-Rfou, Rami and
Siddhant, Aditya and
Barua, Aditya and
Raffel, Colin",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.41",
doi = "10.18653/v1/2021.naacl-main.41",
pages = "483--498",
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| [
-0.5756919980049133,
-0.5345596671104431,
0.3211793303489685,
0.13990823924541473,
-0.28962644934654236,
0.07566644996404648,
-0.2823386788368225,
-0.5087075233459473,
0.5321300029754639,
0.5055062174797058,
-0.6177189350128174,
-0.5861415266990662,
-0.4367516338825226,
0.6659846901893616,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mariosasko/test_imagefolder_with_metadata | mariosasko | 2022-06-28T12:59:23Z | 90 | 0 | null | [
"region:us"
] | 2022-06-28T12:59:23Z | 2022-06-28T12:53:50.000Z | 2022-06-28T12:53:50 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/scitail | bigbio | 2023-03-31T02:11:26Z | 90 | 1 | scitail | [
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-03-31T02:11:26Z | 2022-07-02T20:53:40.000Z | 2022-07-02T20:53:40 | ---
language:
- en
bigbio_language:
- English
license: apache-2.0
bigbio_license_shortname: APACHE_2p0
multilinguality: monolingual
pretty_name: SciTail
homepage: https://allenai.org/data/scitail
bigbio_pubmed: false
bigbio_public: true
bigbio_tasks:
- TEXTUAL_ENTAILMENT
paperswithcode_id: scitail
---
# Dataset Card for SciTail
## Dataset Description
- **Homepage:** https://allenai.org/data/scitail
- **Pubmed:** False
- **Public:** True
- **Tasks:** TE
The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question and the correct answer choice are converted into an assertive statement to form the hypothesis. We use information retrieval to obtain relevant text from a large corpus of web sentences and use these sentences as a premise P. We crowdsource the annotation of each premise-hypothesis pair as supporting (entails) or not (neutral) to create the SciTail dataset. The dataset contains 27,026 examples: 10,101 with the entails label and 16,925 with the neutral label.
## Citation Information
```
@inproceedings{scitail,
author = {Tushar Khot and Ashish Sabharwal and Peter Clark},
  booktitle = {AAAI},
  title = {SciTail: A Textual Entailment Dataset from Science Question Answering},
  year = {2018}
}
```
| [
0.025907179340720177,
-0.5129493474960327,
0.2001420557498932,
0.2824547588825226,
-0.15952938795089722,
-0.3156863749027252,
0.1220160499215126,
-0.10677844285964966,
0.4709490239620209,
0.5174282193183899,
-0.4678906500339508,
-0.5779553055763245,
-0.38418981432914734,
0.4714732766151428... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FIdo-AI/ua-news | FIdo-AI | 2022-07-05T18:32:36Z | 90 | 0 | null | [
"region:us"
] | 2022-07-05T18:32:36Z | 2022-07-03T18:53:04.000Z | 2022-07-03T18:53:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
acul3/KoPI-NLLB | acul3 | 2022-09-06T05:49:03Z | 90 | 1 | null | [
"region:us"
] | 2022-09-06T05:49:03Z | 2022-09-04T16:52:01.000Z | 2022-09-04T16:52:01 | KopI(Korpus Perayapan Indonesia)-NLLB, is Indonesian family language(aceh,bali,banjar,indonesia,jawa,minang,sunda) only extracted from NLLB Dataset, [allenai/nllb](https://huggingface.co/datasets/allenai/nllb)
each language set also filtered using some some deduplicate technique such as exact hash(md5) dedup technique and minhash LSH neardup
detail soon | [
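A minimal sketch of such a deduplication pipeline, assuming the `datasketch` library for MinHash LSH; the threshold, tokenization, and example strings are illustrative, not the exact settings used for this corpus:
```python
import hashlib

from datasketch import MinHash, MinHashLSH

def exact_key(text: str) -> str:
    # Exact-hash (MD5) deduplication key.
    return hashlib.md5(text.encode("utf-8")).hexdigest()

def signature(text: str, num_perm: int = 128) -> MinHash:
    # Word-level MinHash signature for near-duplicate detection.
    m = MinHash(num_perm=num_perm)
    for token in text.split():
        m.update(token.encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)
seen, kept = set(), []

for i, doc in enumerate(["contoh teks satu", "contoh teks satu", "teks yang lain"]):
    key = exact_key(doc)
    if key in seen:
        continue          # drop exact duplicates
    sig = signature(doc)
    if lsh.query(sig):
        continue          # drop near-duplicates
    seen.add(key)
    lsh.insert(f"doc-{i}", sig)
    kept.append(doc)

print(kept)  # -> ['contoh teks satu', 'teks yang lain']
``` | [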
-0.4576664865016937,
-0.7540121078491211,
0.04169074818491936,
0.37694528698921204,
-0.3882431983947754,
0.15126630663871765,
-0.18839335441589355,
-0.5124290585517883,
0.2369799166917801,
0.9863201975822449,
-0.3775465190410614,
-0.22654637694358826,
-0.4380755126476288,
0.245467215776443... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bdotloh/empathetic-dialogues-contexts | bdotloh | 2022-09-21T06:12:44Z | 90 | 6 | null | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"language:en",
"region:us"
] | 2022-09-21T06:12:44Z | 2022-09-19T05:58:21.000Z | 2022-09-19T05:58:21 | ---
annotations_creators:
- crowdsourced
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
---
# Dataset Description
This is a dataset of emotional contexts retrieved from the original EmpatheticDialogues (ED) dataset. Respondents were asked to describe an event associated with a particular emotion label (i.e., p(event | emotion)).
There are 32 emotion labels in total.
There are 19,209, 2,756, and 2,542 instances of emotional descriptions in the train, valid, and test sets, respectively. A minimal loading sketch follows.
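The repository id below comes from this card, while the split names and field layout should be checked against the dataset files:
```python
from datasets import load_dataset

ds = load_dataset("bdotloh/empathetic-dialogues-contexts")

# Report the number of rows per split, then peek at one record.
print({name: split.num_rows for name, split in ds.items()})
print(ds["train"][0])
``` | [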
-0.7645012140274048,
-0.6209949851036072,
0.2029954046010971,
0.3272356688976288,
-0.023981668055057526,
-0.4579980671405792,
-0.1756839156150818,
-0.19443652033805847,
0.43200448155403137,
0.20017533004283905,
-1.0017472505569458,
-0.33881333470344543,
-0.3921999931335449,
0.3919826149940... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
valhalla/emoji-dataset | valhalla | 2022-10-05T11:39:52Z | 90 | 3 | null | [
"region:us"
] | 2022-10-05T11:39:52Z | 2022-10-05T08:39:37.000Z | 2022-10-05T08:39:37 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ppietro/catrinas | ppietro | 2022-11-14T17:18:37Z | 90 | 0 | null | [
"license:afl-3.0",
"region:us"
] | 2022-11-14T17:18:37Z | 2022-11-14T16:37:20.000Z | 2022-11-14T16:37:20 | ---
license: afl-3.0
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wwydmanski/blog-feedback | wwydmanski | 2023-02-25T16:03:19Z | 90 | 0 | null | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"tabular",
"region:us"
] | 2023-02-25T16:03:19Z | 2023-02-25T15:57:14.000Z | 2023-02-25T15:57:14 | ---
task_categories:
- tabular-regression
- tabular-classification
tags:
- tabular
size_categories:
- 10K<n<100K
---
## Source
Source: [UCI](https://archive.ics.uci.edu/ml/datasets/BlogFeedback)
## Data Set Information:
This data originates from blog posts; the raw HTML documents of the posts were crawled and processed.
The prediction task associated with the data is to predict the number of comments a post will receive in the upcoming 24 hours. To simulate this situation, we choose a basetime (in the past) and select the blog posts published at most 72 hours before the selected base date/time. We then calculate all features of the selected blog posts from the information available at the basetime, so each instance corresponds to one blog post. The target is the number of comments that the blog post received in the 24 hours following the basetime.
In the train data, the basetimes fall in the years 2010 and 2011; in the test data, they fall in February and March 2012. This simulates the real-world situation in which training data from the past is available to predict events in the future.
The train data was generated from different basetimes that may temporally overlap. As a result, if you simply split the train data into disjoint partitions, the underlying time intervals may still overlap. You should therefore use the provided, temporally disjoint train and test splits to ensure a fair evaluation.
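As a sketch of respecting those splits, assuming the UCI file layout of one train CSV plus many per-basetime test CSVs, each with 280 feature columns followed by the target (the file names here are assumptions):
```python
import glob

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Assumed file names following the UCI BlogFeedback distribution.
train = pd.read_csv("blogData_train.csv", header=None)
test = pd.concat(pd.read_csv(f, header=None) for f in glob.glob("blogData_test*.csv"))

X_train, y_train = train.iloc[:, :-1], train.iloc[:, -1]
X_test, y_test = test.iloc[:, :-1], test.iloc[:, -1]

model = GradientBoostingRegressor().fit(X_train, y_train)
print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```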
## Attribute Information:
- 1...50: Average, standard deviation, min, max, and median of the attributes 51...60 for the source of the current blog post. By "source" we mean the blog on which the post appeared; for example, myblog.blog.org would be the source of the post myblog.blog.org/post_2010_09_10.
- 51: Total number of comments before basetime.
- 52: Number of comments in the last 24 hours before the basetime.
- 53: Let T1 denote the datetime 48 hours before basetime and T2 the datetime 24 hours before basetime; this attribute is the number of comments in the time period between T1 and T2.
- 54: Number of comments in the first 24 hours after the publication of the blog post, but before basetime.
- 55: The difference between attribute 52 and attribute 53.
- 56...60: The same features as attributes 51...55, but referring to the number of links (trackbacks) instead of the number of comments.
- 61: The length of time between the publication of the blog post and basetime.
- 62: The length of the blog post.
- 63...262: The 200 bag-of-words features for 200 frequent words of the text of the blog post.
- 263...269: Binary indicator features (0 or 1) for the weekday (Monday...Sunday) of the basetime.
- 270...276: Binary indicator features (0 or 1) for the weekday (Monday...Sunday) of the date of publication of the blog post.
- 277: Number of parent pages: we consider a blog post P a parent of blog post B if B is a reply (trackback) to blog post P.
- 278...280: Minimum, maximum, and average number of comments that the parents received.
- 281: The target: the number of comments in the next 24 hours (relative to basetime).
| [
-0.46962475776672363,
-0.46083465218544006,
0.431786447763443,
0.8329043984413147,
-0.32420527935028076,
0.0524410642683506,
-0.18480993807315826,
-0.3653257191181183,
0.46014925837516785,
0.16376252472400665,
-0.8646909594535828,
-0.44420325756073,
-0.5522470474243164,
0.13503870368003845... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pankajmathur/WizardLM_Orca | pankajmathur | 2023-06-26T14:39:38Z | 90 | 64 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-06-26T14:39:38Z | 2023-06-24T18:34:28.000Z | 2023-06-24T18:34:28 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
An explain-tuned WizardLM dataset (~55K examples) created using approaches from the Orca research paper.
We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps student models like orca_mini_13b learn the thought process of the teacher model, ChatGPT (gpt-3.5-turbo-0301 version).
Please see how the system prompt is added before each instruction; a minimal sketch follows.
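A minimal sketch of that layout, assuming orca_mini-style section markers; the exact template and field names are assumptions to be checked against the dataset files:
```python
def build_prompt(system: str, instruction: str) -> str:
    # The system prompt is prepended before each instruction, Orca-style.
    return (
        f"### System:\n{system}\n\n"
        f"### User:\n{instruction}\n\n"
        f"### Response:\n"
    )

print(build_prompt(
    "You are an AI assistant that explains its reasoning step by step.",
    "Why does ice float on water?",
))
``` | [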
-0.5708826780319214,
-0.7963640689849854,
-0.08305717259645462,
-0.3932651877403259,
-0.07534517347812653,
-0.12403445690870285,
0.20478108525276184,
-0.2594095468521118,
0.0022268069442361593,
0.7652339339256287,
-1.0116440057754517,
-0.18809625506401062,
0.11791079491376877,
-0.055426031... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zzliang/GRIT | zzliang | 2023-07-04T06:40:28Z | 90 | 72 | null | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:object-detection",
"task_categories:zero-shot-classification",
"task_ids:image-captioning",
"task_ids:visual-question-answering",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:COYO-700M"... | 2023-07-04T06:40:28Z | 2023-07-04T03:33:28.000Z | 2023-07-04T03:33:28 | ---
license: ms-pl
language:
- en
multilinguality:
- monolingual
pretty_name: GRIT
size_categories:
- 100M<n<1B
source_datasets:
- COYO-700M
tags:
- image-text-bounding-box pairs
- image-text pairs
task_categories:
- text-to-image
- image-to-text
- object-detection
- zero-shot-classification
task_ids:
- image-captioning
- visual-question-answering
---
# GRIT: Large-Scale Training Corpus of Grounded Image-Text Pairs
### Dataset Description
- **Repository:** [Microsoft unilm](https://github.com/microsoft/unilm/tree/master/kosmos-2)
- **Paper:** [Kosmos-2](https://arxiv.org/abs/2306.14824)
### Dataset Summary
We introduce GRIT, a large-scale dataset of Grounded Image-Text pairs, which is created based on image-text pairs from [COYO-700M](https://github.com/kakaobrain/coyo-dataset) and LAION-2B. We construct a pipeline to extract and link text spans (i.e., noun phrases, and referring expressions) in the caption to their corresponding image regions. More details can be found in the [paper](https://arxiv.org/abs/2306.14824).
### Supported Tasks
During construction, we excluded image-caption pairs for which no bounding boxes were retained. This procedure resulted in a high-quality image-caption subset of COYO-700M, which we will validate in the future.
Furthermore, this dataset contains text-span-bounding-box pairs. Thus, it can be used in many location-aware mono/multimodal tasks, such as phrase grounding, referring expression comprehension, referring expression generation, and open-world object detection.
### Data Instance
One instance is
```python
{
'key': '000373938',
'clip_similarity_vitb32': 0.353271484375,
'clip_similarity_vitl14': 0.2958984375,
'id': 1795296605919,
'url': "https://www.thestrapsaver.com/wp-content/uploads/customerservice-1.jpg",
'caption': 'a wire hanger with a paper cover that reads we heart our customers',
'width': 1024,
'height': 693,
'noun_chunks': [[19, 32, 0.019644069503434333, 0.31054004033406574, 0.9622142865754519, 0.9603442351023356, 0.79298526], [0, 13, 0.019422357885505368, 0.027634161214033764, 0.9593302408854166, 0.969467560450236, 0.67520964]],
'ref_exps': [[19, 66, 0.019644069503434333, 0.31054004033406574, 0.9622142865754519, 0.9603442351023356, 0.79298526], [0, 66, 0.019422357885505368, 0.027634161214033764, 0.9593302408854166, 0.969467560450236, 0.67520964]]
}
```
- `key`: The generated file name when using img2dataset to download COYO-700M (omit it).
- `clip_similarity_vitb32`: The cosine similarity between text and image(ViT-B/32) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP), provided by COYO-700M.
- `clip_similarity_vitl14`: The cosine similarity between text and image(ViT-L/14) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP), provided by COYO-700M.
- `id`: Unique 64-bit integer ID in COYO-700M.
- `url`: The image URL.
- `caption`: The corresponding caption.
- `width`: The width of the image.
- `height`: The height of the image.
- `noun_chunks`: The noun chunks (extracted by [spaCy](https://spacy.io/)) that have associated bounding boxes (predicted by [GLIP](https://github.com/microsoft/GLIP)). The items in the children list respectively represent 'Start of the noun chunk in caption', 'End of the noun chunk in caption', 'normalized x_min', 'normalized y_min', 'normalized x_max', 'normalized y_max', 'confidence score'.
- `ref_exps`: The corresponding referring expressions. If a noun chunk has no expansion, we just copy it. (A small decoding sketch follows.)
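For clarity, here is a small sketch decoding one `noun_chunks` entry using the layout documented above; the caption, image size, and span values are taken from the example instance:
```python
caption = "a wire hanger with a paper cover that reads we heart our customers"
width, height = 1024, 693

# [start, end, x_min, y_min, x_max, y_max, confidence]; coords are normalized.
start, end, x_min, y_min, x_max, y_max, score = [
    19, 32, 0.0196, 0.3105, 0.9622, 0.9603, 0.7930,
]

phrase = caption[int(start):int(end)]  # -> 'a paper cover'
box_px = (x_min * width, y_min * height, x_max * width, y_max * height)
print(phrase, box_px, score)
```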
### Download image
We recommend to use [img2dataset](https://github.com/rom1504/img2dataset) tool to download the images.
1. Download the metadata. You can download it by cloning the current repository:
```bash
git lfs install
git clone https://huggingface.co/datasets/zzliang/GRIT
```
2. Install [img2dataset](https://github.com/rom1504/img2dataset).
```bash
pip install img2dataset
```
3. Download images
You need to replace `/path/to/GRIT_dataset/grit-20m` with the local path to this repository.
```bash
img2dataset --url_list /path/to/GRIT_dataset/grit-20m --input_format "parquet" \
--url_col "url" --caption_col "caption" --output_format webdataset \
--output_folder /tmp/grit --processes_count 4 --thread_count 64 --image_size 256 \
--resize_only_if_bigger=True --resize_mode="keep_ratio" --skip_reencode=True \
--save_additional_columns '["id","noun_chunks","ref_exps","clip_similarity_vitb32","clip_similarity_vitl14"]' \
--enable_wandb False
```
You can adjust some parameters according to your actual needs (e.g., `processes_count`, `thread_count`, `image_size`, `save_additional_columns`).
More img2dataset hyper-parameters can be found in [here](https://github.com/rom1504/img2dataset#api).
### Citation Information
If you use this dataset in any project or research, please cite our paper and COYO-700M:
```
@article{Kosmos2,
title={Kosmos-2: Grounding Multimodal Large Language Models to the World},
author={Zhiliang Peng and Wenhui Wang and Li Dong and Yaru Hao and Shaohan Huang and Shuming Ma and Furu Wei},
journal={ArXiv},
year={2023},
volume={abs/2306.14824}
}
@misc{kakaobrain2022coyo-700m,
title = {COYO-700M: Image-Text Pair Dataset},
author = {Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim},
year = {2022},
howpublished = {\url{https://github.com/kakaobrain/coyo-dataset}},
}
``` | [
-0.5547434687614441,
-0.7480459213256836,
0.3589736819267273,
0.20712962746620178,
-0.45369336009025574,
-0.11587415635585785,
-0.3966076374053955,
-0.4715874195098877,
0.6304154992103577,
0.22045935690402985,
-0.49396854639053345,
-0.6660870313644409,
-0.6007928848266602,
0.06861119717359... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChrisHayduk/Llama-2-SQL-and-Code-Dataset | ChrisHayduk | 2023-09-29T04:18:17Z | 90 | 6 | null | [
"region:us"
] | 2023-09-29T04:18:17Z | 2023-07-18T18:28:31.000Z | 2023-07-18T18:28:31 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: table
dtype: string
splits:
- name: train
num_bytes: 46640417
num_examples: 128351
- name: eval
num_bytes: 1756894
num_examples: 1302
download_size: 18298063
dataset_size: 48397311
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
# Dataset Card for "Llama-2-SQL-and-Code-Dataset"
This dataset is intended to give LLaMA 2 improved coding and instruction-following capabilities, with a specific focus on SQL generation.
The dataset is in Alpaca Instruct format. Please be sure to provide both the instruction and the input in the prompt to the model, along with any prompt text you would like to place around those inputs.
In the train split, please ignore the `table` column. The eval split provides example tables so that actual executable-SQL performance can be compared on a number of SQL generation tasks.
To use the tables, load them as JSON objects and pass them to a SQL execution tool such as sqlglot; a minimal sketch follows.
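A minimal sketch of that flow using sqlglot's built-in executor; the record below is a hypothetical stand-in for an eval-split row, with the `table` column holding JSON-encoded tables:
```python
import json

from sqlglot.executor import execute

# Hypothetical eval record; field names follow the dataset card.
record = {
    "output": "SELECT name FROM employees WHERE salary > 50000",
    "table": json.dumps({
        "employees": [
            {"name": "Ada", "salary": 90000},
            {"name": "Bob", "salary": 40000},
        ]
    }),
}

tables = json.loads(record["table"])
result = execute(record["output"], tables=tables)
print(result.rows)  # -> [('Ada',)]
``` | [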
-0.21186409890651703,
-0.8416930437088013,
0.1486819088459015,
0.30409589409828186,
-0.8105436563491821,
0.241747185587883,
0.1926952600479126,
-0.15429404377937317,
0.3508251905441284,
0.8616494536399841,
-0.6885556578636169,
-0.5136253237724304,
-0.34789714217185974,
-0.0573810450732708,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
minoruskore/wlkjokj3454sd45sc45 | minoruskore | 2023-09-09T21:55:35Z | 90 | 0 | null | [
"license:other",
"region:us"
] | 2023-09-09T21:55:35Z | 2023-09-07T15:35:14.000Z | 2023-09-07T15:35:14 | ---
license: other
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: user_id
dtype: int64
- name: name
dtype: string
- name: anime_id
dtype: int64
- name: anime
dtype: string
- name: rating
dtype: int64
splits:
- name: train
num_bytes: 1386784355
num_examples: 19460153
- name: test
num_bytes: 354541207
num_examples: 4865038
- name: train100k
num_bytes: 5716739
num_examples: 80000
- name: test100k
num_bytes: 1453191
num_examples: 20000
- name: train500k
num_bytes: 28547903
num_examples: 400000
- name: test500k
num_bytes: 7235060
num_examples: 100000
- name: train1kk
num_bytes: 57023319
num_examples: 800000
- name: test1kk
num_bytes: 14562005
num_examples: 200000
download_size: 832651093
dataset_size: 1855863779
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: train100k
path: data/train100k-*
- split: test100k
path: data/test100k-*
- split: train500k
path: data/train500k-*
- split: test500k
path: data/test500k-*
- split: train1kk
path: data/train1kk-*
- split: test1kk
path: data/test1kk-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
longhoang06/text-recognition | longhoang06 | 2023-09-30T15:08:12Z | 90 | 0 | null | [
"region:us"
] | 2023-09-30T15:08:12Z | 2023-09-30T15:03:06.000Z | 2023-09-30T15:03:06 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6858787617.0
num_examples: 100000
download_size: 6858941356
dataset_size: 6858787617.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "text-recognition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5172435641288757,
-0.15849652886390686,
0.3522084653377533,
0.19176983833312988,
-0.1603350043296814,
0.01965624839067459,
0.05988472327589989,
-0.4822671711444855,
0.7618843913078308,
0.3920721411705017,
-0.6525157690048218,
-0.7011903524398804,
-0.7773025631904602,
-0.0023243487812578... | null | null | null | null | null | null | null | null | null | null | null | null | null |