id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
hbXNov/entigen | hbXNov | 2022-10-26T07:20:22Z | 22 | 1 | null | [
"region:us"
] | 2022-10-26T07:20:22Z | 2022-10-26T05:55:43.000Z | 2022-10-26T05:55:43 | Relevant paper and code - `https://github.com/Hritikbansal/entigen_emnlp`
Language of prompts - English | [
-0.16414692997932434,
-0.8367642164230347,
0.49860021471977234,
0.7190374732017517,
-0.2150687575340271,
-0.07943475246429443,
-0.2699269652366638,
-0.40869882702827454,
0.7201619148254395,
0.32839512825012207,
-0.8747737407684326,
-0.3971177339553833,
0.18100132048130035,
0.74917298555374... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tglcourse/5s_birdcall_samples_top20 | tglcourse | 2022-10-27T07:34:37Z | 22 | 1 | null | [
"license:unknown",
"region:us"
] | 2022-10-27T07:34:37Z | 2022-10-27T07:26:02.000Z | 2022-10-27T07:26:02 | ---
license:
- unknown
pretty_name: 5s Birdcall Samples
---
This dataset contains 5 second clips of birdcalls for audio generation tests.
There are 20 species represented, with ~500 recordings each. Recordings are from xeno-canto.
These clips were taken from longer samples by identifying calls within the recordings using the approach shown here: https://www.kaggle.com/code/johnowhitaker/peak-identification
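The peak-identification approach linked above can be sketched roughly as: compute a frame-wise amplitude envelope and keep frames that rise well above the background level. The snippet below is a hypothetical illustration of that idea on a synthetic signal — the function name, frame size, and threshold are our own choices, not the notebook's actual code.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_call_peaks(audio, frame_len=1024):
    """Return sample indices of loud regions (candidate bird calls)."""
    # Frame-wise RMS energy as a crude amplitude envelope.
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    envelope = np.sqrt((frames ** 2).mean(axis=1))
    # Keep peaks rising well above the median background level
    # (the 3x factor is an arbitrary illustrative threshold).
    peaks, _ = find_peaks(envelope, height=3 * np.median(envelope))
    return peaks * frame_len  # convert frame index back to samples

# Synthetic demo: 3 s of near-silence at 32 kHz with one loud 2 kHz burst at t = 1 s.
sr = 32_000
rng = np.random.default_rng(0)
audio = 0.001 * rng.standard_normal(3 * sr)
t = np.arange(int(0.2 * sr)) / sr
audio[sr : sr + len(t)] += 0.5 * np.sin(2 * np.pi * 2000 * t)
peaks = detect_call_peaks(audio)  # peaks fall inside the burst region
```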
The audio is sampled at 32 kHz (mono). | [
-0.7507405877113342,
-0.15143953263759613,
-0.10338965803384781,
0.46965301036834717,
-0.1565716713666916,
0.05510161817073822,
0.03681282326579094,
-0.7044119834899902,
0.2691578269004822,
0.37069955468177795,
-0.9186583757400513,
-0.22101062536239624,
-0.24487945437431335,
0.613539278507... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/EASC | arbml | 2022-11-02T15:18:15Z | 22 | 0 | null | [
"region:us"
] | 2022-11-02T15:18:15Z | 2022-11-02T15:17:47.000Z | 2022-11-02T15:17:47 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qa_harvesting_from_wikipedia_pseudo | lmqg | 2022-11-10T11:30:06Z | 22 | 0 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
] | 2022-11-10T11:30:06Z | 2022-11-09T19:05:38.000Z | 2022-11-09T19:05:38 | ---
license: cc-by-4.0
pretty_name: Synthetic QA dataset.
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_harvesting_from_wikipedia_pseudo"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_harvesting_from_wikipedia`](https://huggingface.co/datasets/lmqg/qa_harvesting_from_wikipedia), a collection of 1 million paragraph-answer pairs from [Du and Cardie, 2018](https://aclanthology.org/P18-1177/). It was created for question-answering-based evaluation (QAE) of question generation models, as proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The `train` split is the synthetic data and the `validation` split is the original validation set of [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), on which the model should be evaluated.
It contains synthetic QA datasets created with the following QG models:
- [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad)
- [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad)
- [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad)
- [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad)
- [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad)
See more details about QAE at [https://github.com/asahi417/lm-question-generation/tree/master/misc/emnlp_2022/qa_based_evaluation](https://github.com/asahi417/lm-question-generation/tree/master/misc/emnlp_2022/qa_based_evaluation).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
|train |validation|
|--------:|---------:|
|1,092,142| 10,570 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.5955312848091125,
-0.8909291625022888,
0.4858342111110687,
-0.031132109463214874,
-0.18911617994308472,
0.05789046362042427,
-0.0015054623363539577,
-0.37678369879722595,
0.2529889941215515,
0.3376148045063019,
-0.9845773577690125,
-0.6664422750473022,
-0.13486428558826447,
0.3612001538... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jellywibble/dalio-reward-model-hackathon-dataset | Jellywibble | 2022-11-13T17:25:41Z | 22 | 0 | null | [
"region:us"
] | 2022-11-13T17:25:41Z | 2022-11-12T04:06:26.000Z | 2022-11-12T04:06:26 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 8765
num_examples: 16
download_size: 6055
dataset_size: 8765
---
# Dataset Card for "dalio-reward-model-hackathon-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.22049282491207123,
-0.04044264927506447,
0.08533959835767746,
0.3029624819755554,
-0.08910177648067474,
0.03481025993824005,
0.2639559209346771,
-0.24366112053394318,
0.9199503064155579,
0.3157196640968323,
-0.9763142466545105,
-0.6199292540550232,
-0.47744762897491455,
-0.2459546476602... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bionlp_st_2011_rel | bigbio | 2022-12-22T15:43:54Z | 22 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:43:54Z | 2022-11-13T22:06:59.000Z | 2022-11-13T22:06:59 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2011 REL
homepage: https://github.com/openbiocorpora/bionlp-st-2011-rel
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2011 REL
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2011-rel
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE,COREF
The Entity Relations (REL) task is a supporting task of the BioNLP Shared Task 2011.
The task concerns the extraction of two types of part-of relations between a
gene/protein and an associated entity.
## Citation Information
```
@inproceedings{10.5555/2107691.2107703,
author = {Pyysalo, Sampo and Ohta, Tomoko and Tsujii, Jun'ichi},
title = {Overview of the Entity Relations (REL) Supporting Task of BioNLP Shared Task 2011},
year = {2011},
isbn = {9781937284091},
publisher = {Association for Computational Linguistics},
address = {USA},
abstract = {This paper presents the Entity Relations (REL) task,
a supporting task of the BioNLP Shared Task 2011. The task concerns
the extraction of two types of part-of relations between a gene/protein
and an associated entity. Four teams submitted final results for
the REL task, with the highest-performing system achieving 57.7%
F-score. While experiments suggest use of the data can help improve
event extraction performance, the task data has so far received only
limited use in support of event extraction. The REL task continues
as an open challenge, with all resources available from the shared
task website.},
booktitle = {Proceedings of the BioNLP Shared Task 2011 Workshop},
  pages = {83--88},
numpages = {6},
location = {Portland, Oregon},
series = {BioNLP Shared Task '11}
}
```
| [
-0.19065842032432556,
-0.4442925751209259,
0.24197345972061157,
0.07726554572582245,
-0.4553799033164978,
-0.14144034683704376,
-0.08659040182828903,
-1.0030193328857422,
0.5607950091362,
0.41793879866600037,
-0.6770203709602356,
-0.6144494414329529,
-0.35786256194114685,
0.478605449199676... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/iepa | bigbio | 2022-12-22T15:44:47Z | 22 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:44:47Z | 2022-11-13T22:09:00.000Z | 2022-11-13T22:09:00 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: IEPA
homepage: http://psb.stanford.edu/psb-online/proceedings/psb02/abstracts/p326.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
---
# Dataset Card for IEPA
## Dataset Description
- **Homepage:** http://psb.stanford.edu/psb-online/proceedings/psb02/abstracts/p326.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The IEPA benchmark PPI corpus is designed for relation extraction. It was created from 303 PubMed abstracts, each of which contains a specific pair of co-occurring chemicals.
## Citation Information
```
@ARTICLE{ding2001mining,
title = "Mining {MEDLINE}: abstracts, sentences, or phrases?",
author = "Ding, J and Berleant, D and Nettleton, D and Wurtele, E",
journal = "Pac Symp Biocomput",
pages = "326--337",
year = 2002,
address = "United States",
language = "en"
}
```
| [
-0.31238341331481934,
-0.016670960932970047,
0.48181357979774475,
0.3420182764530182,
0.031080545857548714,
-0.27724114060401917,
0.04164651781320572,
-0.5186735391616821,
0.24998082220554352,
0.1957881599664688,
-0.5238462090492249,
-0.4954274296760559,
-0.500038743019104,
0.5678554177284... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JoBeer/eclassTrainST | JoBeer | 2023-01-07T12:10:51Z | 22 | 0 | null | [
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | 2023-01-07T12:10:51Z | 2022-11-29T07:05:17.000Z | 2022-11-29T07:05:17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: entailment
dtype: string
- name: contradiction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 327174992
num_examples: 698880
- name: eval
num_bytes: 219201779
num_examples: 450912
download_size: 46751846
dataset_size: 546376771
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "eclassTrainST"
This NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard. | [
-0.4831947982311249,
-0.415345162153244,
-0.20522324740886688,
-0.01593073271214962,
-0.4471610486507416,
-0.21766211092472076,
-0.09189309179782867,
-0.23957134783267975,
0.20307625830173492,
0.30162280797958374,
-0.9374881386756897,
-0.4014012813568115,
-0.21549159288406372,
0.1093944832... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SuperNova672/cord-10k-processed | SuperNova672 | 2022-11-29T08:46:09Z | 22 | 0 | null | [
"region:us"
] | 2022-11-29T08:46:09Z | 2022-11-29T08:45:59.000Z | 2022-11-29T08:45:59 | ---
dataset_info:
features:
- name: data
dtype: string
splits:
- name: train
num_bytes: 524148223
num_examples: 695729
download_size: 275228391
dataset_size: 524148223
---
# Dataset Card for "cord-10k-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5959587693214417,
-0.2550618350505829,
0.16726398468017578,
0.48043307662010193,
-0.3534904718399048,
0.3849775195121765,
0.1690993458032608,
-0.18631266057491302,
1.0149469375610352,
0.4785049855709076,
-0.8844701051712036,
-0.6698340773582458,
-0.49009811878204346,
-0.0690061151981353... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucadiliello/hotpot_as2 | lucadiliello | 2022-11-29T11:24:51Z | 22 | 0 | null | [
"region:us"
] | 2022-11-29T11:24:51Z | 2022-11-29T11:21:40.000Z | 2022-11-29T11:21:40 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 132583963
num_examples: 489238
- name: dev
num_bytes: 6483895
num_examples: 25295
- name: test
num_bytes: 6364224
num_examples: 24846
download_size: 55519634
dataset_size: 145432082
---
# Dataset Card for "hotpot_as2"
Answer Sentence Selection version of the HotpotQA dataset. For more info, check out the original [repository](https://github.com/lucadiliello/answer-selection). | [
-0.4320087432861328,
-0.6418484449386597,
-0.02496708557009697,
0.38734519481658936,
-0.3692479729652405,
-0.25612643361091614,
-0.10742858052253723,
-0.053143057972192764,
0.3939346373081207,
0.8517022132873535,
-0.49659255146980286,
-0.2802112102508545,
-0.37652891874313354,
0.0128234662... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DTU54DL/common-accent | DTU54DL | 2022-11-30T13:25:07Z | 22 | 2 | acronym-identification | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-11-30T13:25:07Z | 2022-11-30T07:46:58.000Z | 2022-11-30T07:46:58 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: accent
dtype: string
splits:
- name: train
num_bytes: 471755846.3910719
num_examples: 10000
- name: test
num_bytes: 19497172.25755167
num_examples: 451
download_size: 436911322
dataset_size: 491253018.6486236
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.47841677069664,
-0.5084842443466187,
0.14602938294410706,
0.278889000415802,
-0.21702472865581512,
0.24832050502300262,
-0.3366999328136444,
-0.3758932054042816,
0.6720380783081055,
0.6457639932632446,
-0.9167346358299255,
-1.2200127840042114,
-0.7551794052124023,
0.07273735105991364,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
proteinea/solubility | proteinea | 2023-01-16T14:43:54Z | 22 | 0 | null | [
"license:mit",
"doi:10.57967/hf/1103",
"region:us"
] | 2023-01-16T14:43:54Z | 2022-12-12T13:17:49.000Z | 2022-12-12T13:17:49 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qag_ruquad | lmqg | 2022-12-18T07:59:33Z | 22 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
  "size_categories:1K<n<10K",
"source_datasets:lmqg/qg_ruquad",
"language:ru",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-18T07:59:33Z | 2022-12-18T07:05:48.000Z | 2022-12-18T07:05:48 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: ru
multilinguality: monolingual
size_categories: 1K<n<10K
source_datasets: lmqg/qg_ruquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_ruquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on RUQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is intended to be used to train a model for question & answer generation.
  Success on this task is typically measured by achieving a high BLEU-4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Russian (ru)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": " Everybody , как и хотела Мадонна, выпускают синглом. При нулевом бюджете на раскрутку фото певицы решают не помещать на обложке, чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы . Everybody поднимается на 3-е место в чарте Hot Dance Club Songs, а потом на 107 место в основном, немного не дотянув до первой сотни Hot 100 журнала Billboard[91]. Менеджмент считает это отличным результатом, учитывая нулевые затраты на пиар, и хочет убедиться, что взлёт Everybody не случаен. По просьбе Мадонны вместо Каминса берут более опытного штатного аранжировщика Warner Bros. Records Регги Лукаса (англ.)русск.. Второй сингл Burning Up тоже достигает в чарте танцевальных хитов 3-го места, повторив успех Everybody . И только после этого Мадонне позволяют арендовать студию для записи первого альбома[91].",
"questions": [ "При каком бюджете на раскрутку фото певицы решают не помещать на обложке ?", "Какой альбом Мадонны выпускают синглом?", "Имя более опытного штатного аранжировщика берут по просьбе Мадонны вместо Каминсаболее ?", "Почему при нулевом бджете фото певицы решают не помещать на обложке ?", "На каое место Everybody поднимается в чарте Hot Dance Club Songs?" ],
"answers": [ "При нулевом", " Everybody ", "Warner Bros", "чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы ", "на 3-е место" ],
"questions_answers": "question: При каком бюджете на раскрутку фото певицы решают не помещать на обложке ?, answer: При нулевом | question: Какой альбом Мадонны выпускают синглом?, answer: Everybody | question: Имя более опытного штатного аранжировщика берут по просьбе Мадонны вместо Каминсаболее ?, answer: Warner Bros | question: Почему при нулевом бджете фото певицы решают не помещать на обложке ?, answer: чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы | question: На каое место Everybody поднимается в чарте Hot Dance Club Songs?, answer: на 3-е место"
}
```
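The flattened `questions_answers` string above can be split back into (question, answer) pairs. A minimal sketch, assuming pairs are joined with `" | "` and each pair follows the `question: …, answer: …` layout shown in the example (the helper name is ours):

```python
def parse_questions_answers(flat):
    """Split a flattened questions_answers string into (question, answer) tuples."""
    pairs = []
    for chunk in flat.split(" | "):
        chunk = chunk.strip()
        if not chunk.startswith("question: "):
            continue  # skip malformed chunks
        body = chunk[len("question: "):]
        # Split only on the first ", answer: " so commas inside the
        # question or answer text survive.
        question, _, answer = body.partition(", answer: ")
        pairs.append((question.strip(), answer.strip()))
    return pairs

flat = ("question: What was released as a single?, answer: Everybody | "
        "question: Which chart did it enter?, answer: Hot Dance Club Songs")
print(parse_questions_answers(flat))
# → [('What was released as a single?', 'Everybody'),
#    ('Which chart did it enter?', 'Hot Dance Club Songs')]
```

Note this assumes answers never contain the literal separator `" | "`; pairs with such answers would be split incorrectly.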
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|10407| 4079 | 4017|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.7307727336883545,
-0.8452824950218201,
0.3163757622241974,
0.07261453568935394,
-0.4215214252471924,
0.00003462937820586376,
0.044546838849782944,
0.03515467792749405,
0.47762373089790344,
0.3730544149875641,
-0.8885953426361084,
-0.6797294020652771,
-0.2977849841117859,
0.0774898827075... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joelniklaus/legal_case_document_summarization | joelniklaus | 2023-02-02T23:52:54Z | 22 | 9 | null | [
"region:us"
] | 2023-02-02T23:52:54Z | 2022-12-30T20:54:10.000Z | 2022-12-30T20:54:10 | # Dataset Card for LegalCaseDocumentSummarization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/Law-AI/summarization)
- **Repository:** [Zenodo](https://zenodo.org/record/7152317#.Y69PkeKZODW)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
| [
-0.37500134110450745,
-0.47541412711143494,
0.21602573990821838,
0.19919198751449585,
-0.47220319509506226,
0.26885515451431274,
-0.4058643579483032,
-0.28318777680397034,
0.5535204410552979,
0.8623065948486328,
-0.7089694142341614,
-1.3207148313522339,
-0.7263079881668091,
-0.043992571532... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jahjinx/IMDb_movie_reviews | jahjinx | 2023-01-08T15:47:19Z | 22 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | 2023-01-08T15:47:19Z | 2023-01-07T22:36:33.000Z | 2023-01-07T22:36:33 | ---
pretty_name: IMDb
task_categories:
- text-classification
task_ids:
- sentiment-classification
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
---
# Dataset Card for IMDb Movie Reviews
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Total amount of disk used:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This is a custom train/test/validation split of the IMDb Large Movie Review Dataset available from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
#### IMDb_movie_reviews
An example of 'train':
```
{
"text": "Beautifully photographed and ably acted, generally, but the writing is very slipshod. There are scenes of such unbelievability that there is no joy in the watching. The fact that the young lover has a twin brother, for instance, is so contrived that I groaned out loud. And the "emotion-light bulb connection" seems gimmicky, too.<br /><br />I don\'t know, though. If you have a few glasses of wine and feel like relaxing with something pretty to look at with a few flaccid comedic scenes, this is a pretty good movie. No major effort on the part of the viewer required. But Italian film, especially Italian comedy, is usually much, much better than this."
"label": 0,
}
```
### Data Fields
The data fields are the same among all splits.
#### IMDb_movie_reviews
- `text`: a `string` feature.
- `label`: a classification label, with values `neg` (0), `pos` (1).
### Data Splits
| name | train | validation | test |
|------------------|------:|-----------:|------:|
|IMDb_movie_reviews| 36000 | 4000 | 10000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
[More Information Needed] | [
-0.800256609916687,
-0.5575896501541138,
0.17847734689712524,
0.069545678794384,
-0.6088320016860962,
-0.02266397327184677,
-0.08634961396455765,
-0.18747110664844513,
0.7182348370552063,
0.31750038266181946,
-0.7584106922149658,
-0.6741015911102295,
-0.523479700088501,
0.15627579391002655... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
svjack/bloom-dialogue-generate-ds-zh | svjack | 2023-01-26T03:53:12Z | 22 | 0 | null | [
"region:us"
] | 2023-01-26T03:53:12Z | 2023-01-26T03:52:16.000Z | 2023-01-26T03:52:16 | ---
dataset_info:
features:
- name: question
dtype: string
- name: dialogue_text
dtype: string
- name: dialogue
sequence: string
- name: repo
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 98021681
num_examples: 24297
download_size: 101459282
dataset_size: 98021681
---
# Dataset Card for "bloom-dialogue-generate-ds-zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5867403745651245,
-0.43485748767852783,
0.5253360867500305,
0.22172345221042633,
-0.1197822317481041,
-0.0012893659295514226,
0.10222177952528,
-0.01387739647179842,
0.8145567774772644,
0.47486183047294617,
-1.4178199768066406,
-0.8105923533439636,
-0.3416211009025574,
-0.32368993759155... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/miracl-en-corpus-22-12 | Cohere | 2023-02-06T11:54:52Z | 22 | 0 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-02-06T11:54:52Z | 2023-02-02T23:21:21.000Z | 2023-02-02T23:21:21 | ---
annotations_creators:
- expert-generated
language:
- en
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare these query embeddings with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-en-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it reflects the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
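As a worked illustration (not the official MIRACL evaluation code), the two metrics reported above can be sketched in plain Python, assuming binary relevance and a toy ranking:

```python
import math

def hit_at_k(ranked_doc_ids, relevant_ids, k=3):
    # 1.0 if at least one relevant document appears in the top-k results.
    return 1.0 if any(d in relevant_ids for d in ranked_doc_ids[:k]) else 0.0

def ndcg_at_k(ranked_doc_ids, relevant_ids, k=10):
    # Binary-relevance nDCG@k: DCG of this ranking divided by the ideal DCG.
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked_doc_ids[:k]) if d in relevant_ids)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant_ids))))
    return dcg / ideal if ideal > 0 else 0.0

# Toy example: documents "b" and "d" are relevant; "b" is ranked second.
ranking = ["a", "b", "c", "d", "e"]
relevant = {"b", "d"}
print(hit_at_k(ranking, relevant, k=3))              # → 1.0
print(round(ndcg_at_k(ranking, relevant, k=10), 3))  # → 0.651
```

A perfect ranking (all relevant documents first) yields an nDCG@k of 1.0.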
| [
-0.629565954208374,
-0.810245156288147,
0.3239307701587677,
0.24838297069072723,
-0.05481919273734093,
-0.06178038939833641,
-0.3013111352920532,
-0.5093767046928406,
0.550499677658081,
0.2260749638080597,
-0.5536705255508423,
-1.010604739189148,
-0.7051231265068054,
0.3405207097530365,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
neulab/mconala | neulab | 2023-02-10T19:01:31Z | 22 | 2 | null | [
"task_categories:text-generation",
"task_categories:translation",
"size_categories:n<1K",
"language:es",
"language:ja",
"language:ru",
"license:cc-by-sa-4.0",
"code generation",
"arxiv:2203.08388",
"region:us"
] | 2023-02-10T19:01:31Z | 2023-02-10T18:08:54.000Z | 2023-02-10T18:08:54 | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
- translation
language:
- es
- ja
- ru
tags:
- code generation
pretty_name: mconala
size_categories:
- n<1K
---
# Dataset Card for MCoNaLa
## Dataset Description
- **Homepage:** https://github.com/zorazrw/multilingual-conala
- **Repository:** https://github.com/zorazrw/multilingual-conala
- **Paper:** https://arxiv.org/pdf/2203.08388.pdf
- **Leaderboard:** https://explainaboard.inspiredco.ai/leaderboards?show_mine=false&sort_dir=desc&sort_field=created_at&dataset=mconala
### Dataset Summary
MCoNaLa is a Multilingual Code/Natural Language Challenge dataset with 896 NL-Code pairs in three languages: Spanish, Japanese, and Russian.
### Languages
Spanish, Japanese, Russian; Python
## Dataset Structure
### How to Use
```bash
from datasets import load_dataset
# Spanish subset
load_dataset("neulab/mconala", "es")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 341
})
})
# Japanese subset
load_dataset("neulab/mconala", "ja")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 210
})
})
# Russian subset
load_dataset("neulab/mconala", "ru")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 345
})
})
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|question_id|int|StackOverflow post id of the sample|
|intent|string|Title of the Stackoverflow post as the initial NL intent|
|rewritten_intent|string|nl intent rewritten by human annotators|
|snippet|string|Python code solution to the NL intent|
### Data Splits
The dataset contains a single test split with 341, 210, and 345 samples in Spanish, Japanese, and Russian, respectively.
### Citation Information
```
@article{wang2022mconala,
title={MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages},
author={Zhiruo Wang, Grace Cuenca, Shuyan Zhou, Frank F. Xu, Graham Neubig},
journal={arXiv preprint arXiv:2203.08388},
year={2022}
}
``` | [
-0.5135529637336731,
-0.6898638010025024,
0.11297175288200378,
0.3292464017868042,
-0.14949937164783478,
0.05412423610687256,
-0.49976983666419983,
-0.2772825360298157,
0.6852227449417114,
0.44297048449516296,
-0.646761417388916,
-1.0516464710235596,
-0.4383601248264313,
0.5178886651992798... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sayakpaul/instructpix2pix-demo | sayakpaul | 2023-02-22T04:38:14Z | 22 | 0 | null | [
"arxiv:2211.09800",
"region:us"
] | 2023-02-22T04:38:14Z | 2023-02-21T12:21:29.000Z | 2023-02-21T12:21:29 | ---
dataset_info:
features:
- name: input
dtype: string
- name: edit
dtype: string
- name: output
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 2456199.0
num_examples: 5
download_size: 2460397
dataset_size: 2456199.0
---
# Dataset Card for "instructpix2pix-demo"
The dataset was created using [this notebook](https://colab.research.google.com/gist/sayakpaul/f90aa06f8f89c831f798dd5b3939818b/scratchpad.ipynb).
Paper reference: [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) | [
-0.31203559041023254,
-0.5538146495819092,
0.4525805413722992,
-0.05878252536058426,
-0.1661251038312912,
-0.15764939785003662,
-0.0006799189723096788,
-0.17751269042491913,
0.10068479925394058,
0.26232463121414185,
-0.7568855881690979,
-0.5724978446960449,
-0.22845622897148132,
-0.2772314... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jm0727/spider-val | jm0727 | 2023-02-21T15:12:08Z | 22 | 0 | null | [
"region:us"
] | 2023-02-21T15:12:08Z | 2023-02-21T15:00:38.000Z | 2023-02-21T15:00:38 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kanishka/comps | kanishka | 2023-09-16T15:09:24Z | 22 | 1 | null | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2210.01963",
"region:us"
] | 2023-09-16T15:09:24Z | 2023-03-05T18:47:23.000Z | 2023-03-05T18:47:23 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license: apache-2.0
multilinguality:
- monolingual
pretty_name: COMPS
size_categories:
- 10K<n<100K
source_datasets:
- original
---
# Dataset Card for "COMPS"
## Dataset Description
COMPS is a dataset of minimal pair sentences in English that enables testing
knowledge of concepts and their properties in language models (LMs).
Specifically, it tests the ability of LMs to attribute properties to everyday
concepts, and demonstrate reasoning compatible with property inheritance, where
subordinate concepts inherit the properties of their superordinate (hypernyms).
- **Homepage:** [https://github.com/kanishkamisra/comps/](https://github.com/kanishkamisra/comps/)
- **Repository:** [https://github.com/kanishkamisra/comps/](https://github.com/kanishkamisra/comps/)
- **Paper:** [arxiv](https://arxiv.org/abs/2210.01963)
- **Point of Contact:** [Kanishka Misra](https://kanishka.website)
### Citation Information
```
@inproceedings{misra-etal-2023-comps,
title = "{COMPS}: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models",
author = "Misra, Kanishka and
Rayz, Julia and
Ettinger, Allyson",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.eacl-main.213",
doi = "10.18653/v1/2023.eacl-main.213",
pages = "2928--2949",
abstract = "A characteristic feature of human semantic cognition is its ability to not only store and retrieve the properties of concepts observed through experience, but to also facilitate the inheritance of properties (can breathe) from superordinate concepts (animal) to their subordinates (dog){---}i.e. demonstrate property inheritance. In this paper, we present COMPS, a collection of minimal pair sentences that jointly tests pre-trained language models (PLMs) on their ability to attribute properties to concepts and their ability to demonstrate property inheritance behavior. Analyses of 22 different PLMs on COMPS reveal that they can easily distinguish between concepts on the basis of a property when they are trivially different, but find it relatively difficult when concepts are related on the basis of nuanced knowledge representations. Furthermore, we find that PLMs can show behaviors suggesting successful property inheritance in simple contexts, but fail in the presence of distracting information, which decreases the performance of many models sometimes even below chance. This lack of robustness in demonstrating simple reasoning raises important questions about PLMs{'} capacity to make correct inferences even when they appear to possess the prerequisite knowledge.",
}
```
| [
-0.43953651189804077,
-0.6973034739494324,
0.07218720018863678,
0.03029484674334526,
-0.13148294389247894,
0.11544930934906006,
-0.334536075592041,
-0.32452288269996643,
0.13267315924167633,
0.3281952440738678,
-0.5238663554191589,
-0.4411774277687073,
-0.45573800802230835,
0.0073844878934... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigcode/the-stack-march-sample-special-tokens-stripped | bigcode | 2023-03-08T15:23:31Z | 22 | 0 | null | [
"region:us"
] | 2023-03-08T15:23:31Z | 2023-03-08T15:07:56.000Z | 2023-03-08T15:07:56 | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 3034084423
num_examples: 746856
download_size: 1107347598
dataset_size: 3034084423
---
# Dataset Card for "the-stack-march-sample-special-tokens-stripped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4651811122894287,
-0.36498400568962097,
-0.1444152444601059,
0.1769159585237503,
-0.48303744196891785,
0.25589001178741455,
0.4533097743988037,
0.03434271365404129,
1.290639877319336,
0.7386158108711243,
-1.0944095849990845,
-0.7158120274543762,
-0.44667476415634155,
-0.121320940554142,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
koaradearu/nva-aria | koaradearu | 2023-03-15T07:09:03Z | 22 | 0 | null | [
"region:us"
] | 2023-03-15T07:09:03Z | 2023-03-15T07:08:14.000Z | 2023-03-15T07:08:14 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dahoas/rl-prompt-dataset | Dahoas | 2023-03-17T14:08:30Z | 22 | 2 | null | [
"region:us"
] | 2023-03-17T14:08:30Z | 2023-03-17T13:57:19.000Z | 2023-03-17T13:57:19 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 331075688.0
num_examples: 201417
- name: test
num_bytes: 7649255
num_examples: 5103
download_size: 206459232
dataset_size: 338724943.0
---
# Dataset Card for "rl-prompt-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6681001782417297,
-0.4349440932273865,
0.23245832324028015,
0.2148953378200531,
-0.18745967745780945,
0.1469653844833374,
0.2216058373451233,
-0.06458176672458649,
0.809705376625061,
0.41273921728134155,
-1.2459447383880615,
-0.7316306233406067,
-0.4153897762298584,
0.052061568945646286... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Deysi/sentences-and-emotions | Deysi | 2023-03-21T22:54:16Z | 22 | 3 | null | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | 2023-03-21T22:54:16Z | 2023-03-21T22:23:47.000Z | 2023-03-21T22:23:47 | ---
dataset_info:
features:
- name: utterance
dtype: string
- name: emotion
dtype: string
splits:
- name: test
num_bytes: 62487
num_examples: 816
- name: valid
num_bytes: 39971
num_examples: 493
- name: train
num_bytes: 188423
num_examples: 2405
download_size: 36170
dataset_size: 290881
task_categories:
- text-classification
language:
- en
pretty_name: Sentences and emotions
size_categories:
- 100K<n<1M
---
# Dataset Card for "sentences-and-emotions"
Recognizing Emotion Cause in Conversations. Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Pengfei Hong, Romila Ghosh, Abhinaba Roy, Niyati Chhaya, Alexander Gelbukh, Rada Mihalcea. Cognitive Computation (2021). | [
-0.4617821276187897,
-1.0395411252975464,
0.6613580584526062,
0.25710561871528625,
0.03661292418837547,
-0.2422589808702469,
-0.2024330049753189,
-0.33669808506965637,
0.16198848187923431,
0.3908969461917877,
-1.1456034183502197,
-0.4652784764766693,
-0.8115112781524658,
0.0530291125178337... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maximoss/lingnli-multi-mt | maximoss | 2023-11-26T16:23:38Z | 22 | 1 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"size_categories:10K<n<100K",
"language:el",
"language:fr",
"language:it",
"language:es",
"language:pt",
"language:ko",
"language:fi",
"language:lt",
"language:bg",
"li... | 2023-11-26T16:23:38Z | 2023-03-25T12:06:26.000Z | 2023-03-25T12:06:26 | ---
license: bsd-2-clause
language:
- el
- fr
- it
- es
- pt
- ko
- fi
- lt
- bg
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This repository contains a collection of machine translations of the [LingNLI](https://github.com/Alicia-Parrish/ling_in_loop) dataset
into 9 different languages (Bulgarian, Finnish, French, Greek, Italian, Korean, Lithuanian, Portuguese, Spanish). The goal is to predict textual entailment (does sentence A
imply/contradict/neither sentence B), which is a classification task (given two sentences,
predict one of three labels). It is here formatted in the same manner as the widely used [XNLI](https://huggingface.co/datasets/xnli) dataset for convenience.
If you want to use this dataset in only one of the languages provided here, you can filter the data by keeping only the rows whose `language` column has the value you want.
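As a minimal sketch of that per-language selection (toy rows with made-up values, mirroring the schema listed under Data Fields; with the `datasets` library the equivalent would be `ds.filter(lambda r: r["language"] == "fr")`):

```python
# Toy rows mimicking this dataset's schema (the values are made up).
rows = [
    {"language": "fr", "premise": "p1", "hypothesis": "h1", "label": 0},
    {"language": "es", "premise": "p2", "hypothesis": "h2", "label": 1},
    {"language": "fr", "premise": "p3", "hypothesis": "h3", "label": 2},
]

# Keep only the French sentence pairs.
fr_only = [r for r in rows if r["language"] == "fr"]
print(len(fr_only))  # → 2
```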
### Supported Tasks and Leaderboards
This dataset can be used for the task of Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), which is a sentence-pair classification task.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `language`: The language in which the pair of sentences is given.
- `premise`: The machine translated premise in the target language.
- `hypothesis`: The machine translated hypothesis in the target language.
- `label`: The classification label, with possible values 0 (`entailment`), 1 (`neutral`), 2 (`contradiction`).
- `label_text`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2).
- `premise_original`: The original premise from the English source dataset.
- `hypothesis_original`: The original hypothesis from the English source dataset.
### Data Splits
For the whole dataset (LitL and LotS subsets):
| language |train|validation|
|-------------|----:|---------:|
|all_languages|269865| 44037|
|el-gr |29985| 4893|
|fr |29985| 4893|
|it |29985| 4893|
|es |29985| 4893|
|pt |29985| 4893|
|ko |29985| 4893|
|fi |29985| 4893|
|lt |29985| 4893|
|bg |29985| 4893|
For LitL subset:
| language |train|validation|
|-------------|----:|---------:|
|all_languages|134955| 21825|
|el-gr |14995| 2425|
|fr |14995| 2425|
|it |14995| 2425|
|es |14995| 2425|
|pt |14995| 2425|
|ko |14995| 2425|
|fi |14995| 2425|
|lt |14995| 2425|
|bg |14995| 2425|
For LotS subset:
| language |train|validation|
|-------------|----:|---------:|
|all_languages|134910| 22212|
|el-gr |14990| 2468|
|fr |14990| 2468|
|it |14990| 2468|
|es |14990| 2468|
|pt |14990| 2468|
|ko |14990| 2468|
|fi |14990| 2468|
|lt |14990| 2468|
|bg |14990| 2468|
## Dataset Creation
The two subsets of the original dataset were machine translated using the latest neural machine translation [opus-mt-tc-big](https://huggingface.co/models?sort=downloads&search=opus-mt-tc-big) models available for the respective languages.
Running the translations lasted from March 25, 2023 until April 8, 2023.
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
**BibTeX:**
````BibTeX
@inproceedings{parrish-etal-2021-putting-linguist,
title = "Does Putting a Linguist in the Loop Improve {NLU} Data Collection?",
author = "Parrish, Alicia and
Huang, William and
Agha, Omar and
Lee, Soo-Hwan and
Nangia, Nikita and
Warstadt, Alexia and
Aggarwal, Karmanya and
Allaway, Emily and
Linzen, Tal and
Bowman, Samuel R.",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.421",
doi = "10.18653/v1/2021.findings-emnlp.421",
pages = "4886--4901",
abstract = "Many crowdsourced NLP datasets contain systematic artifacts that are identified only after data collection is complete. Earlier identification of these issues should make it easier to create high-quality training and evaluation data. We attempt this by evaluating protocols in which expert linguists work {`}in the loop{'} during data collection to identify and address these issues by adjusting task instructions and incentives. Using natural language inference as a test case, we compare three data collection protocols: (i) a baseline protocol with no linguist involvement, (ii) a linguist-in-the-loop intervention with iteratively-updated constraints on the writing task, and (iii) an extension that adds direct interaction between linguists and crowdworkers via a chatroom. We find that linguist involvement does not lead to increased accuracy on out-of-domain test sets compared to baseline, and adding a chatroom has no effect on the data. Linguist involvement does, however, lead to more challenging evaluation data and higher accuracy on some challenge sets, demonstrating the benefits of integrating expert analysis during data collection.",
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and
Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
abstract = "This paper presents OPUS-MT a project that focuses on the development of free resources and tools for machine translation. The current status is a repository of over 1,000 pre-trained neural machine translation models that are ready to be launched in on-line translation services. For this we also provide open source implementations of web applications that can run efficiently on average desktop hardware with a straightforward setup and installation.",
}
````
**ACL:**
Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alexia Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, and Samuel R. Bowman. 2021. [Does Putting a Linguist in the Loop Improve NLU Data Collection?](https://aclanthology.org/2021.findings-emnlp.421). In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 4886–4901, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jörg Tiedemann and Santhosh Thottingal. 2020. [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61). In *Proceedings of the 22nd Annual Conference of the European Association for Machine Translation*, pages 479–480, Lisboa, Portugal. European Association for Machine Translation.
### Acknowledgements
These translations of the original dataset were done as part of a research project supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, _Institut Cybersécurité Occitanie_, funded by Région Occitanie, France.
### Contributions
[More Information Needed] | [
-0.3469681739807129,
-0.6979289054870605,
0.188580721616745,
0.19558240473270416,
-0.1270742267370224,
-0.10596780478954315,
-0.5984011888504028,
-0.43811464309692383,
0.46264106035232544,
0.5079566836357117,
-0.42519450187683105,
-0.6953476667404175,
-0.535273015499115,
0.4508042335510254... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/ggponc2 | bigbio | 2023-04-05T01:15:05Z | 22 | 4 | null | [
"multilinguality:monolingual",
"language:de",
"region:us"
] | 2023-04-05T01:15:05Z | 2023-04-01T16:49:04.000Z | 2023-04-01T16:49:04 | ---
language:
- de
bigbio_language:
- German
multilinguality: monolingual
pretty_name: GGPONC2
homepage: https://www.leitlinienprogramm-onkologie.de/projekte/ggponc-english/
bigbio_pubmed: false
bigbio_public: false
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for GGPONC2
## Dataset Description
- **Homepage:** https://www.leitlinienprogramm-onkologie.de/projekte/ggponc-english/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER
The GGPONC project aims to provide a freely distributable corpus of German medical text for NLP researchers.
Clinical guidelines are particularly suitable to create such corpora, as they contain no protected health information
(PHI), which distinguishes them from other kinds of medical text.
The second version of the corpus (GGPONC 2.0) consists of 30 German oncology guidelines with 1.87 million tokens.
It has been completely manually annotated on the entity level by 7 medical students using the INCEpTION platform over a
time frame of 6 months in more than 1200 hours of work. This makes GGPONC 2.0 the largest annotated, freely
distributable corpus of German medical text at the moment.
Annotated entities are Findings (Diagnosis / Pathology, Other Finding), Substances (Clinical Drug, Nutrients / Body
Substances, External Substances) and Procedures (Therapeutic, Diagnostic), as well as Specifications for these entities.
In total, annotators have created more than 200000 entity annotations. In addition, fragment relationships have been
annotated to explicitly indicate elliptical coordinated noun phrases, a common phenomenon in German text.
## Citation Information
```
@inproceedings{borchert-etal-2022-ggponc,
title = "{GGPONC} 2.0 - The {G}erman Clinical Guideline Corpus for Oncology: Curation Workflow, Annotation Policy, Baseline {NER} Taggers",
author = "Borchert, Florian and
Lohr, Christina and
Modersohn, Luise and
Witt, Jonas and
Langer, Thomas and
Follmann, Markus and
Gietzelt, Matthias and
Arnrich, Bert and
Hahn, Udo and
Schapranow, Matthieu-P.",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.389",
pages = "3650--3660",
}
```
| [
-0.3860339820384979,
-0.5497572422027588,
0.5454134941101074,
0.04426095634698868,
-0.3902949392795563,
-0.5192790627479553,
-0.5440723299980164,
-0.6068384051322937,
0.12486745417118073,
0.6049361824989319,
-0.22769276797771454,
-0.8770233988761902,
-0.8459339141845703,
0.2660104632377624... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cannin/biostars_qa | author: cannin | last modified: 2023-04-06T14:18:09Z | created: 2023-04-03T22:10:57Z | downloads: 22 | likes: 2 | tags: task_categories:text-classification, task_categories:question-answering, task_categories:text-generation, size_categories:1K<n<10K, language:en, license:cc-by-4.0, biology, region:us
---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
- text-generation
language:
- en
tags:
- biology
size_categories:
- 1K<n<10K
---
## Dataset Description
- **BioStars Homepage:** https://www.biostars.org/
- **BioStars Paper:** https://doi.org/10.1371/journal.pcbi.1002216
- **Code Repository (This Dataset):** https://github.com/cannin/biostars_qa
### Dataset Summary
This dataset contains 4803 question/answer pairs extracted from the [BioStars](https://www.biostars.org/) website. The site focuses on bioinformatics, computational genomics, and biological data analysis.
## Dataset Structure
### Data Fields
The data contains INSTRUCTION, RESPONSE, SOURCE, and METADATA fields. The format is described for [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/data/datasets/README.md)
## Dataset Creation
### Curation Rationale
Question/answer pairs were included only if the answer was accepted and the question had at least one vote.
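A minimal sketch of that inclusion rule on toy records; the field names (`answer_accepted`, `question_votes`) are hypothetical and chosen for illustration, not taken from the BioStars API:

```python
def keep_pair(post: dict) -> bool:
    # Inclusion rule from the card: the answer was accepted
    # and the question received at least one vote.
    return post["answer_accepted"] and post["question_votes"] >= 1

posts = [
    {"answer_accepted": True, "question_votes": 3},   # kept
    {"answer_accepted": True, "question_votes": 0},   # dropped: no votes
    {"answer_accepted": False, "question_votes": 5},  # dropped: not accepted
]
kept = [p for p in posts if keep_pair(p)]
```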
### Source Data
Data collected using the [Biostars API](https://www.biostars.org/info/api/)
## Additional Information
### Dataset Curators
[@cannin](https://github.com/cannin). @cannin has no affiliation with the BioStars project.
### Licensing Information
Apache-2.0
### Citation Information
#### BioStars Project
Cite the original project: https://doi.org/10.1371/journal.pcbi.1002216
#### This Dataset
Citation for this dataset:
```
@misc{Luna2023a,
author = {Augustin Luna},
title = {biostars_qa Dataset},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/cannin/biostars_qa}}
}
```
#### This Dataset Code
Citation for the code to generate this dataset:
```
@misc{Luna2023b,
author = {Augustin Luna},
title = {biostars_qa Code},
year = {2023},
howpublished = {\url{https://github.com/cannin/biostars_qa}}
}
```
hpprc/janli | author: hpprc | last modified: 2023-04-11T04:40:37Z | created: 2023-04-05T12:25:01Z | downloads: 22 | likes: 2 | tags: task_categories:text-classification, task_ids:natural-language-inference, language_creators:other, multilinguality:monolingual, language:ja, license:cc-by-sa-4.0, region:us
---
language:
- ja
language_creators:
- other
multilinguality:
- monolingual
pretty_name: JaNLI
task_categories:
- text-classification
task_ids:
- natural-language-inference
license: cc-by-sa-4.0
---
# Dataset Card for JaNLI
## Table of Contents
- [Dataset Card for JaNLI](#dataset-card-for-janli)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [base](#base)
- [original](#original)
- [Data Fields](#data-fields)
- [base](#base-1)
- [original](#original-1)
- [Data Splits](#data-splits)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/verypluming/JaNLI
- **Repository:** https://github.com/verypluming/JaNLI
- **Paper:** https://aclanthology.org/2021.blackboxnlp-1.26/
### Dataset Summary
The JaNLI (Japanese Adversarial NLI) dataset, inspired by the English HANS dataset, is designed to require an understanding of Japanese linguistic phenomena and to expose the vulnerabilities of NLI models.
### Languages
The language data in JaNLI is in Japanese (BCP-47 [ja-JP](https://www.rfc-editor.org/info/bcp47)).
## Dataset Structure
### Data Instances
To load a specific configuration, pass its configuration name to `load_dataset`:
```python
import datasets as ds
dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
# num_rows: 13680
# })
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
# num_rows: 720
# })
# })
dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli", name="original")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
# num_rows: 13680
# })
# test: Dataset({
# features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
# num_rows: 720
# })
# })
```
#### base
An example looks as follows:
```json
{
'id': 12,
'premise': '若者がフットボール選手を見ている',
'hypothesis': 'フットボール選手を若者が見ている',
'label': 0,
'heuristics': 'overlap-full',
'number_of_NPs': 2,
'semtag': 'scrambling'
}
```
#### original
An example looks as follows:
```json
{
'id': 12,
'sentence_A_Ja': '若者がフットボール選手を見ている',
'sentence_B_Ja': 'フットボール選手を若者が見ている',
'entailment_label_Ja': 0,
'heuristics': 'overlap-full',
'number_of_NPs': 2,
'semtag': 'scrambling'
}
```
### Data Fields
#### base
A version adopting the column names of a typical NLI dataset.
| Name | Description |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| id | The number of the sentence pair. |
| premise | The premise (sentence_A_Ja). |
| hypothesis | The hypothesis (sentence_B_Ja). |
| label | The correct label for the sentence pair (either `entailment` or `non-entailment`); in the setting described in the paper, non-entailment = neutral + contradiction (entailment_label_Ja). |
| heuristics | The heuristics (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset. |
| number_of_NPs | The number of noun phrases in the sentence. |
| semtag | The linguistic phenomena tag. |
#### original
The original version retaining the unaltered column names.
| Name | Description |
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| id | The number of the sentence pair. |
| sentence_A_Ja | The premise. |
| sentence_B_Ja | The hypothesis. |
| entailment_label_Ja | The correct label for this sentence pair (either `entailment` or `non-entailment`); in the setting described in the paper, non-entailment = neutral + contradiction |
| heuristics | The heuristics (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset. |
| number_of_NPs       | The number of noun phrases in the sentence. |
| semtag | The linguistic phenomena tag. |
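Since JaNLI collapses neutral and contradiction into a single `non-entailment` label, predictions from a standard three-way NLI model must be mapped down before evaluation. A minimal sketch (the mapping itself is stated in the label description above; the function name is ours):

```python
def collapse_label(three_way: str) -> str:
    # JaNLI setting: non-entailment = neutral + contradiction.
    return "entailment" if three_way == "entailment" else "non-entailment"

preds = ["entailment", "neutral", "contradiction"]
collapsed = [collapse_label(p) for p in preds]
```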
### Data Splits
| name | train | validation | test |
| -------- | -----: | ---------: | ---: |
| base | 13,680 | | 720 |
| original | 13,680 | | 720 |
### Annotations
The annotation process for this Japanese NLI dataset involves tagging each pair (P, H) of a premise and hypothesis with a label for structural pattern and linguistic phenomenon.
The structural relationship between premise and hypothesis sentences is classified into five patterns, with each pattern associated with a type of heuristic that can lead to incorrect predictions of the entailment relation.
Additionally, 11 categories of Japanese linguistic phenomena and constructions are focused on for generating the five patterns of adversarial inferences.
For each linguistic phenomenon, a template for the premise sentence P is fixed, and multiple templates for hypothesis sentences H are created.
In total, 144 templates for (P, H) pairs are produced.
Each pair of premise and hypothesis sentences is tagged with an entailment label (`entailment` or `non-entailment`), a structural pattern, and a linguistic phenomenon label.
The JaNLI dataset is generated by instantiating each template 100 times, resulting in a total of 14,400 examples.
The same number of entailment and non-entailment examples are generated for each phenomenon.
The structural patterns are annotated with the templates for each linguistic phenomenon, and the ratio of `entailment` and `non-entailment` examples is not necessarily 1:1 for each pattern.
The dataset uses a total of 158 words (nouns and verbs), which occur more than 20 times in the JSICK and JSNLI datasets.
## Additional Information
- [verypluming/JaNLI](https://github.com/verypluming/JaNLI)
- [Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference](https://aclanthology.org/2021.blackboxnlp-1.26/)
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@InProceedings{yanaka-EtAl:2021:blackbox,
author = {Yanaka, Hitomi and Mineshima, Koji},
title = {Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference},
booktitle = {Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP2021)},
url = {https://aclanthology.org/2021.blackboxnlp-1.26/},
year = {2021},
}
```
### Contributions
Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset.
afmck/peanuts-flan-t5-xl | author: afmck | last modified: 2023-04-05T14:09:59Z | created: 2023-04-05T13:16:59Z | downloads: 22 | likes: 4 | tags: task_categories:text-to-image, size_categories:10K<n<100K, language:en, license:apache-2.0, region:us
---
license: apache-2.0
task_categories:
- text-to-image
language:
- en
pretty_name: Peanuts Dataset (Snoopy and Co.)
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: image
dtype: image
- name: panel_name
dtype: string
- name: characters
sequence: string
- name: themes
sequence: string
- name: color
dtype: string
- name: year
dtype: int64
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2947874869.848
num_examples: 77456
download_size: 0
dataset_size: 2947874869.848
---
# Peanuts Comic Strip Dataset (Snoopy & Co.)

This is a dataset of Peanuts comic strips from `1950/10/02` to `2000/02/13`.
There are `77,456` panels extracted from `17,816` comic strips.
The dataset size is approximately `4.4G`.
Each row in the dataset contains the following fields:
- `image`: `PIL.Image` containing the extracted panel.
- `panel_name`: unique identifier for the row.
- `characters`: `tuple[str, ...]` of characters included in the comic strip the panel is part of.
- `themes`: `tuple[str, ...]` of theme in the comic strip the panel is part of.
- `color`: `str` indicating whether the panel is grayscale or in color.
- `caption`: [BLIP-2_FLAN-T5-XL](https://huggingface.co/docs/transformers/main/model_doc/blip-2) generated caption from the panel.
- `year`: `int` storing the year the specific panel was released.
> **FLAN-T5-XL has a commercial use license and so this dataset can be used for commercial projects. Alternatively use [this similar dataset](https://huggingface.co/datasets/afmck/peanuts-opt-6.7b) that uses OPT-6.7B as the caption pipeline's text model, however it does not permit commercial use.**
Character and theme information was extracted from [Peanuts Wiki (Fandom)](https://peanuts.fandom.com/wiki/Peanuts_Wiki) using [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
Images were extracted from [Peanuts Search](https://peanuts-search.com/).
Only strips with the following characters were extracted:
```
- "Charlie Brown"
- "Sally Brown"
- "Joe Cool" # Snoopy alter-ego
- "Franklin"
- "Violet Gray"
- "Eudora"
- "Frieda"
- "Marcie"
- "Peppermint Patty"
- "Patty"
- "Pig-Pen"
- "Linus van Pelt"
- "Lucy van Pelt"
- "Rerun van Pelt"
- "Schroeder"
- "Snoopy"
- "Shermy"
- "Spike"
- "Woodstock"
- "the World War I Flying Ace" # Snoopy alter-ego
```
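Because the `characters` field is a sequence per panel, rows can be filtered for particular characters. A sketch on toy rows (with the `datasets` library, the same predicate can be passed to `Dataset.filter`):

```python
wanted = {"Snoopy", "Woodstock"}

def has_wanted_character(row: dict) -> bool:
    # True if the panel's strip features at least one character of interest.
    return bool(wanted.intersection(row["characters"]))

rows = [
    {"panel_name": "a", "characters": ("Snoopy", "Charlie Brown")},
    {"panel_name": "b", "characters": ("Lucy van Pelt", "Schroeder")},
]
matching = [r["panel_name"] for r in rows if has_wanted_character(r)]
```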
### Extraction Details
Panel detection and extraction was done using the following codeblock:
```python
import cv2

def check_contour(cnt):
    # Reject regions that are too small or have extreme aspect ratios.
    area = cv2.contourArea(cnt)
    if area < 600:
        return False
    _, _, w, h = cv2.boundingRect(cnt)
    if w / h < 1 / 2:
        return False
    if w / h > 2 / 1:
        return False
    return True

def get_panels_from_image(path):
    panels = []
    original_img = cv2.imread(path)
    # Binarize with Otsu thresholding, then clean up with a morphological opening.
    gray = cv2.cvtColor(original_img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
    invert = 255 - opening
    # Each external contour that passes check_contour is cropped as a panel.
    cnts, _ = cv2.findContours(invert, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in cnts:
        if not check_contour(cnt):
            continue
        x, y, w, h = cv2.boundingRect(cnt)
        roi = original_img[y:y + h, x:x + w]
        panels.append(roi)
    return panels
```
`check_contour` will reject panels with `area < 600` or with aspect ratios larger than `2` or smaller than `0.5`.
Grayscale detection was done using the following codeblock:
```python
import cv2
import numpy as np

def is_grayscale(panel):
    # A panel is considered grayscale when its mean chroma in LAB space is low.
    LAB_THRESHOLD = 10.0
    img = cv2.cvtColor(panel, cv2.COLOR_RGB2LAB)
    _, ea, eb = cv2.split(img)
    # absdiff avoids uint8 wraparound when a - b would be negative.
    de = cv2.absdiff(ea, eb)
    mean_e = np.mean(de)
    return mean_e < LAB_THRESHOLD
```
Captioning was done using the standard BLIP-2 pipeline shown in the [Huggingface docs](https://huggingface.co/docs/transformers/main/model_doc/blip-2) using beam search over 10 beams and a repetition penalty of `2.0`.
Raw captions are extracted and no postprocessing is applied. You may wish to normalise captions (such as replacing "cartoon" with "peanuts cartoon") or incorporate extra metadata into prompts.
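As one example of the normalisation suggested above, a sketch that anchors generic BLIP-2 captions to the domain (the replacement rule is only the one mentioned in the text; extend it as needed):

```python
def normalise_caption(caption: str) -> str:
    # Replace the generic word "cartoon" with "peanuts cartoon",
    # guarding against double replacement on already-normalised captions.
    if "peanuts cartoon" in caption:
        return caption
    return caption.replace("cartoon", "peanuts cartoon")
```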
fschlatt/trump-tweets | author: fschlatt | last modified: 2023-04-19T11:41:59Z | created: 2023-04-19T10:35:29Z | downloads: 22 | likes: 1 | tags: language:en, license:cc0-1.0, region:us
---
license: cc0-1.0
language:
- en
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: is_retweet
dtype: bool
- name: is_deleted
dtype: bool
- name: device
dtype: string
- name: favorites
dtype: int64
- name: retweets
dtype: int64
- name: datetime
dtype: timestamp[s]
- name: is_flagged
dtype: bool
splits:
- name: train
num_bytes: 10593265
num_examples: 56571
download_size: 0
dataset_size: 10593265
---
This is a clone of the Trump Twitter Archive Kaggle dataset found here: https://www.kaggle.com/datasets/headsortails/trump-twitter-archive
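Using the fields listed in the schema above, a sketch of filtering for original, non-deleted tweets (toy rows for illustration; with the `datasets` library, the same predicate can be passed to `Dataset.filter`):

```python
def keep_tweet(row: dict) -> bool:
    # Keep tweets that are neither retweets nor deleted,
    # using the is_retweet / is_deleted fields from the schema.
    return not row["is_retweet"] and not row["is_deleted"]

rows = [
    {"id": 1, "is_retweet": False, "is_deleted": False},
    {"id": 2, "is_retweet": True, "is_deleted": False},
    {"id": 3, "is_retweet": False, "is_deleted": True},
]
kept_ids = [r["id"] for r in rows if keep_tweet(r)]
```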
tiansz/ChineseSTS | author: tiansz | last modified: 2023-04-20T07:19:37Z | created: 2023-04-20T06:40:04Z | downloads: 22 | likes: 6 | tags: task_categories:sentence-similarity, size_categories:1M<n<10M, language:zh, license:apache-2.0, STS, region:us
---
license: apache-2.0
task_categories:
- sentence-similarity
language:
- zh
tags:
- STS
size_categories:
- 1M<n<10M
---
This is a Chinese sentence-similarity dataset, with similarity labeled as 0 or 1.
This [notebook](https://www.kaggle.com/code/tiansztianszs/chinese-sentence-similarity) documents the full process of working with this dataset. You can also download the dataset from [GitHub](https://github.com/tiansztiansz/Chinese-Text-Similarity).
generative-newsai/news-unmasked | author: generative-newsai | last modified: 2023-04-27T14:30:14Z | created: 2023-04-27T04:52:57Z | downloads: 22 | likes: 1 | tags: task_categories:image-to-text, region:us
---
dataset_info:
features:
- name: image
dtype: image
- name: section
dtype: string
- name: headline
dtype: string
- name: image_id
dtype: string
splits:
- name: train
num_bytes: 5084636867.984
num_examples: 48988
- name: test
num_bytes: 1360809852.398
num_examples: 12247
download_size: 1331950856
dataset_size: 6445446720.382
task_categories:
- image-to-text
pretty_name: NewsUnmasked
---
# Dataset Card for "news-unmasked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aneeshas/imsdb-genre-movie-scripts | author: aneeshas | last modified: 2023-05-07T20:36:37Z | created: 2023-05-07T19:16:43Z | downloads: 22 | likes: 0 | tags: region:us
---
dataset_info:
features:
- name: Action
dtype: string
- name: Horror
dtype: string
- name: Sci-Fi
dtype: string
- name: Comedy
dtype: string
- name: Drama
dtype: string
splits:
- name: train
num_bytes: 180531797
num_examples: 150
download_size: 80225374
dataset_size: 180531797
---
# Dataset Card for "imsdb-genre-movie-scripts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Jumtra/jglue_jsquad | author: Jumtra | last modified: 2023-06-21T01:07:32Z | created: 2023-05-22T11:58:17Z | downloads: 22 | likes: 0 | tags: region:us
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7356910
num_examples: 67301
download_size: 3527041
dataset_size: 7356910
---
# Dataset Card for "jglue_jsquad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Brand24/mms | author: Brand24 | last modified: 2023-08-23T21:49:55Z | created: 2023-05-24T12:07:06Z | downloads: 22 | likes: 3 | tags: task_categories:text-classification, task_ids:sentiment-classification, annotations_creators:mixed, multilinguality:multi-lingual, size_categories:1M<n<10M, language:ar, language:bg, language:bs, language:cs, language:de, language:el, language:en, language:es, ... (tag list truncated in source)
---
annotations_creators:
- mixed
language:
- ar
- bg
- bs
- cs
- de
- el
- en
- es
- fa
- fr
- he
- hi
- hr
- hu
- it
- ja
- lv
- pl
- pt
- ru
- sk
- sl
- sq
- sr
- sv
- th
- ur
- zh
license:
- other
multilinguality:
- multi-lingual
size_categories:
- 1M<n<10M
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Massive-Multilingual-Sentiment
---
# Massive Multilingual Sentiment Corpora (MMS)
## Corpora Summary
Despite impressive advancements in multilingual corpora collection and model training, developing large-scale deployments of multilingual models still presents a significant challenge. This is particularly true for language tasks that are culture-dependent. One such example is the area of multilingual sentiment analysis, where affective markers can be subtle and deeply ensconced in culture.
This work presents the most extensive open massively multilingual corpus of datasets for training sentiment models. The corpus consists of 79 datasets, manually selected from over 350 datasets reported in the scientific literature based on strict quality criteria, and covers 27 languages. Datasets can be queried using several linguistic and functional features. In addition, we present a multi-faceted sentiment classification benchmark summarizing hundreds of experiments conducted on different base models, training objectives, dataset collections, and fine-tuning strategies.
More about dataset here [https://brand24-ai.github.io/mms_benchmark](https://brand24-ai.github.io/mms_benchmark).
## General licenses information
This is a library of open-source datasets that we gathered. We provide citations or links to the sources of these datasets. It is essential to mention that these datasets may have different licenses, and we encourage everybody to check the permissions of each dataset separately. This is critical because, for example, not all datasets are available for commercial purposes. It also ensures that proper consent and permissions are obtained for the use and curation of the data, respecting the rights and privacy of the individuals whose data is included in the datasets. Please cite our library and the authors of each dataset you use.
## Usage
```python
import datasets
# whole dataset will be downloaded and cached
mms_dataset = datasets.load_dataset("Brand24/mms")
# filter only texts in Polish
pl = mms_dataset.filter(lambda row: row['language'] == 'pl')
```
## Corpora statistics
### Per language
| language | label_name | count |
|:-----------|:-------------|--------:|
| ar | negative | 138899 |
| ar | neutral | 192774 |
| ar | positive | 600402 |
| bg | negative | 13930 |
| bg | neutral | 28657 |
| bg | positive | 19563 |
| bs | negative | 11974 |
| bs | neutral | 11145 |
| bs | positive | 13064 |
| cs | negative | 39674 |
| cs | neutral | 59200 |
| cs | positive | 97413 |
| de | negative | 104667 |
| de | neutral | 100071 |
| de | positive | 111149 |
| el | negative | 230 |
| el | neutral | 38 |
| el | positive | 232 |
| en | negative | 304939 |
| en | neutral | 290823 |
| en | positive | 1734724 |
| es | negative | 108733 |
| es | neutral | 122493 |
| es | positive | 187486 |
| fa | negative | 1602 |
| fa | neutral | 5091 |
| fa | positive | 6832 |
| fr | negative | 84187 |
| fr | neutral | 43245 |
| fr | positive | 83199 |
| he | negative | 2279 |
| he | neutral | 243 |
| he | positive | 6097 |
| hi | negative | 4992 |
| hi | neutral | 6392 |
| hi | positive | 5615 |
| hr | negative | 19757 |
| hr | neutral | 19470 |
| hr | positive | 38367 |
| hu | negative | 8974 |
| hu | neutral | 17621 |
| hu | positive | 30087 |
| it | negative | 4043 |
| it | neutral | 4193 |
| it | positive | 3829 |
| ja | negative | 83982 |
| ja | neutral | 41979 |
| ja | positive | 83819 |
| lv | negative | 1378 |
| lv | neutral | 2618 |
| lv | positive | 1794 |
| pl | negative | 77422 |
| pl | neutral | 62074 |
| pl | positive | 97192 |
| pt | negative | 56827 |
| pt | neutral | 55165 |
| pt | positive | 45842 |
| ru | negative | 31770 |
| ru | neutral | 48106 |
| ru | positive | 31054 |
| sk | negative | 14431 |
| sk | neutral | 12842 |
| sk | positive | 29350 |
| sl | negative | 33694 |
| sl | neutral | 50553 |
| sl | positive | 29296 |
| sq | negative | 6889 |
| sq | neutral | 14757 |
| sq | positive | 22638 |
| sr | negative | 25089 |
| sr | neutral | 32283 |
| sr | positive | 18996 |
| sv | negative | 16266 |
| sv | neutral | 13342 |
| sv | positive | 11738 |
| th | negative | 9326 |
| th | neutral | 28616 |
| th | positive | 34377 |
| ur | negative | 5239 |
| ur | neutral | 8585 |
| ur | positive | 5836 |
| zh | negative | 117967 |
| zh | neutral | 69016 |
| zh | positive | 144719 |
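Per-language totals can be derived directly from the table; for example, for Polish:

```python
# Label counts for Polish, taken from the per-language table above.
pl_counts = {"negative": 77422, "neutral": 62074, "positive": 97192}
pl_total = sum(pl_counts.values())
pl_share_positive = pl_counts["positive"] / pl_total  # roughly 41% positive
```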
## Dataset Structure
### Linguistic Typology
The field of language typology focuses on studying the similarities and differences among languages. These differences can be categorized into phonological (sounds), syntactic (structures), lexical (vocabulary), and theoretical aspects. Linguistic typology analyzes the current state of languages, contrasting with genealogical linguistics, which examines historical relationships between languages.
Genealogical linguistics studies language families and genera. A language family consists of languages that share a common ancestral language, while genera are branches within a language family. The Indo-European family, for example, includes genera such as Slavic, Romance, Germanic, and Indic. Over 7000 languages are categorized into approximately 150 language families, with Indo-European, Sino-Tibetan, Turkic, Afro-Asiatic, Nilo-Saharan, Niger-Congo, and Eskimo-Aleut being some of the largest families.
Within linguistic typology, languages are described using various linguistic features. Our work focuses on sentiment classification and selects ten relevant features:
- `text`: The feature text represents the actual text of the sentiment dataset. It is of type string and contains the text samples or sentences for sentiment analysis.
- `label`: The feature label corresponds to the sentiment labels of the text samples. It is of type ClassLabel and has three possible values: negative, neutral, and positive. These labels indicate the sentiment or emotional polarity associated with the text.
- `original_dataset`: The feature original_dataset refers to the name or identifier of the original dataset from which the text samples were extracted. It is of type string and provides information about the source dataset.
- `domain`: The feature domain represents the domain or topic of the sentiment dataset. It is of type string and provides context regarding the subject matter of the text samples.
- `language`: The feature language indicates the language of the text samples in the sentiment dataset. It is of type string and specifies the language in which the text is written.
- `Family`: The feature Family represents the language family to which a specific language belongs. It is of type string and provides information about the broader categorization of languages into language families.
- `Genus`: The feature Genus corresponds to the genus or branch within a language family. It is of type string and indicates the specific subgrouping of languages within a language family.
- `Definite article`: Half of the languages do not use the definite article, which signals uniqueness or definiteness of a concept.
- `Indefinite article`: Half of the languages do not use the indefinite article, with some languages using a separate article or the numeral "one."
- `Number of cases`: Languages vary greatly in the number of morphological cases used.
- `Order of subject, verb, and object`: Different languages have different word orderings, with variations like SOV, SVO, VSO, VOS, OVS, and OSV.
- `Negative morphemes`: Negative morphemes indicate clausal negation in declarative sentences.
- `Polar questions`: Questions with yes/no answers, which can be formed using question particles, interrogative morphology, or intonation.
- `Position of the negative morpheme`: The position of the negative morpheme can vary in relation to subjects and objects.
- `Prefixing vs. suffixing`: Languages differ in their use of prefixes and suffixes in inflectional morphology.
- `Coding of nominal plurals`: Plurals can be expressed through morphological changes or the use of plurality indicator morphemes.
- `Grammatical genders`: Languages vary in the number of grammatical genders used, or may not use the concept at all.
These language features are available as filtering options in our library. Users can download specific facets of the collection, such as datasets in Slavic languages with interrogative word order for polar questions or datasets from the Afro-Asiatic language family without morphological case-making.
### Usage
Code example for loading and filtering Slavic language in which polar questions are formed using the interrogative word order
```python
import datasets
mms_dataset = datasets.load_dataset("Brand24/mms")
slavic = mms_dataset.filter(lambda row: row["Genus"] == "Slavic" and row["Polar questions"] == "interrogative word order")
```
Filtering sentiment datasets from the Afro-Asiatic language family without morphological case-making
```python
afro_asiatic = mms_dataset.filter(lambda row: row["Family"] == "Afro-Asiatic" and row["Number of cases"] == "no morphological case-making")
```
## Dataset Creation
### Who are the source language producers?
The data comes from multiple papers and covers a large variety of languages. For the specific dataset information, please check out the companion paper.
### Annotations
Similarly, like for data producers, you should check papers that propose the specific datasets you are interested in.
#### Annotation process
We describe the annotation process for our internally created dataset included in this corpus.
## Considerations for Using the Data
### Social Impact and Limitations
The corpus is intended to bring more sentiment-annotated data to a wide variety of languages. The aim is to make large amounts of data available to lower-resource languages in order to facilitate the training of state-of-the-art ML models for sentiment analysis.
## Additional Information
### Dataset Curators
The corpus was put together by
- [@laugustyniak](https://www.linkedin.com/in/lukaszaugustyniak/)
- [@swozniak](https://www.linkedin.com/in/wscode/)
- [@mgruza](https://www.linkedin.com/in/marcin-gruza-276b2512b/)
- [@pgramacki](https://www.linkedin.com/in/piotrgramacki/)
- [@krajda](https://www.linkedin.com/in/krzysztof-rajda/)
- [@mmorzy](https://www.linkedin.com/in/mikolajmorzy/)
- [@tkajdanowicz](https://www.linkedin.com/in/kajdanowicz/)
### Licensing Information
These data are released under this licensing scheme.
We do not own any text from which these data and datasets have been extracted.
We license the actual packaging of these data under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
This work is published from Poland.
Should you consider that our data contains material that is owned by you and should, therefore not be reproduced here, please:
* Clearly identify yourself with detailed contact data such as an address, telephone number, or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material claimed to be infringing and the information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
### The main corpus citation
```bibtex
@misc{augustyniak2023massively,
title={Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark},
author={Łukasz Augustyniak and Szymon Woźniak and Marcin Gruza and Piotr Gramacki and Krzysztof Rajda and Mikołaj Morzy and Tomasz Kajdanowicz},
year={2023},
eprint={2306.07902},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### All datasets in corpus
[https://brand24-ai.github.io/mms_benchmark/citations.html](https://brand24-ai.github.io/mms_benchmark/citations.html)
## Acknowledgements
- BRAND24 - https://brand24.com
- CLARIN-PL-Biz - https://clarin.biz
| [embedding truncated] |
sihaochen/propsegment | sihaochen | 2023-05-26T18:18:53Z | 22 | 2 | null | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"NLP",
"Entailment",
"NLI",
"google-research-datasets",
"arxiv:2212.10750",
"region:us"
] | 2023-05-26T18:18:53Z | 2023-05-24T23:29:22.000Z | 2023-05-24T23:29:22 | ---
license: cc-by-4.0
task_categories:
- text-classification
- token-classification
- text-generation
language:
- en
tags:
- NLP
- Entailment
- NLI
- google-research-datasets
pretty_name: PropSegment
size_categories:
- 10K<n<100K
---
# PropSegmEnt: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition
## Dataset Description
- **Homepage:** https://github.com/google-research-datasets/PropSegmEnt
- **Repository:** https://github.com/google-research-datasets/PropSegmEnt
- **Paper:** https://arxiv.org/abs/2212.10750
- **Point of Contact:** sihaoc@seas.upenn.edu
### Dataset Summary
This is a reproduced (i.e. after web-crawling) and processed version of [the "PropSegment" dataset](https://github.com/google-research-datasets/PropSegmEnt) from Google Research.
Since the [`News`](https://github.com/google-research-datasets/NewSHead) portion of the dataset was released only via URLs, we reconstruct the dataset by crawling.
Overall, ~96% of the dataset can be reproduced; for the remaining ~4%, either the URL is no longer valid or the sentences have been edited (i.e. they cannot be aligned with the original dataset).
PropSegment (Proposition-level Segmentation and Entailment) is a large-scale, human annotated dataset for segmenting English text into propositions, and recognizing proposition-level entailment relations --- whether a different, related document entails each proposition, contradicts it, or neither.
The original dataset features >45k human annotated propositions, i.e. individual semantic units within sentences, as well as >35k entailment labels between propositions and documents.
Check out more details in the [dataset paper](https://arxiv.org/abs/2212.10750).
## Dataset Structure
Here we provide processed versions of the dataset for seq2seq model inputs/outputs.
`proposition_segmentation.*.jsonl` contains data for the text segmentation task, i.e. split a sentence into propositions.
The output propositions are concatenated as one string (with no particular order between them) by a special token `[SEP]`.
Each proposition is annotated as spans enclosed by `[M]` and `[/M]`.
```
{
"sentence": "This film marks the directorial debut for production designer Robert Stromberg.",
"propositions": "This film marks the directorial debut for [M]production designer Robert Stromberg.[/M][SEP]This [M]film marks the directorial debut for[/M] production designer [M]Robert Stromberg[/M]."
}
```
`propnli.*.jsonl` contains examples for the proposition-to-document entailment task, i.e. Given a proposition and a document, predict whether the proposition can be entailed/contradicted, or neutral with respect to the document.
```
{
"hypothesis": "[M]The Departed is[/M] a 2006 feature film [M]directed by Martin Scorsese.[/M]",
"premise": "The Departed is a 2006 American crime thriller film directed by Martin Scorsese and written by William Monahan. It starred Leonardo DiCaprio, Matt Damon, Jack Nicholson, and Mark Wahlberg, with Martin Sheen, Ray Winstone, Vera Farmiga, and Alec Baldwin in supporting roles. It is a remake of the Hong Kong film Infernal Affairs (2002).\nThe Departed won the Oscar for Best Picture at the 79th Academy Awards. Scorsese received the Oscar for Best Director, Thelma Schoonmaker the Oscar for Best Editing and William Monahan the Oscar for Best Adapted Screenplay.",
"label": "e"
}
```
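Both marker conventions in the examples above ([SEP] between propositions, [M]...[/M] around spans) can be undone with plain string operations. A minimal sketch; the helper names are our own and not part of the dataset tooling:

```python
# Sketch: parsing the seq2seq target strings described above back into
# propositions and their marked spans. Helper names are illustrative.
import re

def split_propositions(target: str) -> list[str]:
    # Propositions are concatenated with the special token [SEP].
    return target.split("[SEP]")

def marked_spans(proposition: str) -> list[str]:
    # Each proposition marks its spans with [M] ... [/M].
    return re.findall(r"\[M\](.*?)\[/M\]", proposition)

target = (
    "This film marks the directorial debut for "
    "[M]production designer Robert Stromberg.[/M]"
    "[SEP]This [M]film marks the directorial debut for[/M] "
    "production designer [M]Robert Stromberg[/M]."
)
propositions = split_propositions(target)
print(len(propositions))              # 2
print(marked_spans(propositions[1]))
```

The non-greedy `(.*?)` group keeps each match inside a single `[M]...[/M]` pair even when several spans occur in one proposition.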
### Citation
```
@inproceedings{chen2023propsegment,
title = "{PropSegmEnt}: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition",
author = "Chen, Sihao and Buthpitiya, Senaka and Fabrikant, Alex and Roth, Dan and Schuster, Tal",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
year = "2023",
}
```
| [embedding truncated] |
emozilla/booksum-summary-analysis_gptneox-8192 | emozilla | 2023-05-30T14:28:46Z | 22 | 7 | null | [
"region:us"
] | 2023-05-30T14:28:46Z | 2023-05-25T17:34:39.000Z | 2023-05-25T17:34:39 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 194097976.97925937
num_examples: 10659
- name: test
num_bytes: 25683201.043425813
num_examples: 1570
- name: validation
num_bytes: 35799607.99283796
num_examples: 1824
download_size: 92249754
dataset_size: 255580786.01552314
---
# Dataset Card for "booksum-summary-analysis-8192"
Subset of [emozilla/booksum-summary-analysis](https://huggingface.co/datasets/emozilla/booksum-summary-analysis) with only entries that are less than 8,192 tokens under the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
| [embedding truncated] |
Den4ikAI/ru_sberquad_long_answers | Den4ikAI | 2023-05-29T05:32:22Z | 22 | 5 | null | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:mit",
"region:us"
] | 2023-05-29T05:32:22Z | 2023-05-28T17:25:41.000Z | 2023-05-28T17:25:41 | ---
license: mit
task_categories:
- question-answering
- text2text-generation
language:
- ru
size_categories:
- 10K<n<100K
---
UPD 29.05.2023: Negative examples added.
A dataset for answering questions about a given text.
Generated with the Den4ikAI/FRED-T5-XL_instructor model.
Differences from sberquad, xquad, etc.:
1. Answers are not single-word; they are detailed and span several sentences
2. Not suitable for training encoder models!
| [embedding truncated] |
AhmedBou/NCSS_2023_Data_Analysis | AhmedBou | 2023-07-21T15:52:01Z | 22 | 0 | null | [
"task_categories:token-classification",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-07-21T15:52:01Z | 2023-06-02T14:39:28.000Z | 2023-06-02T14:39:28 | ---
license: apache-2.0
task_categories:
- token-classification
- text-generation
language:
- en
size_categories:
- n<1K
---
| [embedding truncated] |
truehealth/liveqa | truehealth | 2023-06-12T18:47:46Z | 22 | 0 | null | [
"region:us"
] | 2023-06-12T18:47:46Z | 2023-06-12T15:13:08.000Z | 2023-06-12T15:13:08 | ---
dataset_info:
features:
- name: questionid
dtype: string
- name: subject
dtype: string
- name: message
dtype: string
- name: focus
dtype: string
- name: type
dtype: string
- name: answerid
dtype: string
- name: pairid
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 888907
num_examples: 635
download_size: 429730
dataset_size: 888907
---
# Dataset Card for "liveqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| [embedding truncated] |
v2run/invoices-donut-data-v1 | v2run | 2023-06-15T08:31:03Z | 22 | 1 | null | [
"task_categories:feature-extraction",
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] | 2023-06-15T08:31:03Z | 2023-06-15T08:26:29.000Z | 2023-06-15T08:26:29 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 234024421
num_examples: 425
- name: test
num_bytes: 14512665
num_examples: 26
- name: validation
num_bytes: 27661738
num_examples: 50
download_size: 197512750
dataset_size: 276198824
license: mit
task_categories:
- feature-extraction
language:
- en
pretty_name: Sparrow Invoice Dataset
size_categories:
- n<1K
---
# Dataset Card for Invoices (Sparrow)
This dataset contains 500 invoice documents annotated and processed to be ready for Donut ML model fine-tuning.
Annotation and data preparation task was done by [Katana ML](https://www.katanaml.io) team.
[Sparrow](https://github.com/katanaml/sparrow/tree/main) - open-source data extraction solution by Katana ML.
Original dataset [info](https://data.mendeley.com/datasets/tnj49gpmtz): Kozłowski, Marek; Weichbroth, Paweł (2021), “Samples of electronic invoices”, Mendeley Data, V2, doi: 10.17632/tnj49gpmtz.2
| [embedding truncated] |
NightMachinery/ImageNet1K-val-indexed | NightMachinery | 2023-07-13T22:54:49Z | 22 | 0 | null | [
"region:us"
] | 2023-07-13T22:54:49Z | 2023-07-13T21:23:48.000Z | 2023-07-13T21:23:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': n01440764
'1': n01443537
'2': n01484850
'3': n01491361
'4': n01494475
'5': n01496331
'6': n01498041
'7': n01514668
'8': n01514859
'9': n01518878
'10': n01530575
'11': n01531178
'12': n01532829
'13': n01534433
'14': n01537544
'15': n01558993
'16': n01560419
'17': n01580077
'18': n01582220
'19': n01592084
'20': n01601694
'21': n01608432
'22': n01614925
'23': n01616318
'24': n01622779
'25': n01629819
'26': n01630670
'27': n01631663
'28': n01632458
'29': n01632777
'30': n01641577
'31': n01644373
'32': n01644900
'33': n01664065
'34': n01665541
'35': n01667114
'36': n01667778
'37': n01669191
'38': n01675722
'39': n01677366
'40': n01682714
'41': n01685808
'42': n01687978
'43': n01688243
'44': n01689811
'45': n01692333
'46': n01693334
'47': n01694178
'48': n01695060
'49': n01697457
'50': n01698640
'51': n01704323
'52': n01728572
'53': n01728920
'54': n01729322
'55': n01729977
'56': n01734418
'57': n01735189
'58': n01737021
'59': n01739381
'60': n01740131
'61': n01742172
'62': n01744401
'63': n01748264
'64': n01749939
'65': n01751748
'66': n01753488
'67': n01755581
'68': n01756291
'69': n01768244
'70': n01770081
'71': n01770393
'72': n01773157
'73': n01773549
'74': n01773797
'75': n01774384
'76': n01774750
'77': n01775062
'78': n01776313
'79': n01784675
'80': n01795545
'81': n01796340
'82': n01797886
'83': n01798484
'84': n01806143
'85': n01806567
'86': n01807496
'87': n01817953
'88': n01818515
'89': n01819313
'90': n01820546
'91': n01824575
'92': n01828970
'93': n01829413
'94': n01833805
'95': n01843065
'96': n01843383
'97': n01847000
'98': n01855032
'99': n01855672
'100': n01860187
'101': n01871265
'102': n01872401
'103': n01873310
'104': n01877812
'105': n01882714
'106': n01883070
'107': n01910747
'108': n01914609
'109': n01917289
'110': n01924916
'111': n01930112
'112': n01943899
'113': n01944390
'114': n01945685
'115': n01950731
'116': n01955084
'117': n01968897
'118': n01978287
'119': n01978455
'120': n01980166
'121': n01981276
'122': n01983481
'123': n01984695
'124': n01985128
'125': n01986214
'126': n01990800
'127': n02002556
'128': n02002724
'129': n02006656
'130': n02007558
'131': n02009229
'132': n02009912
'133': n02011460
'134': n02012849
'135': n02013706
'136': n02017213
'137': n02018207
'138': n02018795
'139': n02025239
'140': n02027492
'141': n02028035
'142': n02033041
'143': n02037110
'144': n02051845
'145': n02056570
'146': n02058221
'147': n02066245
'148': n02071294
'149': n02074367
'150': n02077923
'151': n02085620
'152': n02085782
'153': n02085936
'154': n02086079
'155': n02086240
'156': n02086646
'157': n02086910
'158': n02087046
'159': n02087394
'160': n02088094
'161': n02088238
'162': n02088364
'163': n02088466
'164': n02088632
'165': n02089078
'166': n02089867
'167': n02089973
'168': n02090379
'169': n02090622
'170': n02090721
'171': n02091032
'172': n02091134
'173': n02091244
'174': n02091467
'175': n02091635
'176': n02091831
'177': n02092002
'178': n02092339
'179': n02093256
'180': n02093428
'181': n02093647
'182': n02093754
'183': n02093859
'184': n02093991
'185': n02094114
'186': n02094258
'187': n02094433
'188': n02095314
'189': n02095570
'190': n02095889
'191': n02096051
'192': n02096177
'193': n02096294
'194': n02096437
'195': n02096585
'196': n02097047
'197': n02097130
'198': n02097209
'199': n02097298
'200': n02097474
'201': n02097658
'202': n02098105
'203': n02098286
'204': n02098413
'205': n02099267
'206': n02099429
'207': n02099601
'208': n02099712
'209': n02099849
'210': n02100236
'211': n02100583
'212': n02100735
'213': n02100877
'214': n02101006
'215': n02101388
'216': n02101556
'217': n02102040
'218': n02102177
'219': n02102318
'220': n02102480
'221': n02102973
'222': n02104029
'223': n02104365
'224': n02105056
'225': n02105162
'226': n02105251
'227': n02105412
'228': n02105505
'229': n02105641
'230': n02105855
'231': n02106030
'232': n02106166
'233': n02106382
'234': n02106550
'235': n02106662
'236': n02107142
'237': n02107312
'238': n02107574
'239': n02107683
'240': n02107908
'241': n02108000
'242': n02108089
'243': n02108422
'244': n02108551
'245': n02108915
'246': n02109047
'247': n02109525
'248': n02109961
'249': n02110063
'250': n02110185
'251': n02110341
'252': n02110627
'253': n02110806
'254': n02110958
'255': n02111129
'256': n02111277
'257': n02111500
'258': n02111889
'259': n02112018
'260': n02112137
'261': n02112350
'262': n02112706
'263': n02113023
'264': n02113186
'265': n02113624
'266': n02113712
'267': n02113799
'268': n02113978
'269': n02114367
'270': n02114548
'271': n02114712
'272': n02114855
'273': n02115641
'274': n02115913
'275': n02116738
'276': n02117135
'277': n02119022
'278': n02119789
'279': n02120079
'280': n02120505
'281': n02123045
'282': n02123159
'283': n02123394
'284': n02123597
'285': n02124075
'286': n02125311
'287': n02127052
'288': n02128385
'289': n02128757
'290': n02128925
'291': n02129165
'292': n02129604
'293': n02130308
'294': n02132136
'295': n02133161
'296': n02134084
'297': n02134418
'298': n02137549
'299': n02138441
'300': n02165105
'301': n02165456
'302': n02167151
'303': n02168699
'304': n02169497
'305': n02172182
'306': n02174001
'307': n02177972
'308': n02190166
'309': n02206856
'310': n02219486
'311': n02226429
'312': n02229544
'313': n02231487
'314': n02233338
'315': n02236044
'316': n02256656
'317': n02259212
'318': n02264363
'319': n02268443
'320': n02268853
'321': n02276258
'322': n02277742
'323': n02279972
'324': n02280649
'325': n02281406
'326': n02281787
'327': n02317335
'328': n02319095
'329': n02321529
'330': n02325366
'331': n02326432
'332': n02328150
'333': n02342885
'334': n02346627
'335': n02356798
'336': n02361337
'337': n02363005
'338': n02364673
'339': n02389026
'340': n02391049
'341': n02395406
'342': n02396427
'343': n02397096
'344': n02398521
'345': n02403003
'346': n02408429
'347': n02410509
'348': n02412080
'349': n02415577
'350': n02417914
'351': n02422106
'352': n02422699
'353': n02423022
'354': n02437312
'355': n02437616
'356': n02441942
'357': n02442845
'358': n02443114
'359': n02443484
'360': n02444819
'361': n02445715
'362': n02447366
'363': n02454379
'364': n02457408
'365': n02480495
'366': n02480855
'367': n02481823
'368': n02483362
'369': n02483708
'370': n02484975
'371': n02486261
'372': n02486410
'373': n02487347
'374': n02488291
'375': n02488702
'376': n02489166
'377': n02490219
'378': n02492035
'379': n02492660
'380': n02493509
'381': n02493793
'382': n02494079
'383': n02497673
'384': n02500267
'385': n02504013
'386': n02504458
'387': n02509815
'388': n02510455
'389': n02514041
'390': n02526121
'391': n02536864
'392': n02606052
'393': n02607072
'394': n02640242
'395': n02641379
'396': n02643566
'397': n02655020
'398': n02666196
'399': n02667093
'400': n02669723
'401': n02672831
'402': n02676566
'403': n02687172
'404': n02690373
'405': n02692877
'406': n02699494
'407': n02701002
'408': n02704792
'409': n02708093
'410': n02727426
'411': n02730930
'412': n02747177
'413': n02749479
'414': n02769748
'415': n02776631
'416': n02777292
'417': n02782093
'418': n02783161
'419': n02786058
'420': n02787622
'421': n02788148
'422': n02790996
'423': n02791124
'424': n02791270
'425': n02793495
'426': n02794156
'427': n02795169
'428': n02797295
'429': n02799071
'430': n02802426
'431': n02804414
'432': n02804610
'433': n02807133
'434': n02808304
'435': n02808440
'436': n02814533
'437': n02814860
'438': n02815834
'439': n02817516
'440': n02823428
'441': n02823750
'442': n02825657
'443': n02834397
'444': n02835271
'445': n02837789
'446': n02840245
'447': n02841315
'448': n02843684
'449': n02859443
'450': n02860847
'451': n02865351
'452': n02869837
'453': n02870880
'454': n02871525
'455': n02877765
'456': n02879718
'457': n02883205
'458': n02892201
'459': n02892767
'460': n02894605
'461': n02895154
'462': n02906734
'463': n02909870
'464': n02910353
'465': n02916936
'466': n02917067
'467': n02927161
'468': n02930766
'469': n02939185
'470': n02948072
'471': n02950826
'472': n02951358
'473': n02951585
'474': n02963159
'475': n02965783
'476': n02966193
'477': n02966687
'478': n02971356
'479': n02974003
'480': n02977058
'481': n02978881
'482': n02979186
'483': n02980441
'484': n02981792
'485': n02988304
'486': n02992211
'487': n02992529
'488': n02999410
'489': n03000134
'490': n03000247
'491': n03000684
'492': n03014705
'493': n03016953
'494': n03017168
'495': n03018349
'496': n03026506
'497': n03028079
'498': n03032252
'499': n03041632
'500': n03042490
'501': n03045698
'502': n03047690
'503': n03062245
'504': n03063599
'505': n03063689
'506': n03065424
'507': n03075370
'508': n03085013
'509': n03089624
'510': n03095699
'511': n03100240
'512': n03109150
'513': n03110669
'514': n03124043
'515': n03124170
'516': n03125729
'517': n03126707
'518': n03127747
'519': n03127925
'520': n03131574
'521': n03133878
'522': n03134739
'523': n03141823
'524': n03146219
'525': n03160309
'526': n03179701
'527': n03180011
'528': n03187595
'529': n03188531
'530': n03196217
'531': n03197337
'532': n03201208
'533': n03207743
'534': n03207941
'535': n03208938
'536': n03216828
'537': n03218198
'538': n03220513
'539': n03223299
'540': n03240683
'541': n03249569
'542': n03250847
'543': n03255030
'544': n03259280
'545': n03271574
'546': n03272010
'547': n03272562
'548': n03290653
'549': n03291819
'550': n03297495
'551': n03314780
'552': n03325584
'553': n03337140
'554': n03344393
'555': n03345487
'556': n03347037
'557': n03355925
'558': n03372029
'559': n03376595
'560': n03379051
'561': n03384352
'562': n03388043
'563': n03388183
'564': n03388549
'565': n03393912
'566': n03394916
'567': n03400231
'568': n03404251
'569': n03417042
'570': n03424325
'571': n03425413
'572': n03443371
'573': n03444034
'574': n03445777
'575': n03445924
'576': n03447447
'577': n03447721
'578': n03450230
'579': n03452741
'580': n03457902
'581': n03459775
'582': n03461385
'583': n03467068
'584': n03476684
'585': n03476991
'586': n03478589
'587': n03481172
'588': n03482405
'589': n03483316
'590': n03485407
'591': n03485794
'592': n03492542
'593': n03494278
'594': n03495258
'595': n03496892
'596': n03498962
'597': n03527444
'598': n03529860
'599': n03530642
'600': n03532672
'601': n03534580
'602': n03535780
'603': n03538406
'604': n03544143
'605': n03584254
'606': n03584829
'607': n03590841
'608': n03594734
'609': n03594945
'610': n03595614
'611': n03598930
'612': n03599486
'613': n03602883
'614': n03617480
'615': n03623198
'616': n03627232
'617': n03630383
'618': n03633091
'619': n03637318
'620': n03642806
'621': n03649909
'622': n03657121
'623': n03658185
'624': n03661043
'625': n03662601
'626': n03666591
'627': n03670208
'628': n03673027
'629': n03676483
'630': n03680355
'631': n03690938
'632': n03691459
'633': n03692522
'634': n03697007
'635': n03706229
'636': n03709823
'637': n03710193
'638': n03710637
'639': n03710721
'640': n03717622
'641': n03720891
'642': n03721384
'643': n03724870
'644': n03729826
'645': n03733131
'646': n03733281
'647': n03733805
'648': n03742115
'649': n03743016
'650': n03759954
'651': n03761084
'652': n03763968
'653': n03764736
'654': n03769881
'655': n03770439
'656': n03770679
'657': n03773504
'658': n03775071
'659': n03775546
'660': n03776460
'661': n03777568
'662': n03777754
'663': n03781244
'664': n03782006
'665': n03785016
'666': n03786901
'667': n03787032
'668': n03788195
'669': n03788365
'670': n03791053
'671': n03792782
'672': n03792972
'673': n03793489
'674': n03794056
'675': n03796401
'676': n03803284
'677': n03804744
'678': n03814639
'679': n03814906
'680': n03825788
'681': n03832673
'682': n03837869
'683': n03838899
'684': n03840681
'685': n03841143
'686': n03843555
'687': n03854065
'688': n03857828
'689': n03866082
'690': n03868242
'691': n03868863
'692': n03871628
'693': n03873416
'694': n03874293
'695': n03874599
'696': n03876231
'697': n03877472
'698': n03877845
'699': n03884397
'700': n03887697
'701': n03888257
'702': n03888605
'703': n03891251
'704': n03891332
'705': n03895866
'706': n03899768
'707': n03902125
'708': n03903868
'709': n03908618
'710': n03908714
'711': n03916031
'712': n03920288
'713': n03924679
'714': n03929660
'715': n03929855
'716': n03930313
'717': n03930630
'718': n03933933
'719': n03935335
'720': n03937543
'721': n03938244
'722': n03942813
'723': n03944341
'724': n03947888
'725': n03950228
'726': n03954731
'727': n03956157
'728': n03958227
'729': n03961711
'730': n03967562
'731': n03970156
'732': n03976467
'733': n03976657
'734': n03977966
'735': n03980874
'736': n03982430
'737': n03983396
'738': n03991062
'739': n03992509
'740': n03995372
'741': n03998194
'742': n04004767
'743': n04005630
'744': n04008634
'745': n04009552
'746': n04019541
'747': n04023962
'748': n04026417
'749': n04033901
'750': n04033995
'751': n04037443
'752': n04039381
'753': n04040759
'754': n04041544
'755': n04044716
'756': n04049303
'757': n04065272
'758': n04067472
'759': n04069434
'760': n04070727
'761': n04074963
'762': n04081281
'763': n04086273
'764': n04090263
'765': n04099969
'766': n04111531
'767': n04116512
'768': n04118538
'769': n04118776
'770': n04120489
'771': n04125021
'772': n04127249
'773': n04131690
'774': n04133789
'775': n04136333
'776': n04141076
'777': n04141327
'778': n04141975
'779': n04146614
'780': n04147183
'781': n04149813
'782': n04152593
'783': n04153751
'784': n04154565
'785': n04162706
'786': n04179913
'787': n04192698
'788': n04200800
'789': n04201297
'790': n04204238
'791': n04204347
'792': n04208210
'793': n04209133
'794': n04209239
'795': n04228054
'796': n04229816
'797': n04235860
'798': n04238763
'799': n04239074
'800': n04243546
'801': n04251144
'802': n04252077
'803': n04252225
'804': n04254120
'805': n04254680
'806': n04254777
'807': n04258138
'808': n04259630
'809': n04263257
'810': n04264628
'811': n04265275
'812': n04266014
'813': n04270147
'814': n04273569
'815': n04275548
'816': n04277352
'817': n04285008
'818': n04286575
'819': n04296562
'820': n04310018
'821': n04311004
'822': n04311174
'823': n04317175
'824': n04325704
'825': n04326547
'826': n04328186
'827': n04330267
'828': n04332243
'829': n04335435
'830': n04336792
'831': n04344873
'832': n04346328
'833': n04347754
'834': n04350905
'835': n04355338
'836': n04355933
'837': n04356056
'838': n04357314
'839': n04366367
'840': n04367480
'841': n04370456
'842': n04371430
'843': n04371774
'844': n04372370
'845': n04376876
'846': n04380533
'847': n04389033
'848': n04392985
'849': n04398044
'850': n04399382
'851': n04404412
'852': n04409515
'853': n04417672
'854': n04418357
'855': n04423845
'856': n04428191
'857': n04429376
'858': n04435653
'859': n04442312
'860': n04443257
'861': n04447861
'862': n04456115
'863': n04458633
'864': n04461696
'865': n04462240
'866': n04465501
'867': n04467665
'868': n04476259
'869': n04479046
'870': n04482393
'871': n04483307
'872': n04485082
'873': n04486054
'874': n04487081
'875': n04487394
'876': n04493381
'877': n04501370
'878': n04505470
'879': n04507155
'880': n04509417
'881': n04515003
'882': n04517823
'883': n04522168
'884': n04523525
'885': n04525038
'886': n04525305
'887': n04532106
'888': n04532670
'889': n04536866
'890': n04540053
'891': n04542943
'892': n04548280
'893': n04548362
'894': n04550184
'895': n04552348
'896': n04553703
'897': n04554684
'898': n04557648
'899': n04560804
'900': n04562935
'901': n04579145
'902': n04579432
'903': n04584207
'904': n04589890
'905': n04590129
'906': n04591157
'907': n04591713
'908': n04592741
'909': n04596742
'910': n04597913
'911': n04599235
'912': n04604644
'913': n04606251
'914': n04612504
'915': n04613696
'916': n06359193
'917': n06596364
'918': n06785654
'919': n06794110
'920': n06874185
'921': n07248320
'922': n07565083
'923': n07579787
'924': n07583066
'925': n07584110
'926': n07590611
'927': n07613480
'928': n07614500
'929': n07615774
'930': n07684084
'931': n07693725
'932': n07695742
'933': n07697313
'934': n07697537
'935': n07711569
'936': n07714571
'937': n07714990
'938': n07715103
'939': n07716358
'940': n07716906
'941': n07717410
'942': n07717556
'943': n07718472
'944': n07718747
'945': n07720875
'946': n07730033
'947': n07734744
'948': n07742313
'949': n07745940
'950': n07747607
'951': n07749582
'952': n07753113
'953': n07753275
'954': n07753592
'955': n07754684
'956': n07760859
'957': n07768694
'958': n07802026
'959': n07831146
'960': n07836838
'961': n07860988
'962': n07871810
'963': n07873807
'964': n07875152
'965': n07880968
'966': n07892512
'967': n07920052
'968': n07930864
'969': n07932039
'970': n09193705
'971': n09229709
'972': n09246464
'973': n09256479
'974': n09288635
'975': n09332890
'976': n09399592
'977': n09421951
'978': n09428293
'979': n09468604
'980': n09472597
'981': n09835506
'982': n10148035
'983': n10565667
'984': n11879895
'985': n11939491
'986': n12057211
'987': n12144580
'988': n12267677
'989': n12620546
'990': n12768682
'991': n12985857
'992': n12998815
'993': n13037406
'994': n13040303
'995': n13044778
'996': n13052670
'997': n13054560
'998': n13133613
'999': n15075141
- name: id
dtype: int64
splits:
- name: train
num_bytes: 6633504145.375
num_examples: 49101
download_size: 6622641479
dataset_size: 6633504145.375
---
# Dataset Card for "ImageNet1K-val-indexed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| [embedding truncated] |
andersonbcdefg/math | andersonbcdefg | 2023-07-21T01:39:49Z | 22 | 5 | null | [
"region:us"
] | 2023-07-21T01:39:49Z | 2023-07-21T01:39:10.000Z | 2023-07-21T01:39:10 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 75291197
num_examples: 50000
download_size: 35174383
dataset_size: 75291197
---
# Dataset Card for "math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| [embedding truncated] |
lapki/perekrestok-reviews | lapki | 2023-07-28T13:01:25Z | 22 | 0 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"reviews",
"region:us"
] | 2023-07-28T13:01:25Z | 2023-07-28T12:13:22.000Z | 2023-07-28T12:13:22 | ---
task_categories:
- text-classification
- text-generation
language:
- ru
tags:
- reviews
size_categories:
- 100K<n<1M
pretty_name: Dataset of user reviews from "Перекрёсток/Perekrestok" shop.
---
### Dataset
Dataset of user reviews from "Перекрёсток/Perekrestok" shop.
### Dataset Format
The dataset is in JSON Lines format. Field descriptions:
- `product_id` - Product internal ID (https://www.perekrestok.ru/cat/1/p/ID)
- `product_name` - Product name
- `product_category` - Category of product
- `product_price` - Product price in RUB (decimal)
- `review_id` - Review internal ID
- `review_author` - Author of review
- `review_text` - Text of review
- `rating` - Review rating (decimal, from 0.0 to 5.0)
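Each line of the file is one JSON object with the fields listed above. A minimal reading sketch; the sample record below is illustrative, not an actual entry from the dataset:

```python
# Sketch: parsing one review record from the JSON Lines file.
# The sample record is made up for illustration only.
import json

sample_line = json.dumps({
    "product_id": 12345,
    "product_name": "Example product",
    "product_category": "Groceries",
    "product_price": "89.90",
    "review_id": 1,
    "review_author": "user42",
    "review_text": "Great product, would buy again.",
    "rating": 5.0,
})

def parse_review(line: str) -> dict:
    # One JSON object per line, as in any JSON Lines file.
    return json.loads(line)

review = parse_review(sample_line)
print(review["rating"])  # 5.0
```

In practice you would iterate over the file line by line (`for line in open(path)`) and call the parser on each line.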
| [embedding truncated] |
jeffnyman/emotions | jeffnyman | 2023-07-29T18:10:20Z | 22 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"emotion-classification",
"region:us"
] | 2023-07-29T18:10:20Z | 2023-07-29T16:18:01.000Z | 2023-07-29T16:18:01 | ---
pretty_name: Emotions
license: cc-by-sa-4.0
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- multi-class-classification
tags:
- emotion-classification
dataset_info:
- config_name: split
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
"0": sadness
"1": joy
"2": love
"3": anger
"4": fear
"5": surprise
splits:
- name: train
num_bytes: 1741597
num_examples: 16000
- name: validation
num_bytes: 214703
num_examples: 2000
- name: test
num_bytes: 217181
num_examples: 2000
download_size: 740883
dataset_size: 2173481
- config_name: unsplit
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
"0": sadness
"1": joy
"2": love
"3": anger
"4": fear
"5": surprise
splits:
- name: train
num_bytes: 45445685
num_examples: 416809
download_size: 15388281
dataset_size: 45445685
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "emotions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper:** [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404/)
- **Size of downloaded dataset files:** 16.13 MB
- **Size of the generated dataset:** 47.62 MB
- **Total amount of disk used:** 63.75 MB
### Dataset Summary
Emotions is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper. Note that the paper also covers a larger dataset in which eight emotions are considered.
## Dataset Structure
### Data Instances
An example bit of data looks like this:
```
{
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
"label": 0
}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
### Data Splits
The dataset has two configurations.
- split: with a total of 20,000 examples split into train, validation and test.
- unsplit: with a total of 416,809 examples in a single train split.
| name | train | validation | test |
| ------- | -----: | ---------: | ---: |
| split | 16000 | 2000 | 2000 |
| unsplit | 416809 | n/a | n/a |
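As a sketch of working with the labels, the integer ids can be mapped back to names using the `class_label` order given in the card metadata above:

```python
# Label ids follow the card's class_label order.
LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def label_name(label_id: int) -> str:
    return LABELS[label_id]

example = {
    "text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
    "label": 0,
}
print(label_name(example["label"]))  # sadness
```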
## Additional Information
### Licensing Information
The dataset should be used for educational and research purposes only. It is licensed under Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).
### Citation Information
If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```
| [
-0.31385692954063416,
-0.5481840372085571,
0.300740510225296,
0.3914082646369934,
-0.4862293303012848,
-0.12215720117092133,
-0.2966028153896332,
-0.49597904086112976,
0.4402388632297516,
-0.008980195969343185,
-0.5909491181373596,
-0.8867982029914856,
-0.7277519106864929,
0.40112251043319... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Gideonah/egw_800 | Gideonah | 2023-08-13T12:03:35Z | 22 | 0 | null | [
"region:us"
] | 2023-08-13T12:03:35Z | 2023-08-11T09:32:11.000Z | 2023-08-11T09:32:11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5149287
num_examples: 3416
download_size: 2496111
dataset_size: 5149287
---
# Dataset Card for "egw_800"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6633880138397217,
-0.3774105906486511,
0.27335646748542786,
-0.09326314926147461,
-0.21608583629131317,
-0.26681655645370483,
0.4555307924747467,
-0.32976704835891724,
0.9377977848052979,
0.40701085329055786,
-0.7365965247154236,
-0.6866676807403564,
-0.593863308429718,
-0.0366632193326... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ds4sd/DocLayNet-v1.1 | ds4sd | 2023-09-01T09:58:52Z | 22 | 3 | null | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"license:other",
"layout-segmentation",
"COCO",
"document-understanding",
"PDF",
"region:us"
] | 2023-09-01T09:58:52Z | 2023-08-17T13:10:53.000Z | 2023-08-17T13:10:53 | ---
annotations_creators:
- crowdsourced
license: other
pretty_name: DocLayNet
size_categories:
- 10K<n<100K
tags:
- layout-segmentation
- COCO
- document-understanding
- PDF
task_categories:
- object-detection
- image-segmentation
task_ids:
- instance-segmentation
dataset_info:
features:
- name: image
dtype: image
- name: bboxes
sequence:
sequence: float64
- name: category_id
sequence: int64
- name: segmentation
sequence:
sequence:
sequence: float64
- name: area
sequence: float64
- name: pdf_cells
list:
list:
- name: bbox
sequence: float64
- name: font
struct:
- name: color
sequence: int64
- name: name
dtype: string
- name: size
dtype: float64
- name: text
dtype: string
- name: metadata
struct:
- name: coco_height
dtype: int64
- name: coco_width
dtype: int64
- name: collection
dtype: string
- name: doc_category
dtype: string
- name: image_id
dtype: int64
- name: num_pages
dtype: int64
- name: original_filename
dtype: string
- name: original_height
dtype: float64
- name: original_width
dtype: float64
- name: page_hash
dtype: string
- name: page_no
dtype: int64
splits:
- name: train
num_bytes: 28172005254.125
num_examples: 69375
- name: test
num_bytes: 1996179229.125
num_examples: 4999
- name: val
num_bytes: 2493896901.875
num_examples: 6489
download_size: 7766115331
dataset_size: 32662081385.125
---
# Dataset Card for DocLayNet v1.1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and of an upper bound on the prediction accuracy achievable by ML models
5. *Pre-defined train, test, and validation sets*: DocLayNet provides fixed splits for each to ensure proportional representation of the class labels and to avoid leakage of unique layout styles across the sets.
## Dataset Structure
This dataset is structured differently from the other repository [ds4sd/DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet), as this one includes the content (PDF cells) of the detections, and abandons the COCO format.
* `image`: page PIL image.
* `bboxes`: a list of layout bounding boxes.
* `category_id`: a list of class ids corresponding to the bounding boxes.
* `segmentation`: a list of layout segmentation polygons.
* `pdf_cells`: a list of lists corresponding to `bboxes`. Each inner list contains the PDF cells (content) inside the corresponding bbox.
* `metadata`: page- and document-level metadata.
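As a minimal sketch (the sample dict here is a hypothetical miniature following the `pdf_cells` schema above), the cell contents of each detection can be joined into one text string per layout box:

```python
def detection_texts(sample):
    """Join the text of the PDF cells inside each detected layout box."""
    return [" ".join(cell["text"] for cell in cells)
            for cells in sample["pdf_cells"]]

# Hypothetical miniature sample following the schema above:
sample = {"pdf_cells": [[{"text": "Hello"}, {"text": "world"}],
                        [{"text": "Table 1"}]]}
print(detection_texts(sample))  # ['Hello world', 'Table 1']
```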
Bounding boxes classes / categories:
```
1: Caption
2: Footnote
3: Formula
4: List-item
5: Page-footer
6: Page-header
7: Picture
8: Section-header
9: Table
10: Text
11: Title
```
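The id-to-label mapping above can be expressed directly as a small lookup (ids are 1-based; the names are copied from the list above):

```python
# Class ids are 1-based, as listed above.
CATEGORIES = {
    1: "Caption", 2: "Footnote", 3: "Formula", 4: "List-item",
    5: "Page-footer", 6: "Page-header", 7: "Picture",
    8: "Section-header", 9: "Table", 10: "Text", 11: "Title",
}

def names_for(category_ids):
    """Translate a sample's `category_id` list into label names."""
    return [CATEGORIES[i] for i in category_ids]

print(names_for([8, 10, 9]))  # ['Section-header', 'Text', 'Table']
```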
The `["metadata"]["doc_category"]` field uses one of the following constants:
```
* financial_reports,
* scientific_articles,
* laws_and_regulations,
* government_tenders,
* manuals,
* patents
```
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.353904},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
``` | [
-0.5649354457855225,
-0.3605087697505951,
0.439028263092041,
0.060934972018003464,
-0.17459359765052795,
-0.1572437584400177,
0.03691164404153824,
-0.2810976207256317,
0.2698991596698761,
0.5281053185462952,
-0.4757388234138489,
-0.8990421295166016,
-0.5070183277130127,
0.00705430237576365... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SaiedAlshahrani/Wikipedia-Corpora-Report | SaiedAlshahrani | 2023-10-30T09:44:27Z | 22 | 0 | null | [
"size_categories:1K<n<10K",
"license:mit",
"region:us"
] | 2023-10-30T09:44:27Z | 2023-08-19T02:28:29.000Z | 2023-08-19T02:28:29 | ---
license: mit
pretty_name: Wikipedia-Corpora-Report
size_categories:
- 1K<n<10K
---
# Dataset Card for "Wikipedia-Corpora-Report"
This dataset is used as a metadata database for the online [WIKIPEDIA CORPORA META REPORT](https://wikipedia-corpora-report.streamlit.app/) dashboard that illustrates how humans and bots generate or edit Wikipedia editions and provides metrics for “pages” and “edits” for all Wikipedia editions (320 languages). The “pages” metric counts articles and non-articles, while the “edits” metric tallies edits on articles and non-articles, all categorized by contributor type: humans or bots. The metadata is downloaded from [Wikimedia Statistics](https://stats.wikimedia.org/#/all-projects), then processed and uploaded to the Hugging Face Hub as a dataset.
For more details about the dataset, please **read** and **cite** our paper:
```bash
@inproceedings{alshahrani-etal-2023-implications,
title = "{{Performance Implications of Using Unrepresentative Corpora in Arabic Natural Language Processing}}",
author = "Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna",
booktitle = "Proceedings of the The First Arabic Natural Language Processing Conference (ArabicNLP 2023)",
month = dec,
year = "2023",
address = "Singapore (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://webspace.clarkson.edu/~alshahsf/unrepresentative_corpora.pdf",
doi = "#################",
pages = "###--###",
abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.",
}
| [
-0.6932079792022705,
-0.43593457341194153,
0.01719684526324272,
0.08390465378761292,
-0.31908315420150757,
0.12337212264537811,
-0.5603105425834656,
-0.7476679682731628,
0.2082926630973816,
0.33056220412254333,
-0.34431275725364685,
-0.725936233997345,
-0.5826412439346313,
0.52344685792922... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ymoslem/Law-StackExchange | ymoslem | 2023-08-20T17:25:54Z | 22 | 8 | null | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"legal",
"region:us"
] | 2023-08-20T17:25:54Z | 2023-08-20T16:54:45.000Z | 2023-08-20T16:54:45 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
- text-classification
- sentence-similarity
language:
- en
tags:
- legal
pretty_name: Law Stack Exchange Questions and Answers
size_categories:
- 10K<n<100K
---
All StackExchange legal questions and their answers from the Law site, up to 14 August 2023. The repository includes a notebook for the process using the official StackExchange API. | [
-0.6171919703483582,
-0.6851475238800049,
0.8343748450279236,
0.5531277656555176,
-0.2564294636249542,
-0.5224117636680603,
0.19956853985786438,
-0.8743064403533936,
0.3420015871524811,
1.0044231414794922,
-0.6715635061264038,
-0.04996681213378906,
-0.19737152755260468,
0.07101764529943466... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NTQAI/sharegpt-clean-ja | NTQAI | 2023-08-22T16:19:47Z | 22 | 1 | null | [
"region:us"
] | 2023-08-22T16:19:47Z | 2023-08-22T16:15:31.000Z | 2023-08-22T16:15:31 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
neovalle/H4rmony | neovalle | 2023-11-10T13:22:00Z | 22 | 3 | null | [
"task_categories:reinforcement-learning",
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"Ecolinguistics",
"Sustainability",
"ecolinguistic",
"environment",
"doi:10.57967/hf/1148",
"region:us"
] | 2023-11-10T13:22:00Z | 2023-09-02T18:39:29.000Z | 2023-09-02T18:39:29 | ---
license: cc-by-4.0
task_categories:
- reinforcement-learning
- text-classification
- question-answering
language:
- en
tags:
- Ecolinguistics
- Sustainability
- ecolinguistic
- environment
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset H4rmony
### Dataset Summary
The H4rmony dataset is a collection of prompts and completions aimed at integrating ecolinguistic principles into AI Large Language Models (LLMs).
Developed with collaborative efforts from ecolinguistics enthusiasts and experts, it offers a series of prompts and corresponding pairwise responses
ranked in terms of environmental awareness and alignment. This ranking provides a clear metric for the desired alignment and establishes a framework for LLM fine-tuning, particularly in reinforcement learning,
via a reward model.
This dataset aims to bridge the gap between AI and ecolinguistic values,
pushing the envelope for creating generative AI models that are environmentally and sustainability aware by design.
H4rmony is not just a dataset; it's a project towards harmonising AI with nature by means of fine-tuning.
We believe in the potential of using ecolinguistics to fine-tune and influence LLMs towards more eco-aware outputs.
This dataset is currently a work in progress.
### Languages
Currently only English but will be extended to multi-lingual.
## Dataset Structure
### Data Fields

### Ecological Issues - Codes meaning
This table show the meaning of the codes used for the ecological issues classification as well as examples of their manifestation
and their relation to 17 sustainable development goals defined by UNEP.

### Data Splits
There are no predefined splits in the dataset. Splits can be created when loading it:
dataset = (load_dataset('neovalle/H4rmony', split='train').train_test_split(test_size=0.2))
## Dataset Creation
### Curation Rationale
Given the multidisciplinary nature of the challenge, the H4rmony dataset is being enriched by contributions from environmentalists, AI specialists, and ecolinguistics enthusiasts.
This collective effort ensures the data is both technically sound and ecologically meaningful.
### Source Data
#### Initial Data Collection and Normalization
The core of the H4rmony dataset originated from active collaborations within the ecolinguistics community.
Contributors were asked to submit prompts that would help uncover AI models' alignment with ecolinguistic values.
A number of prompts and completions were AI-generated using prompt engineering.
Human-crafted prompts were then added to this initial group.
### Personal and Sensitive Information
This dataset doesn't contain sensitive information.
## Considerations for Using the Data
This dataset is still under construction and it might contain offensive language.
### Social Impact of Dataset
The H4rmony project aims to help AI LLMs to give priority to the crucial importance of environmental consciousness.
By serving as the fourth "H", "Harmony with nature", it complements the existing triad of Helpfulness, Honesty, and Harmlessness already well known in ethical AI development.
The following models have been fine tuned using H4rmony Dataset:
https://huggingface.co/neovalle/H4rmoniousCaramel = google/flan-t5-Large + H4rmony dataset (instruction fine tuning)
https://huggingface.co/neovalle/H4rmoniousPampero = HuggingFaceH4/zephyr-7b-alpha + H4rmony dataset (reinforcement learning)
https://huggingface.co/neovalle/H4rmoniousBreeze = HuggingFaceH4/zephyr-7b-beta + H4rmony dataset (reinforcement learning)
### Discussion of Biases
No known biases.
### Other Known Limitations
The dataset is still under construction, and the current number of rows might not be enough for some use cases.
## Additional Information
### Dataset Curators
Jorge Vallego - airesearch@neovalle.co.uk
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
dataset neovalle/H4rmony - airesearch@neovalle.co.uk
### Testing and PoC Repository
https://github.com/Neovalle/H4rmony
### Note
This project has its roots in the article "Ecolinguistics and AI: Integrating eco-awareness in natural
language processing" https://www.ecoling.net/_files/ugd/ae088a_13cc4828a28e4955804d38e8721056cf.pdf
| [
-0.4413423538208008,
-0.5219990611076355,
0.2934702932834625,
0.24408285319805145,
-0.01983947865664959,
0.02400847338140011,
-0.3743663728237152,
-0.8336964249610901,
0.0724446028470993,
0.3117446303367615,
-0.7190868258476257,
-0.5005397796630859,
-0.17439720034599304,
0.6498941779136658... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chengli-thu/yuebuqun | chengli-thu | 2023-09-03T02:01:38Z | 22 | 0 | null | [
"license:cc-by-4.0",
"arxiv:2308.09597",
"region:us"
] | 2023-09-03T02:01:38Z | 2023-09-03T01:59:42.000Z | 2023-09-03T01:59:42 | ---
license: cc-by-4.0
---
Yue Buqun (岳不群) character data for ChatHaruhi2. It can be loaded as follows:
```python
from chatharuhi import ChatHaruhi
chatbot = ChatHaruhi( role_from_hf = 'chengli-thu/yuebuqun', \
llm = 'openai')
response = chatbot.chat(role='令狐冲', text = '师父,我来了')  # role: Linghu Chong; text: "Master, I have come"
print(response)
```
Uploader: 李鲁鲁
For more details, see [ChatHaruhi](https://github.com/LC1332/Chat-Haruhi-Suzumiya)
You are welcome to join our [crowdsourced character creation project](https://github.com/LC1332/Chat-Haruhi-Suzumiya/tree/main/characters/novel_collecting)
### Citation引用
Please cite the repo if you use the data or code in this repo.
```
@misc{li2023chatharuhi,
title={ChatHaruhi: Reviving Anime Character in Reality via Large Language Model},
author={Cheng Li and Ziang Leng and Chenxi Yan and Junyi Shen and Hao Wang and Weishi MI and Yaying Fei and Xiaoyang Feng and Song Yan and HaoSheng Wang and Linkang Zhan and Yaokai Jia and Pingyu Wu and Haozhen Sun},
year={2023},
eprint={2308.09597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
0.17600449919700623,
-0.6628331542015076,
-0.16541267931461334,
0.2509584426879883,
-0.21822793781757355,
-0.004461574833840132,
-0.41312694549560547,
-0.5015005469322205,
0.41290366649627686,
0.22212465107440948,
-0.35825541615486145,
0.06305093318223953,
-0.2779007852077484,
-0.104788623... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deadbits/vigil-instruction-bypass-ada-002 | deadbits | 2023-09-09T18:29:12Z | 22 | 0 | null | [
"embeddings",
"text",
"security",
"region:us"
] | 2023-09-09T18:29:12Z | 2023-09-09T16:49:16.000Z | 2023-09-09T16:49:16 | ---
tags:
- embeddings
- text
- security
pretty_name: 'Vigil: LLM Instruction Bypass text-embedding-ada-002'
---
# Vigil: LLM Instruction Bypass text-embedding-ada-002
- **Repo:** [github.com/deadbits/vigil-llm](https://github.com/deadbits/vigil-llm)
`Vigil` is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains `text-embedding-ada-002` embeddings for all Instruction Bypass style prompts ("Ignore instructions ...") used by [Vigil](https://github.com/deadbits/prompt-injection-defense).
You can use the [parquet2vdb.py](https://github.com/deadbits/prompt-injection-defense/blob/main/vigil/utils/parquet2vdb.py) utility to load the embeddings into the Vigil chromadb instance, or use them in your own application.
## Format
```json
[
{
"text": str,
"embedding": [],
"model": "text-embedding-ada-002"
}
]
```
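A minimal sketch (pure Python, no external dependencies) of how records in this format could be scanned for the closest stored embedding by cosine similarity — the toy 2-d vectors here are stand-ins for real 1536-dimensional `text-embedding-ada-002` embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy records in the format described above (embeddings shortened to 2-d).
records = [
    {"text": "Ignore previous instructions", "embedding": [1.0, 0.0],
     "model": "text-embedding-ada-002"},
    {"text": "What is the weather?", "embedding": [0.0, 1.0],
     "model": "text-embedding-ada-002"},
]
query = [0.9, 0.1]  # stand-in for the embedding of an incoming prompt
best = max(records, key=lambda r: cosine(query, r["embedding"]))
print(best["text"])  # Ignore previous instructions
```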
Instruction bypass prompts generated with: https://gist.github.com/deadbits/e93a90aa36c9aa7b5ce1179597a6fe3d#file-generate-phrases-py | [
0.08236189186573029,
-1.111305832862854,
0.7915599346160889,
0.1946927309036255,
-0.4539523422718048,
0.08881177753210068,
0.036040134727954865,
-0.14662303030490875,
0.2074555903673172,
0.48244965076446533,
-0.6026670336723328,
-0.9555085897445679,
-0.5865722894668579,
0.08216473460197449... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Otter-AI/MMBench | Otter-AI | 2023-10-08T14:23:37Z | 22 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-08T14:23:37Z | 2023-09-15T09:01:35.000Z | 2023-09-15T09:01:35 | ---
license: apache-2.0
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jangmin/ecommerce_purchase_history | jangmin | 2023-10-14T13:35:03Z | 22 | 1 | null | [
"size_categories:10K<n<100K",
"language:ko",
"region:us"
] | 2023-10-14T13:35:03Z | 2023-09-21T05:09:07.000Z | 2023-09-21T05:09:07 | ---
language:
- ko
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: user_id
dtype: int64
- name: day
dtype: string
- name: order_ts
dtype: string
- name: positive_prod_id
dtype: int64
- name: negative_prod_id
dtype: int64
- name: negative_prod_ids
sequence: int64
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 122282877.9602969
num_examples: 58535
- name: test
num_bytes: 52690471.08509643
num_examples: 17332
- name: rigorous_test
num_bytes: 24661037.47070749
num_examples: 8112
download_size: 33220918
dataset_size: 199634386.51610082
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: rigorous_test
path: data/rigorous_test-*
---
# Dataset Card for "ecommerce_purchase_history"
## Dataset Description
# Dataset Summary
This dataset was built for recommendation-system research and development at a specific e-commerce company. It was generated from about 90 days of purchase history over a specific period, with the purchase histories described as text.
It was filtered to keep only example pairs with fewer than 2,048 tokens under the llama2 tokenizer.
In addition, the test split keeps only rows whose (user_id, positive_prod_id) pair does not appear in the train split.
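A minimal sketch (toy record dicts; the function name is illustrative, field names come from the schema above) of that leakage filter — keeping only test rows whose `(user_id, positive_prod_id)` pair never appears in the train split:

```python
def rigorous_filter(train_rows, test_rows):
    """Drop test rows whose (user_id, positive_prod_id) pair occurs in train."""
    seen = {(r["user_id"], r["positive_prod_id"]) for r in train_rows}
    return [r for r in test_rows
            if (r["user_id"], r["positive_prod_id"]) not in seen]

train = [{"user_id": 1, "positive_prod_id": 10}]
test = [{"user_id": 1, "positive_prod_id": 10},
        {"user_id": 2, "positive_prod_id": 20}]
print(rigorous_filter(train, test))  # [{'user_id': 2, 'positive_prod_id': 20}]
```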
# Supported Tasks and Leaderboards
# Languages
This dataset is only made of `ko`(korean).
# Dataset Structure | [
-0.3012981116771698,
-0.7718191146850586,
-0.07262010872364044,
0.41904357075691223,
-0.4745005965232849,
0.07279916852712631,
0.2124108523130417,
-0.11629510670900345,
0.4543263614177704,
0.6562343239784241,
-0.8771880269050598,
-1.089645504951477,
-0.02590174973011017,
0.2447620928287506... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/indolem_ner_ugm | SEACrowd | 2023-09-26T12:28:37Z | 22 | 0 | null | [
"language:ind",
"license:cc-by-4.0",
"named-entity-recognition",
"region:us"
] | 2023-09-26T12:28:37Z | 2023-09-26T11:11:17.000Z | 2023-09-26T11:11:17 | ---
license: cc-by-4.0
tags:
- named-entity-recognition
language:
- ind
---
# indolem_ner_ugm
NER UGM is a Named Entity Recognition dataset that comprises 2,343 sentences from news articles, and was constructed at the University of Gajah Mada based on five named entity classes: person, organization, location, time, and quantity.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{koto-etal-2020-indolem,
title = "{I}ndo{LEM} and {I}ndo{BERT}: A Benchmark Dataset and Pre-trained Language Model for {I}ndonesian {NLP}",
author = "Koto, Fajri and
Rahimi, Afshin and
Lau, Jey Han and
Baldwin, Timothy",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2020.coling-main.66",
doi = "10.18653/v1/2020.coling-main.66",
pages = "757--770"
}
@phdthesis{fachri2014pengenalan,
title = {Pengenalan Entitas Bernama Pada Teks Bahasa Indonesia Menggunakan Hidden Markov Model},
author = {FACHRI, MUHAMMAD},
year = {2014},
school = {Universitas Gadjah Mada}
}
```
## License
Creative Commons Attribution 4.0
## Homepage
[https://indolem.github.io/](https://indolem.github.io/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.6274850964546204,
-0.45026618242263794,
0.20076076686382294,
0.15041524171829224,
-0.4998514652252197,
-0.16031092405319214,
-0.3960619568824768,
-0.3558292090892792,
0.23852291703224182,
0.4199056923389435,
-0.2377101480960846,
-0.637728750705719,
-0.5044578909873962,
0.457280218601226... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/indolem_tweet_ordering | SEACrowd | 2023-09-26T12:34:03Z | 22 | 0 | null | [
"language:ind",
"license:cc-by-4.0",
"sentence-ordering",
"arxiv:2011.00677",
"region:us"
] | 2023-09-26T12:34:03Z | 2023-09-26T11:18:05.000Z | 2023-09-26T11:18:05 | ---
license: cc-by-4.0
tags:
- sentence-ordering
language:
- ind
---
# indolem_tweet_ordering
IndoLEM (Indonesian Language Evaluation Montage) is a comprehensive Indonesian benchmark that comprises of seven tasks for the Indonesian language. This benchmark is categorized into three pillars of NLP tasks: morpho-syntax, semantics, and discourse.
This task is based on the sentence ordering task of Barzilay and Lapata (2008) to assess text relatedness. We construct the data by shuffling Twitter threads (each containing 3 to 5 tweets) and assessing the predicted ordering in terms of rank correlation (ρ) with the original. The experiment is based on 5-fold cross-validation.
- Train: 4327 threads
- Development: 760 threads
- Test: 1521 threads
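Since each thread is a permutation of the original tweet order with no ties, the rank correlation used for scoring can be computed in closed form. Below is a minimal sketch of scoring one shuffled thread with Spearman's ρ; it is illustrative only, and the official evaluation script may tokenize and aggregate differently:

```python
def spearman_rho(gold, pred):
    """Spearman rank correlation between a gold ordering and a
    predicted ordering of the same sentences (no ties)."""
    assert sorted(gold) == sorted(pred), "orderings must cover the same items"
    n = len(gold)
    if n < 2:
        return 1.0
    # rank of each sentence id in each ordering
    gold_rank = {s: i for i, s in enumerate(gold)}
    pred_rank = {s: i for i, s in enumerate(pred)}
    d2 = sum((gold_rank[s] - pred_rank[s]) ** 2 for s in gold)
    # closed form for permutations without ties
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# A 4-tweet thread: identical ordering scores 1.0, a full reversal -1.0.
print(spearman_rho([0, 1, 2, 3], [0, 1, 2, 3]))  # 1.0
print(spearman_rho([0, 1, 2, 3], [3, 2, 1, 0]))  # -1.0
```

The benchmark score is then the mean ρ over all test threads.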
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{DBLP:journals/corr/abs-2011-00677,
author = {Fajri Koto and
Afshin Rahimi and
Jey Han Lau and
Timothy Baldwin},
title = {IndoLEM and IndoBERT: {A} Benchmark Dataset and Pre-trained Language
Model for Indonesian {NLP}},
journal = {CoRR},
volume = {abs/2011.00677},
year = {2020},
url = {https://arxiv.org/abs/2011.00677},
eprinttype = {arXiv},
eprint = {2011.00677},
timestamp = {Fri, 06 Nov 2020 15:32:47 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2011-00677.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## License
Creative Commons Attribution 4.0
## Homepage
[https://indolem.github.io/](https://indolem.github.io/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.41092798113822937,
-0.6036213040351868,
-0.01416268851608038,
0.609965980052948,
-0.6805405616760254,
0.11411263048648834,
-0.30047607421875,
-0.6216042041778564,
0.19595614075660706,
0.5247799158096313,
-0.0431813970208168,
-0.7954544425010681,
-0.7669693231582642,
0.483150839805603,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FahdSeddik/AGS-Corpus | FahdSeddik | 2023-09-29T12:36:04Z | 22 | 1 | null | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:ar",
"license:cc-by-nc-4.0",
"chemistry",
"biology",
"legal",
"finance",
"music",
"art",
"code",
"climate",
"medical",
"region:us"
] | 2023-09-29T12:36:04Z | 2023-09-28T13:01:41.000Z | 2023-09-28T13:01:41 | ---
license: cc-by-nc-4.0
task_categories:
- summarization
language:
- ar
tags:
- chemistry
- biology
- legal
- finance
- music
- art
- code
- climate
- medical
pretty_name: AGS Corpus
size_categories:
- 100K<n<1M
---
# Dataset Card for AGS
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Curation Rationale
- Source Data
- Personal and Sensitive Information
## Dataset Description
- **Paper:** [Atef, A., Seddik, F., & Elbedewy, A. (2023).
AGS: Arabic GPT Summarization Corpus]()
- **Point of Contact:** fahdseddik@gmail.com
### Dataset Summary
AGS is the first publicly accessible abstractive summarization dataset for Arabic. It consists of 142,000 pairs of articles and summaries, all written in Modern Standard Arabic (MSA). The summaries are generated using GPT-3.5 Turbo, a large language model, through meticulous prompt engineering. The dataset covers a wide range of topics, such as politics, sports, culture, science, and technology.
### Supported Tasks and Leaderboards
The supported task is abstractive text summarization, which involves generating a concise and informative summary from a longer text. The dataset can be used to train and evaluate models for this task, as well as to benchmark their performance against existing methods.
There is no official leaderboard for this dataset, but we report the results of several models on the test set, using Rouge-L, SS-Population mean, and Compression ratio metrics. The best-performing model is mT5, which achieves scores of 21.27, 82.65, and 62 on these metrics, respectively.
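As an illustration of the reported metrics, the sketch below computes a word-level Rouge-L F1 (via longest common subsequence) and a simple word-level compression ratio. The paper's exact definitions (tokenization, the SS-Population mean metric) may differ, so treat this as a hedged approximation rather than the official scorer:

```python
def lcs_len(a, b):
    # classic dynamic-programming longest common subsequence length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference, candidate):
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return 2 * p * r / (p + r)

def compression_ratio(article, summary):
    # fraction of the article length retained by the summary (word level)
    return len(summary.split()) / len(article.split())
```

For a perfect copy `rouge_l_f1` returns 1.0; a fully disjoint candidate scores 0.0.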
### Languages
The dataset is in Arabic (ISO 639-1: ar).
## Dataset Structure
### Data Instances
An example data instance is:
```
{
  "text": "نظرية التعقيد هي فرع من فروع نظرية الحوسبة والرياضيات، وهذه النظرية تتركز في تصنيف المسائل الحاسوبية حسب صعوبتها وربط أقسام التعقيد ببعضها، والمسألة الحاسوبية هي المسألة التي يستطيع الحاسوب بحلها. ويمكن اعتبارها مسألة صعبة إذا استخدمت كمية مُعينة من الموارد أياً كانت الخوارزمية. ولعل النماذج الحسابية هي الطريقة الأمثل في هذه النظرية لدراسة هذه المسائل وتحديد كمية الموارد اللازمة مثل: الوقت أو حجم المكان الإضافي اللازم، وتوجد معايير تعقيد أخرى مثل: الاتصال (مستخدم في نظرية تعقيد الاتصال) وعدد البوابات في الدارات المنطقية (مستخدم في نظرية تعقيد الدارات المنطقية) وكذلك عدد المعالجات (مستخدم في الحساب المتوازي).",
  "summary": "نظرية التعقيد هي فرع من نظرية الحوسبة والرياضيات، تصنف المسائل الحاسوبية حسب صعوبتها وتربط أقسام التعقيد ببعضها. تحديد كمية الموارد اللازمة يتم باستخدام النماذج الحسابية، مثل الوقت وحجم المكان الإضافي وعدد البوابات في الدارات المنطقية."
}
```
### Data Fields
- `id`: an identification number
- `text`: the original text of the article, written in Arabic.
- `summary`: the abstractive summary of the article, written in Arabic.
## Dataset Creation
### Curation Rationale
The dataset was created to address the lack of abstractive summarization datasets for Arabic, which is a low-resource and under-studied language. The dataset aims to provide a large and diverse corpus of articles and summaries that can be used to train and evaluate models for this task, as well as to advance the research in this field.
### Source Data
The source data was collected from Wikipedia & Youm7 websites, covering a wide range of topics, such as politics, sports, culture, science, and technology. The websites were selected based on their popularity, credibility, and content quality. The data collection process involved web crawling, text sampling, and prompt engineering.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information, as it only consists of articles and summaries that are publicly available on the web. The dataset creators are not responsible for any misuse or harm that may result from the use of this data.
| [
-0.6708011031150818,
-0.5491591095924377,
0.16979163885116577,
0.0984174981713295,
-0.5761945843696594,
0.060917165130376816,
0.12017499655485153,
-0.2580934166908264,
0.46905821561813354,
0.23765403032302856,
-0.320480078458786,
-1.103187918663025,
-0.9519473910331726,
0.41374215483665466... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
baber/mmlu | baber | 2023-09-29T02:12:59Z | 22 | 0 | null | [
"region:us"
] | 2023-09-29T02:12:59Z | 2023-09-28T14:51:08.000Z | 2023-09-28T14:51:08 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
asgaardlab/GamePhysics-FullResolution | asgaardlab | 2023-10-08T01:54:35Z | 22 | 2 | null | [
"task_categories:video-classification",
"size_categories:10K<n<100K",
"language:en",
"license:creativeml-openrail-m",
"video-game",
"game",
"video-understanding",
"ood",
"vidoe-ood",
"arxiv:2203.11096",
"region:us"
] | 2023-10-08T01:54:35Z | 2023-10-05T01:10:33.000Z | 2023-10-05T01:10:33 | ---
dataset_info:
features:
- name: id
dtype: string
- name: game
dtype: string
- name: filepath
dtype: string
- name: filename
dtype: string
- name: archive
dtype: string
- name: reddit_url
dtype: string
splits:
- name: validation
num_bytes: 3692759
num_examples: 26954
download_size: 1232477
dataset_size: 3692759
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
license: creativeml-openrail-m
task_categories:
- video-classification
language:
- en
tags:
- video-game
- game
- video-understanding
- ood
- vidoe-ood
pretty_name: GamePhysics
size_categories:
- 10K<n<100K
---
# GamePhysics Dataset
[](https://asgaardlab.github.io/CLIPxGamePhysics/)
[](https://arxiv.org/abs/2203.11096)
[](https://huggingface.co/spaces/taesiri/CLIPxGamePhysics)
The GamePhysics dataset is a collection of gameplay bug videos sourced from the [GamePhysics subreddit](https://www.reddit.com/r/GamePhysics/).
## Sample videos
<video src="https://asgaardlab.github.io/CLIPxGamePhysics/static/videos/9rqabp.mp4" controls="controls" muted="muted" playsinline="playsinline" width=480></video>
<video src="https://asgaardlab.github.io/CLIPxGamePhysics/static/videos/g5pm35.mp4" controls="controls" muted="muted" playsinline="playsinline" width=480></video>
<video src="https://asgaardlab.github.io/CLIPxGamePhysics/static/videos/6xplqg.mp4" controls="controls" muted="muted" playsinline="playsinline" width=480></video>
<video src="https://asgaardlab.github.io/CLIPxGamePhysics/static/videos/4jirzj.mp4" controls="controls" muted="muted" playsinline="playsinline" width=480></video> | [
-0.6711006164550781,
-0.3957226276397705,
0.4721251428127289,
0.32959261536598206,
-0.3384573459625244,
0.1624016910791397,
0.03898610547184944,
-0.09608866274356842,
0.5381497144699097,
0.22532403469085693,
-1.1899538040161133,
-0.7542590498924255,
-0.581434428691864,
-0.2947311997413635,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cis-lmu/udhr-lid | cis-lmu | 2023-10-26T09:56:41Z | 22 | 1 | null | [
"multilinguality:multilingual",
"language:tir",
"language:rmn",
"language:arb",
"language:mxv",
"language:mal",
"language:fij",
"language:som",
"language:cot",
"language:fur",
"language:vie",
"language:zlm",
"language:bam",
"language:chr",
"language:maz",
"language:yad",
"language:zt... | 2023-10-26T09:56:41Z | 2023-10-22T18:49:59.000Z | 2023-10-22T18:49:59 | ---
license: cc0-1.0
configs:
- config_name: default
data_files:
- split: test
path: "udhr-lid.csv"
multilinguality:
- multilingual
language:
- tir
- rmn
- arb
- mxv
- mal
- fij
- som
- cot
- fur
- vie
- zlm
- bam
- chr
- maz
- yad
- ztu
- ykg
- ccp
- alt
- ayr
- njo
- bci
- gyr
- run
- haw
- rgn
- cak
- kwi
- fra
- agr
- duu
- ilo
- nhn
- kdh
- cnh
- bod
- mya
- ady
- pol
- ydd
- cos
- lot
- arl
- glv
- gag
- bfa
- afr
- lij
- zlm
- ibb
- toi
- tzm
- ron
- ojb
- san
- eng
- bum
- pam
- kqs
- dje
- auc
- smo
- por
- fry
- lad
- pov
- tyv
- guc
- huu
- ese
- kbp
- eve
- yrk
- lin
- tdt
- qvc
- top
- nav
- twi
- oss
- lia
- ame
- hun
- lit
- que
- qug
- nku
- csa
- lao
- knc
- kjh
- jav
- mam
- ita
- ppl
- aar
- tbz
- ssw
- bug
- srp
- kaz
- min
- mad
- orh
- tgk
- kat
- uig
- tzo
- hat
- shn
- kbd
- niv
- idu
- krl
- abk
- mto
- gla
- ijs
- cri
- uzn
- tah
- tob
- kir
- quy
- hnj
- srr
- lvs
- nan
- hns
- snk
- swh
- ekk
- guu
- div
- dzo
- spa
- hms
- ell
- ace
- war
- ind
- cjy
- cfm
- nds
- ewe
- tel
- src
- fuf
- vmw
- zro
- men
- kqn
- nzi
- taj
- khk
- ddn
- nso
- mxi
- pon
- fvr
- hau
- ktu
- tem
- yor
- pnb
- ltz
- evn
- cjs
- nba
- niu
- dan
- acu
- zgh
- chj
- heb
- lua
- quz
- uig
- cbi
- jav
- cpu
- wuu
- mah
- kmb
- mcd
- ben
- lus
- ajg
- azj
- tha
- dga
- isl
- sus
- fuf
- fkv
- jiv
- mor
- nio
- als
- buc
- kde
- nbl
- ceb
- ven
- sun
- cbt
- swb
- tur
- dyo
- sin
- pbu
- ada
- pap
- qvh
- loz
- pan
- qva
- sme
- bax
- tuk
- hsb
- hus
- qvn
- ban
- cha
- zyb
- hin
- tat
- uzn
- qxu
- gej
- quc
- mnw
- bho
- udu
- kha
- kbr
- tsz
- pau
- mkd
- shp
- ike
- lue
- tgl
- yap
- yua
- koi
- hrv
- emk
- tet
- ndo
- cbu
- vep
- cmn
- sag
- nym
- rus
- gjn
- guk
- kri
- ote
- lun
- vai
- bis
- arn
- tsn
- gle
- hak
- gkp
- ura
- tca
- xho
- wln
- amc
- mos
- lld
- bul
- qxn
- bcl
- ctd
- dip
- dag
- kek
- bre
- mri
- fin
- sah
- cym
- kan
- fao
- gsw
- sey
- bem
- bos
- bin
- chv
- tpi
- ami
- oaa
- lob
- ast
- nno
- sco
- tuk
- khm
- pes
- pbb
- tam
- ibo
- san
- sid
- plt
- guj
- hsn
- kin
- lug
- slr
- koo
- xsm
- jpn
- oki
- deu
- rar
- pcm
- hni
- vec
- gld
- sot
- crs
- fuv
- srp
- npi
- nya
- kea
- blt
- roh
- cbr
- chk
- kal
- mfq
- quh
- kor
- slv
- cof
- shk
- zul
- qwh
- fon
- mic
- prs
- mag
- bel
- iii
- mar
- dyu
- boa
- swe
- pis
- mlt
- amh
- umb
- cnr
- mai
- toj
- csw
- ina
- bba
- cbs
- kng
- oci
- pcd
- miq
- lat
- qvm
- wwa
- bos
- urd
- kmr
- ido
- gaa
- epo
- gaz
- cat
- hye
- cni
- suk
- gug
- gan
- cjk
- tzh
- zam
- ces
- cic
- mcf
- not
- kaa
- tso
- piu
- fat
- mzi
- snn
- tly
- eus
- nld
- nob
- wol
- hlt
- sna
- tiv
- ton
- hea
- skr
- lns
- rup
- cab
- glg
- tgl
- yao
- nyn
- aii
- tzm
- slk
- ukr
- kkh
- zdj
- amr
- yue
- crh
- hil
tags:
- UDHR
- udhr
- language identification
- LID
- glot
- GlotLID
pretty_name: UDHR-LID
---
# UDHR-LID
**Why UDHR-LID?**
You can access UDHR [here](http://www.unicode.org/udhr/d/), but when a verse is missing, they have texts such as "missing" or "?". Also, about 1/3 of the sentences consist only of "articles 1-30" in different languages. We cleaned the entire dataset from XML files and selected only the paragraphs. We cleared any unrelated language texts from the data and also removed the cases that were incorrect.
Incorrect? Look at the ckb and kmr files in the UDHR. Both are the same! ckb is known for the Arabic script, although it can also be written in Latin. Clearly, the same file cannot belong to two different languages. We also deleted files whose scripts we believe are no longer in use.
The deleted files include:
- ckb_Latn (Arabic is in use.)
- azb_Latn (Arabic is in use.)
- khk_Mong (Cyrillic is in use.)
- vie_Hani (Latin is in use.)
If you are interested in dealing with scripts in other languages, check the GlotScript [code](https://github.com/cisnlp/GlotScript) and [paper](https://arxiv.org/abs/2309.13320). We have prepared a tool for detecting the script of a text, as well as metadata to determine the correct script for each language.
We believe UDHR should remain a test corpus in NLP, not a training corpus. Of course, we are not opposed to great works such as Franc built on top of UDHR. However, if your work is at a much bigger scale than UDHR, do not put UDHR in your training data. Use it as a test/validation set, or use it to find out what is wrong with your training data. Be aware that parts of UDHR may be hosted on other websites such as Wikipedia, news websites like the BBC, or collaborative translation communities like Tatoeba. Before using UDHR as a test set, exclude from your training data any sentence that also appears in UDHR.
We created this corpus for the language identification evaluation task in our GlotLID [paper](https://arxiv.org/abs/2310.16248), but feel free to use it for your own task. The texts here are not in order, and they are not parallel. However, each row of data belongs to its labeled language, and is long, cleaned, and rich in linguistic content!
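The exclusion step advised above can be sketched as a simple normalized-overlap filter. This is a minimal illustration only — the `normalize` helper is a hypothetical stand-in, and real pipelines would likely use fuzzier matching:

```python
def normalize(sentence):
    # crude normalization: lowercase and collapse whitespace
    return " ".join(sentence.lower().split())

def exclude_udhr(train_sentences, udhr_sentences):
    """Drop training sentences that also appear in the UDHR test set,
    so UDHR stays a clean held-out benchmark."""
    udhr = {normalize(s) for s in udhr_sentences}
    return [s for s in train_sentences if normalize(s) not in udhr]

train = ["All human beings are born free.", "The cat sat on the mat."]
udhr = ["All  human beings are born FREE."]
print(exclude_udhr(train, udhr))  # ["The cat sat on the mat."]
```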
## Usage (HF Loader)
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/udhr-lid', split='test')
print(dataset[0]) # First row of udhr-lid
```
## Download
If you are not a fan of the HF dataloader, download each language directly:
```python
! wget https://huggingface.co/datasets/cis-lmu/udhr-lid/resolve/main/udhr-lid.csv
```
or clone the whole repository:
```python
! git clone https://huggingface.co/datasets/cis-lmu/udhr-lid
```
## License
UDHR is the most translated copyright-free document in the world.
We license the actual packaging, the metadata and the annotations of these data under the cc0-1.0 (waiving all of the rights under copyright law).
## Citation
If you use any part of this data in your research, please cite it (along with http://www.unicode.org/udhr/d/) using the following BibTeX entry.
```
@inproceedings{
kargaran2023glotlid,
title={{GlotLID}: Language Identification for Low-Resource Languages},
author={Kargaran, Amir Hossein and Imani, Ayyoob and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023},
url={https://openreview.net/forum?id=dl4e3EBz5j}
}
```
| [
-0.21574315428733826,
-0.5699239373207092,
0.1838560849428177,
0.270965039730072,
-0.2813686728477478,
0.14518161118030548,
-0.47690579295158386,
-0.6306240558624268,
0.09444981813430786,
0.5308913588523865,
-0.3199990689754486,
-0.7015080451965332,
-0.43131861090660095,
0.4684296250343323... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ItsNotRohit/Food121-224 | ItsNotRohit | 2023-10-28T07:04:35Z | 22 | 1 | null | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"food101",
"image classification",
"region:us"
] | 2023-10-28T07:04:35Z | 2023-10-24T13:26:42.000Z | 2023-10-24T13:26:42 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple_pie
'1': baby_back_ribs
'2': baklava
'3': beef_carpaccio
'4': beef_tartare
'5': beet_salad
'6': beignets
'7': bibimbap
'8': biryani
'9': bread_pudding
'10': breakfast_burrito
'11': bruschetta
'12': caesar_salad
'13': cannoli
'14': caprese_salad
'15': carrot_cake
'16': ceviche
'17': chai
'18': chapati
'19': cheese_plate
'20': cheesecake
'21': chicken_curry
'22': chicken_quesadilla
'23': chicken_wings
'24': chocolate_cake
'25': chocolate_mousse
'26': chole_bhature
'27': churros
'28': clam_chowder
'29': club_sandwich
'30': crab_cakes
'31': creme_brulee
'32': croque_madame
'33': cup_cakes
'34': dabeli
'35': dal
'36': deviled_eggs
'37': dhokla
'38': donuts
'39': dosa
'40': dumplings
'41': edamame
'42': eggs_benedict
'43': escargots
'44': falafel
'45': filet_mignon
'46': fish_and_chips
'47': foie_gras
'48': french_fries
'49': french_onion_soup
'50': french_toast
'51': fried_calamari
'52': fried_rice
'53': frozen_yogurt
'54': garlic_bread
'55': gnocchi
'56': greek_salad
'57': grilled_cheese_sandwich
'58': grilled_salmon
'59': guacamole
'60': gyoza
'61': hamburger
'62': hot_and_sour_soup
'63': hot_dog
'64': huevos_rancheros
'65': hummus
'66': ice_cream
'67': idli
'68': jalebi
'69': kathi_rolls
'70': kofta
'71': kulfi
'72': lasagna
'73': lobster_bisque
'74': lobster_roll_sandwich
'75': macaroni_and_cheese
'76': macarons
'77': miso_soup
'78': momos
'79': mussels
'80': naan
'81': nachos
'82': omelette
'83': onion_rings
'84': oysters
'85': pad_thai
'86': paella
'87': pakoda
'88': pancakes
'89': pani_puri
'90': panna_cotta
'91': panner_butter_masala
'92': pav_bhaji
'93': peking_duck
'94': pho
'95': pizza
'96': pork_chop
'97': poutine
'98': prime_rib
'99': pulled_pork_sandwich
'100': ramen
'101': ravioli
'102': red_velvet_cake
'103': risotto
'104': samosa
'105': sashimi
'106': scallops
'107': seaweed_salad
'108': shrimp_and_grits
'109': spaghetti_bolognese
'110': spaghetti_carbonara
'111': spring_rolls
'112': steak
'113': strawberry_shortcake
'114': sushi
'115': tacos
'116': takoyaki
'117': tiramisu
'118': tuna_tartare
'119': vadapav
'120': waffles
splits:
- name: train
num_bytes: 2004526002
num_examples: 96800
- name: test
num_bytes: 513682668.4
num_examples: 24200
download_size: 3295817653
dataset_size: 2518208670.4
language:
- en
tags:
- food101
- image classification
size_categories:
- 10K<n<100K
task_categories:
- image-classification
---
## Dataset Details
### Dataset Description
This dataset is the downscaled version of the [Food121](https://huggingface.co/datasets/ItsNotRohit/Food121) dataset. All images are downscaled to a maximum of 224×224.
This dataset is the combination of the [Food101](https://huggingface.co/datasets/food101), [Indian Food Classification](https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification) and [The-massive-Indian-Food-Dataset](https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset) datasets.
This dataset aims to be a viable dataset for image classification of foods with an added Indian context. It has 121 classes, each with 800 images in the train split and 200 images in the test split.
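The per-class counts stated above multiply out to the split sizes recorded in the dataset metadata; a quick arithmetic check:

```python
# Sanity check: 121 classes, 800 train and 200 test images per class.
classes = 121
train_per_class, test_per_class = 800, 200
print(classes * train_per_class)  # 96800, matching the train split size
print(classes * test_per_class)   # 24200, matching the test split size
```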
### Dataset Sources
- **Food101:** https://huggingface.co/datasets/food101
- **Indian Food Classification:** https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification
- **The-massive-Indian-Food-Dataset:** https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset | [
-0.5118431448936462,
-0.36024633049964905,
-0.2765357196331024,
0.2490992248058319,
0.07163834571838379,
0.03481239825487137,
-0.13988319039344788,
-0.3906974494457245,
0.6010746359825134,
0.46743351221084595,
-0.6717579960823059,
-0.4966394901275635,
-0.7777068018913269,
0.326168596744537... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OpenLLMAI/anthropic_hh_oasst1_split | OpenLLMAI | 2023-10-25T02:15:43Z | 22 | 0 | null | [
"region:us"
] | 2023-10-25T02:15:43Z | 2023-10-25T02:12:08.000Z | 2023-10-25T02:12:08 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tianyang/repobench_ablation_8k | tianyang | 2023-10-25T02:25:45Z | 22 | 0 | null | [
"region:us"
] | 2023-10-25T02:25:45Z | 2023-10-25T02:21:48.000Z | 2023-10-25T02:21:48 | ---
configs:
- config_name: default
data_files:
- split: cross_file_first
path: data/cross_file_first-*
- split: cross_file_random
path: data/cross_file_random-*
- split: in_file
path: data/in_file-*
dataset_info:
features:
- name: repo_name
dtype: string
- name: file_path
dtype: string
- name: context
list:
- name: identifier
dtype: string
- name: path
dtype: string
- name: snippet
dtype: string
- name: import_statement
dtype: string
- name: token_num
dtype: int64
- name: cropped_code
dtype: string
- name: all_code
dtype: string
- name: next_line
dtype: string
- name: gold_snippet_index
dtype: int64
splits:
- name: cross_file_first
num_bytes: 76590132
num_examples: 4000
- name: cross_file_random
num_bytes: 77383139
num_examples: 3919
- name: in_file
num_bytes: 74963194
num_examples: 4000
download_size: 83713495
dataset_size: 228936465
---
# Dataset Card for "repobench_ablation_8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5572723746299744,
0.015748754143714905,
-0.008521043695509434,
0.17057597637176514,
-0.3753586709499359,
-0.04578544944524765,
0.4205318093299866,
-0.16411957144737244,
0.7495389580726624,
0.801800549030304,
-0.6687875390052795,
-0.6820957660675049,
-0.3861909508705139,
-0.0058653391897... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tuanio/book_corpus-input_ids-valid-len256 | tuanio | 2023-10-26T08:47:25Z | 22 | 0 | null | [
"region:us"
] | 2023-10-26T08:47:25Z | 2023-10-25T11:18:04.000Z | 2023-10-25T11:18:04 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 6319319328
num_examples: 6156107
download_size: 2939435774
dataset_size: 6319319328
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "book_corpus-input_ids-valid-len256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.48155224323272705,
-0.27212342619895935,
0.2028699517250061,
0.2828889489173889,
-0.2677263617515564,
-0.0911259651184082,
-0.05167973041534424,
-0.012475349940359592,
0.457224041223526,
0.4432503581047058,
-0.5586918592453003,
-1.0151493549346924,
-0.5166803002357483,
0.069064319133758... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Naveengo/flickr8k | Naveengo | 2023-10-26T08:06:49Z | 22 | 0 | null | [
"task_categories:image-to-text",
"license:apache-2.0",
"region:us"
] | 2023-10-26T08:06:49Z | 2023-10-26T08:02:48.000Z | 2023-10-26T08:02:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1132031887.712
num_examples: 8091
download_size: 1114562282
dataset_size: 1132031887.712
license: apache-2.0
task_categories:
- image-to-text
---
# Dataset Card for "flickr8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6903163194656372,
0.07526080310344696,
0.21464581787586212,
0.16154776513576508,
-0.399967759847641,
-0.0655880868434906,
0.5942295789718628,
-0.1657881885766983,
0.6939918994903564,
0.4774690866470337,
-0.8796880841255188,
-0.6436339616775513,
-0.6194800734519958,
-0.17091535031795502,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlignmentLab-AI/langchain | AlignmentLab-AI | 2023-10-29T22:12:44Z | 22 | 0 | null | [
"region:us"
] | 2023-10-29T22:12:44Z | 2023-10-29T22:09:00.000Z | 2023-10-29T22:09:00 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kdawoud91/Arxiv_train_test | kdawoud91 | 2023-10-31T14:40:58Z | 22 | 0 | null | [
"region:us"
] | 2023-10-31T14:40:58Z | 2023-10-31T14:40:02.000Z | 2023-10-31T14:40:02 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
QuyenAnhDE/Concat_medical | QuyenAnhDE | 2023-11-02T11:12:06Z | 22 | 0 | null | [
"language:en",
"medical",
"region:us"
] | 2023-11-02T11:12:06Z | 2023-11-02T11:05:16.000Z | 2023-11-02T11:05:16 | ---
language:
- en
tags:
- medical
---
## Dataset Details
This is a dataset of disease names, their definitions and descriptions.
The information is extracted from the Disease Ontology.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Question** [More Information Needed]
- **Context** [More Information Needed]
| [
-0.01658366061747074,
-0.5191823244094849,
0.08104046434164047,
-0.16542376577854156,
-0.171891450881958,
-0.2999911606311798,
0.3723096549510956,
-0.30767449736595154,
0.6081169247627258,
1.0236306190490723,
-0.8956385254859924,
-0.7612481117248535,
-0.7588258385658264,
0.1334735304117202... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SoAp9035/shoe_size | SoAp9035 | 2023-11-03T20:06:18Z | 22 | 1 | null | [
"license:apache-2.0",
"biology",
"region:us"
] | 2023-11-03T20:06:18Z | 2023-11-03T19:52:53.000Z | 2023-11-03T19:52:53 | ---
license: apache-2.0
tags:
- biology
---
# Shoe size dataset
This dataset contains information on gender, height and shoe size.
| [
-0.11930108070373535,
-0.0897645503282547,
0.07421155273914337,
0.4743649363517761,
-0.16245946288108826,
0.5464490652084351,
0.4322773814201355,
-0.3010658621788025,
0.013613698072731495,
0.20378974080085754,
-0.679018497467041,
-0.7750746607780457,
-0.70500648021698,
-0.37113475799560547... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anyspeech/ucla_test | anyspeech | 2023-11-04T17:35:07Z | 22 | 0 | null | [
"region:us"
] | 2023-11-04T17:35:07Z | 2023-11-04T17:34:46.000Z | 2023-11-04T17:34:46 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: phones
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 726465945
num_examples: 5444
download_size: 558156867
dataset_size: 726465945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ucla_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6615945100784302,
-0.5480158925056458,
0.10105376690626144,
0.20314320921897888,
-0.026315467432141304,
-0.0355524867773056,
0.4433787167072296,
-0.07555410265922546,
0.6193245053291321,
0.42576491832733154,
-0.9069024324417114,
-0.6817516684532166,
-0.28832119703292847,
-0.254261046648... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KelNoMel/llama2-poi-traj-prediction-geo | KelNoMel | 2023-11-24T07:45:48Z | 22 | 0 | null | [
"region:us"
] | 2023-11-24T07:45:48Z | 2023-11-06T14:45:44.000Z | 2023-11-06T14:45:44 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
leeseeun/KorQuAD_2.0 | leeseeun | 2023-11-07T05:45:49Z | 22 | 0 | null | [
"region:us"
] | 2023-11-07T05:45:49Z | 2023-11-07T05:42:38.000Z | 2023-11-07T05:42:38 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 48148796
num_examples: 83486
download_size: 29849379
dataset_size: 48148796
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "KorQuAD_2.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5557165741920471,
-0.2698919177055359,
0.11775277554988861,
0.28396371006965637,
-0.42523133754730225,
0.008755437098443508,
0.48129111528396606,
-0.226066455245018,
0.6581251621246338,
0.6664508581161499,
-0.5414669513702393,
-0.7037060260772705,
-0.5804826617240906,
-0.515612304210662... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gowitheflow/wiki1M-word-character-all-multiple | gowitheflow | 2023-11-07T22:41:42Z | 22 | 0 | null | [
"region:us"
] | 2023-11-07T22:41:42Z | 2023-11-07T22:25:17.000Z | 2023-11-07T22:25:17 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aminlouhichi/data | aminlouhichi | 2023-11-08T14:13:55Z | 22 | 0 | null | [
"region:us"
] | 2023-11-08T14:13:55Z | 2023-11-08T13:40:39.000Z | 2023-11-08T13:40:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 32311493.0
num_examples: 142
- name: validation
num_bytes: 13269660.0
num_examples: 59
- name: test
num_bytes: 13666341.0
num_examples: 59
download_size: 56280635
dataset_size: 59247494.0
---
# Dataset Card for "data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6054472923278809,
-0.3282342851161957,
0.24143676459789276,
0.18057432770729065,
-0.20366446673870087,
0.09827492386102676,
0.28238993883132935,
-0.20685429871082306,
0.922287106513977,
0.5007419586181641,
-0.8535171151161194,
-0.7960274815559387,
-0.6018335819244385,
-0.273856788873672... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
skymericsales/rexroth-finetune | skymericsales | 2023-11-09T06:13:03Z | 22 | 0 | null | [
"region:us"
] | 2023-11-09T06:13:03Z | 2023-11-09T06:06:42.000Z | 2023-11-09T06:06:42 | ---
dataset_info:
features:
- name: Human
dtype: string
- name: Assistant
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 292324
num_examples: 675
download_size: 103134
dataset_size: 292324
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rexroth-finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7315947413444519,
-0.2572125792503357,
0.12304887175559998,
-0.14810709655284882,
-0.5914466977119446,
-0.10244470089673996,
0.1370222121477127,
-0.24752487242221832,
0.9618476629257202,
0.42770251631736755,
-0.7971042394638062,
-0.617431104183197,
-0.4919687509536743,
-0.10177515447139... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alvarobartt/stack-exchange-paired-mini | alvarobartt | 2023-11-10T09:54:18Z | 22 | 0 | null | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"region:us"
] | 2023-11-10T09:54:18Z | 2023-11-10T09:51:03.000Z | 2023-11-10T09:51:03 | ---
dataset_info:
features:
- name: qid
dtype: int64
- name: question
dtype: string
- name: date
dtype: string
- name: metadata
sequence: string
- name: response_j
dtype: string
- name: response_k
dtype: string
splits:
- name: train
num_bytes: 335534
num_examples: 100
download_size: 105377
dataset_size: 335534
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
- question-answering
language:
- en
size_categories:
- n<1K
---
# StackExchange Paired Mini (100 samples)
This is a subset of the `StackExchange Paired` [lvwerra/stack-exchange-paired](https://hf.co/lvwerra/stack-exchange-paired) dataset.
## Disclaimer
For licensing or any other related detail, please refer to the original dataset linked above. | [
-0.41259515285491943,
-0.5080611109733582,
-0.04740157350897789,
0.06628453731536865,
-0.14406374096870422,
0.24386349320411682,
0.1580149084329605,
-0.10568980127573013,
0.9813444018363953,
0.8506143093109131,
-1.2434442043304443,
-0.20716503262519836,
-0.1794186383485794,
-0.120056085288... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kinianlo/MMTS | kinianlo | 2023-11-15T13:48:57Z | 22 | 0 | null | [
"region:us"
] | 2023-11-15T13:48:57Z | 2023-11-10T12:45:32.000Z | 2023-11-10T12:45:32 | ---
dataset_info:
- config_name: laion2B-en-words-count
features:
- name: count
dtype: int64
- name: word
dtype: string
splits:
- name: train
num_bytes: 2040588603
num_examples: 91658096
download_size: 1365127988
dataset_size: 2040588603
- config_name: shakespeare_laion2B-en_words
features:
- name: word
dtype: string
- name: word_lemma
dtype: string
- name: tag
dtype: string
- name: count_corpus_tag
dtype: int64
- name: count_corpus
dtype: int64
- name: count_laion2B-en
dtype: int64
- name: is_physical_entity
dtype: bool
- name: concreteness
dtype: float64
- name: concreteness_lemma
dtype: float64
splits:
- name: train
num_bytes: 1244660
num_examples: 18548
download_size: 0
dataset_size: 1244660
- config_name: shakespeare_words
features:
- name: word
dtype: string
- name: count_corpus
dtype: int64
- name: count_laion2B-en
dtype: int64
splits:
- name: train
num_bytes: 309689
num_examples: 11456
download_size: 193309
dataset_size: 309689
configs:
- config_name: laion2B-en-words-count
data_files:
- split: train
path: laion2B-en-words-count/train-*
- config_name: shakespeare_laion2B-en_words
data_files:
- split: train
path: shakespeare_laion2B-en_words/train-*
- config_name: shakespeare_words
data_files:
- split: train
path: shakespeare_words/train-*
---
# Dataset Card for "MMTS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.628580629825592,
-0.2117462456226349,
0.3455130457878113,
0.13069374859333038,
-0.25817570090293884,
-0.03651911020278931,
0.4269992411136627,
0.05148021876811981,
0.9224974513053894,
0.3966083526611328,
-1.139663577079773,
-0.6303923726081848,
-0.6632546186447144,
-0.18153412640094757,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/BC5CDR_train | hippocrates | 2023-11-13T19:42:42Z | 22 | 0 | null | [
"region:us"
] | 2023-11-13T19:42:42Z | 2023-11-10T20:51:30.000Z | 2023-11-10T20:51:30 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4158911
num_examples: 5228
- name: valid
num_bytes: 4200415
num_examples: 5330
- name: test
num_bytes: 4539438
num_examples: 5865
download_size: 3693403
dataset_size: 12898764
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
# Dataset Card for "BC5CDR_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7292607426643372,
0.02241200953722,
0.23141776025295258,
0.2741166651248932,
-0.22418013215065002,
-0.0045816898345947266,
0.28708672523498535,
-0.12217207252979279,
0.49899426102638245,
0.24724425375461578,
-0.8462346792221069,
-0.6895274519920349,
-0.5986230969429016,
-0.1989353746175... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
manishiitg/databricks-databricks-dolly-15k-hi | manishiitg | 2023-11-13T08:10:06Z | 22 | 0 | null | [
"region:us"
] | 2023-11-13T08:10:06Z | 2023-11-12T16:03:05.000Z | 2023-11-12T16:03:05 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: context_hindi
dtype: string
- name: response_hindi
dtype: string
- name: instruction_hindi
dtype: string
splits:
- name: train
num_bytes: 42833534
num_examples: 15006
download_size: 19698771
dataset_size: 42833534
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "databricks-databricks-dolly-15k-hi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3834720551967621,
-0.3317665457725525,
-0.07359383255243301,
0.586847186088562,
-0.3018742501735687,
0.11871500313282013,
0.6518606543540955,
-0.053227655589580536,
0.9003803133964539,
0.48633983731269836,
-0.8748647570610046,
-0.5839147567749023,
-0.5210304260253906,
-0.073406055569648... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arieg/bw_spec_cls_100_00_noise_200 | arieg | 2023-11-12T16:22:08Z | 22 | 0 | null | [
"region:us"
] | 2023-11-12T16:22:08Z | 2023-11-12T16:21:05.000Z | 2023-11-12T16:21:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '10'
'1': '140'
'2': '2'
'3': '5'
'4': '141'
'5': '190'
'6': '193'
'7': '194'
'8': '197'
'9': '200'
'10': '203'
'11': '204'
'12': '207'
'13': '210'
'14': '211'
'15': '212'
'16': '213'
'17': '255'
'18': '256'
'19': '368'
'20': '424'
'21': '534'
'22': '540'
'23': '546'
'24': '574'
'25': '615'
'26': '620'
'27': '621'
'28': '625'
'29': '666'
'30': '667'
'31': '676'
'32': '694'
'33': '695'
'34': '714'
'35': '715'
'36': '716'
'37': '718'
'38': '777'
'39': '814'
'40': '821'
'41': '822'
'42': '825'
'43': '853'
'44': '897'
'45': '995'
'46': '997'
'47': '998'
'48': '1039'
'49': '1040'
'50': '1082'
'51': '1083'
'52': '1102'
'53': '1193'
'54': '1195'
'55': '1196'
'56': '1197'
'57': '1270'
'58': '1276'
'59': '1277'
'60': '1278'
'61': '1417'
'62': '1427'
'63': '1443'
'64': '1482'
'65': '1510'
'66': '1544'
'67': '1642'
'68': '1644'
'69': '1649'
'70': '1661'
'71': '1663'
'72': '1666'
'73': '1673'
'74': '1680'
'75': '1681'
'76': '1682'
'77': '1683'
'78': '1684'
'79': '1685'
'80': '1686'
'81': '1687'
'82': '1688'
'83': '1689'
'84': '1701'
'85': '1702'
'86': '1703'
'87': '1704'
'88': '1706'
'89': '1720'
'90': '1732'
'91': '1733'
'92': '1735'
'93': '1736'
'94': '1883'
'95': '1891'
'96': '1924'
'97': '1925'
'98': '1929'
'99': '1930'
splits:
- name: train
num_bytes: 1159801335.0
num_examples: 20560
download_size: 603798465
dataset_size: 1159801335.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bw_spec_cls_100_00_noise_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6832284331321716,
-0.2737119495868683,
0.20180971920490265,
0.5191762447357178,
-0.15231294929981232,
-0.2831825613975525,
-0.11033841967582703,
-0.22382034361362457,
0.5782375335693359,
0.41648319363594055,
-1.0196768045425415,
-0.7498538494110107,
-0.3115062415599823,
-0.2598588168621... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
attila-balint-kul/electricity-demand | attila-balint-kul | 2023-11-16T08:33:45Z | 22 | 1 | null | [
"task_categories:time-series-forecasting",
"language:en",
"license:bsd-2-clause",
"energy",
"electricity",
"region:us"
] | 2023-11-16T08:33:45Z | 2023-11-15T09:10:27.000Z | 2023-11-15T09:10:27 | ---
license: bsd-2-clause
task_categories:
- time-series-forecasting
language:
- en
tags:
- energy
- electricity
pretty_name: Electricity Demand Dataset
configs:
- config_name: demand
data_files: "data/demand.parquet"
- config_name: metadata
data_files: "data/metadata.parquet"
- config_name: weather
data_files: "data/weather.parquet"
---
# Electricity Demand Dataset
<!-- Provide a quick summary of the dataset. -->
This dataset compiles and harmonizes a large body of smart meter data, enabling machine learning solutions that address climate challenges.
- **Curated by:** Attila Balint
- **License:** BSD 2-clause "Simplified" licence
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
This smart meter dataset primarily facilitates electricity demand forecasting.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset contains three main files.
- data/demand.parquet
- data/metadata.parquet
- data/weather.parquet
### data/demand.parquet
This file contains the electricity consumption values and has three columns.
- unique_id: a unique id of the meter
- timestamp: the timestamp of the recording in local time
- y: the electricity consumption in **kWh**
### data/metadata.parquet
This file collects the available metadata for the meters. It contains the following columns:
- unique_id: the unique id of the meter
- location_id: a unique id for the location
- latitude: approximate latitude of the building
- longitude: approximate longitude of the building
- building_type: type of the building (e.g. Residential, Hospital, etc.)
### data/weather.parquet
This file contains the collected weather data for all locations. The columns are the following:
- location_id: the unique id for the location
- timestamp: the timestamp of the observation in local time
- temperature: the temperature of air at 2m above the surface of land in **°C**
- dew_point: the temperature to which the air, at 2 metres above the surface of the Earth, would have to be cooled for saturation to occur in **°C**
- pressure: the pressure of the atmosphere at the surface of the Earth, adjusted to the height of mean sea level in **hPa**
- wind_speed: the absolute wind speed at a height of ten metres above the surface of the Earth, in **m/s**
- wind_gust: maximum 3 second wind at 10 m height as defined by WMO, in **m/s**
- wind_bearing: the direction the wind originates from, in **degrees**
- precipitation: the accumulated liquid and frozen water, comprising rain and snow, that falls to the Earth's surface in **mm**
- snow: the accumulated snow that falls to the Earth's surface in **mm**
- cloud_cover: the proportion of a grid box covered by cloud in fractions between 0 to 1
- solar_radiation: the amount of solar radiation that reaches a horizontal plane at the surface of the Earth in **W/m2**
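The three files link together via `unique_id` (demand → metadata) and `location_id` plus `timestamp` (metadata → weather). A minimal pure-Python sketch of that join; the record values below are illustrative, not taken from the dataset:

```python
# Illustrative records mimicking the three parquet files (values are made up).
demand = [
    {"unique_id": "m1", "timestamp": "2021-01-01T00:00", "y": 1.2},
    {"unique_id": "m1", "timestamp": "2021-01-01T01:00", "y": 0.9},
]
metadata = {"m1": {"location_id": "loc7", "building_type": "Residential"}}
weather = {
    ("loc7", "2021-01-01T00:00"): {"temperature": 3.5},
    ("loc7", "2021-01-01T01:00"): {"temperature": 2.8},
}

def join_demand_weather(demand, metadata, weather):
    """Attach building metadata and weather observations to each demand row."""
    joined = []
    for row in demand:
        meta = metadata[row["unique_id"]]
        obs = weather[(meta["location_id"], row["timestamp"])]
        joined.append({**row, **meta, **obs})
    return joined

rows = join_demand_weather(demand, metadata, weather)
print(rows[0]["temperature"])  # 3.5
```

In practice the same join runs over the real parquet files with a dataframe library, but the key relationships are exactly the ones sketched here.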
| [
-0.4781044125556946,
-0.47495517134666443,
0.4929734468460083,
0.42123672366142273,
-0.30579400062561035,
-0.18231450021266937,
0.09111442416906357,
-0.1426745355129242,
0.012741188518702984,
0.6051768064498901,
-0.3692255914211273,
-0.8165629506111145,
-0.3671009838581085,
-0.127975881099... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AdamCodd/no_robots-alpaca | AdamCodd | 2023-11-16T00:40:47Z | 22 | 2 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2203.02155",
"region:us"
] | 2023-11-16T00:40:47Z | 2023-11-16T00:24:13.000Z | 2023-11-16T00:24:13 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
- conversational
language:
- en
pretty_name: No Robots Alpaca
size_categories:
- 10K<n<100K
---
## No Robots: Alpaca edition
This dataset is a cleaned (missing/extra spaces...) and reformatted version of the [No Robots dataset](https://huggingface.co/datasets/HuggingFaceH4/no_robots) from HuggingFaceH4, adapted to conform with the Alpaca instruction set.
Notably, it diverges from the original dataset in the way the 'Chat' category is handled; it has been decomposed into single-turn conversations to align with Alpaca's limitations regarding multi-turn interactions. The dataset's IDs have been generated using the SHA256 algorithm. Furthermore, only the categories 'Classify', 'Summarize', 'Rewrite', 'Extract', and 'Chat' include an '<b>Input</b>' field.
-------------------------------------------
## Original README
# Dataset Card for No Robots 🙅♂️🤖
_Look Ma, an instruction dataset that wasn't generated by GPTs!_
## Dataset Description
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and is comprised mostly of single-turn instructions across the following categories:
| Category | Count |
|:-----------|--------:|
| Generation | 4560 |
| Open QA | 1240 |
| Brainstorm | 1120 |
| Chat | 850 |
| Rewrite | 660 |
| Summarize | 420 |
| Coding | 350 |
| Classify | 350 |
| Closed QA | 260 |
| Extract | 190 |
### Supported Tasks and Leaderboards
The No Robots dataset is designed for instruction fine-tuning of pretrained language models, and we recommend benchmarking against the following:
* [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench): a multi-turn benchmark spanning 80 dialogues and 10 domains.
* [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval): a single-turn benchmark which evaluates the performance of chat and instruct models against `text-davinci-003`.
Note that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the rankings exhibit various biases, including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than those of models trained on other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:
* [Chatbot Arena](https://chat.lmsys.org): a live, human evaluation of chat models in head-to-head comparisons.
### Languages
The data in No Robots are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of the `train_sft` or `test_sft` splits looks as follows:
```
{'prompt': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.',
'prompt_id': '2dc7ea89a2b6a2ed97d4eda07903162a801824261d3d3ae4dd2513db66fd79c8',
'messages': [{'content': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.',
'role': 'system'},
{'content': 'When was the Libary of Alexandria burned down?',
'role': 'user'},
{'content': "Umm, I-I think that was in 48 BC, b-but I'm not sure, I'm sorry.",
'role': 'assistant'},
{'content': 'Who is the founder of Coca-Cola?', 'role': 'user'},
{'content': "D-don't quote me on this, but I- it might be John Pemberton.",
'role': 'assistant'},
{'content': "When did Loyle Carner's debut album come out, and what was its name?",
'role': 'user'},
{'content': "I-It could have b-been on the 20th January of 2017, and it might be called Yesterday's Gone, b-but I'm probably wrong.",
'role': 'assistant'}],
'category': 'Chat'}
```
### Data Fields
The data fields are as follows:
* `prompt`: Describes the task the model should perform.
* `prompt_id`: A unique ID for the prompt.
* `messages`: An array of messages, where each message indicates the role (system, user, assistant) and the content.
* `category`: Which category the example belongs to (e.g. `Chat` or `Coding`).
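The multi-turn `messages` structure above is what the Alpaca edition described earlier flattens into single-turn records with SHA256 ids. A minimal sketch of one such flattening; the exact pairing and hashing choices here are illustrative assumptions, not the actual conversion script:

```python
import hashlib

def to_single_turn(example):
    """Flatten a multi-turn `messages` list into single-turn Alpaca-style
    records, carrying any system prompt along as the input field.
    The pairing logic (each user turn with the assistant turn that follows
    it) and hashing the instruction text are assumptions for illustration."""
    msgs = example["messages"]
    system = next((m["content"] for m in msgs if m["role"] == "system"), "")
    pairs = []
    for i, m in enumerate(msgs):
        if m["role"] == "user" and i + 1 < len(msgs) and msgs[i + 1]["role"] == "assistant":
            instruction = m["content"]
            pairs.append({
                "id": hashlib.sha256(instruction.encode()).hexdigest(),
                "instruction": instruction,
                "input": system,
                "output": msgs[i + 1]["content"],
                "category": example["category"],
            })
    return pairs

example = {
    "category": "Chat",
    "messages": [
        {"role": "system", "content": "Bunny is a chatbot that stutters."},
        {"role": "user", "content": "When was the Library of Alexandria burned down?"},
        {"role": "assistant", "content": "Umm, 48 BC, I think."},
    ],
}
print(len(to_single_turn(example)))  # 1
```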
### Data Splits
| | train_sft | test_sft |
|---------------|------:| ---: |
| no_robots | 9500 | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{no_robots,
author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf},
title = {No Robots},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}}
}
``` | [
-0.38881900906562805,
-0.9846829771995544,
0.2620929479598999,
0.10056918114423752,
0.11582286655902863,
0.05331931635737419,
-0.15978406369686127,
-0.3449712097644806,
0.4810309112071991,
0.746827244758606,
-0.9250923991203308,
-0.7447549700737,
-0.4455385208129883,
0.07483571767807007,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
centroIA/MistralLlamaformat1 | centroIA | 2023-11-23T11:38:53Z | 22 | 0 | null | [
"region:us"
] | 2023-11-23T11:38:53Z | 2023-11-23T11:38:52.000Z | 2023-11-23T11:38:52 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2638662
num_examples: 967
download_size: 702207
dataset_size: 2638662
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Frrrrrrrrank/process_dateset | Frrrrrrrrank | 2023-11-23T15:42:25Z | 22 | 0 | null | [
"region:us"
] | 2023-11-23T15:42:25Z | 2023-11-23T15:42:04.000Z | 2023-11-23T15:42:04 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/EverythingIsAllYouNeed0.25_stringified-jsonifize | jsonifize | 2023-11-24T14:05:20Z | 22 | 0 | null | [
"region:us"
] | 2023-11-24T14:05:20Z | 2023-11-24T14:04:04.000Z | 2023-11-24T14:04:04 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/GPT4-LLM-Cleaned_stringified-jsonifize | jsonifize | 2023-11-24T14:05:55Z | 22 | 0 | null | [
"region:us"
] | 2023-11-24T14:05:55Z | 2023-11-24T14:05:51.000Z | 2023-11-24T14:05:51 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lukarape/erebuni_MLZ | lukarape | 2023-11-24T18:31:03Z | 22 | 0 | null | [
"region:us"
] | 2023-11-24T18:31:03Z | 2023-11-24T18:26:27.000Z | 2023-11-24T18:26:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: phone
dtype: string
- name: id
dtype: string
- name: department
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 795763423.0
num_examples: 327
download_size: 788849966
dataset_size: 795763423.0
---
# Dataset Card for "erebuni_MLZ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7474523186683655,
-0.4632089138031006,
0.04878252372145653,
0.354175865650177,
-0.1744638830423355,
-0.21110881865024567,
0.07231008261442184,
-0.2081363946199417,
0.9131386280059814,
0.47814735770225525,
-0.9517441391944885,
-0.9697423577308655,
-0.3636164665222168,
-0.1978139877319336... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ju-bezdek/conll2003-SK-NER | ju-bezdek | 2023-03-21T08:13:05Z | 21 | 0 | null | [
"task_categories:other",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
... | 2023-03-21T08:13:05Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- sk
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|conll2003
task_categories:
- other
task_ids:
- named-entity-recognition
- part-of-speech
pretty_name: conll-2003-sk-ner
tags:
- structure-prediction
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
## Dataset Description
This is a translated version of the original CoNLL2003 dataset (translated from English to Slovak via Google Translate). Annotation was done mostly automatically with word-matching scripts. Records where some tags could not be matched were annotated manually (10%). Unlike the original CoNLL2003 dataset, this one contains only NER tags.
- **Point of Contact:** [@ju-bezdek](https://github.com/ju-bezdek)
### Supported Tasks and Leaderboards
NER
labels:
- 0: O
- 1: B-PER
- 2: I-PER
- 3: B-ORG
- 4: I-ORG
- 5: B-LOC
- 6: I-LOC
- 7: B-MISC
- 8: I-MISC
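The labels above follow the standard BIO scheme, so predicted tag ids can be grouped back into entity spans. A minimal sketch using the id-to-tag mapping from this card; the Slovak example sentence and its tags are made up for illustration:

```python
# Mapping taken from the label list in this card.
ID2TAG = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG", 4: "I-ORG",
          5: "B-LOC", 6: "I-LOC", 7: "B-MISC", 8: "I-MISC"}

def decode_entities(tokens, tag_ids):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    entities, current = [], None
    for token, tag_id in zip(tokens, tag_ids):
        tag = ID2TAG[tag_id]
        if tag.startswith("B-"):           # a new entity starts here
            if current:
                entities.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)       # continuation of the open entity
        else:                              # O tag or inconsistent I- tag
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(toks)) for etype, toks in entities]

# Illustrative sentence; the tags are invented for the example.
print(decode_entities(["Peter", "Sagan", "pochádza", "zo", "Žiliny"],
                      [1, 2, 0, 0, 5]))
# [('PER', 'Peter Sagan'), ('LOC', 'Žiliny')]
```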
### Languages
sk
## Dataset Structure
### Data Splits
train, test, val
## Dataset Creation
### Source Data
https://huggingface.co/datasets/conll2003
### Annotations
#### Annotation process
- Machine Translation
- Machine pairing of tags via reverse translation and hardcoded rules (including phrase regex matching, etc.)
- Manual annotation of records that couldn't be automatically matched
| [
-0.6588134765625,
-0.5325102210044861,
-0.004715627990663052,
0.3909912705421448,
-0.21295587718486786,
-0.03476004675030708,
-0.34802886843681335,
-0.5163840055465698,
0.4198262095451355,
0.5661260485649109,
-1.0053393840789795,
-0.997778058052063,
-0.46915239095687866,
0.4803714156150818... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
k-halid/ar | k-halid | 2021-02-05T16:05:32Z | 21 | 0 | null | [
"region:us"
] | 2021-02-05T16:05:32Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrm8488/fake-news | mrm8488 | 2021-10-15T16:06:35Z | 21 | 0 | null | [
"region:us"
] | 2021-10-15T16:06:35Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nickmuchi/financial-classification | nickmuchi | 2023-01-27T23:44:03Z | 21 | 7 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:1K<n<10K",
"language:en",
"finance",
"region:us"
] | 2023-01-27T23:44:03Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
train-eval-index:
- config: sentences_50agree
- task: text-classification
- task_ids: multi_class_classification
- splits:
eval_split: train
- col_mapping:
sentence: text
label: target
size_categories:
- 1K<n<10K
tags:
- finance
---
## Dataset Creation
This [dataset](https://huggingface.co/datasets/nickmuchi/financial-classification) combines financial phrasebank dataset and a financial text dataset from [Kaggle](https://www.kaggle.com/datasets/percyzheng/sentiment-classification-selflabel-dataset).
Given that the financial phrasebank dataset does not have a validation split, I thought this might help validate finance models and also capture the impact of COVID on financial earnings through the more recent Kaggle dataset.
-0.15078681707382202,
-0.41287508606910706,
0.07831347733736038,
0.5407782793045044,
-0.005424901377409697,
0.3100090026855469,
0.2423296421766281,
-0.32412412762641907,
0.4803519546985626,
0.5528498291969299,
-0.5538767576217651,
-0.5152615308761597,
-0.3993942439556122,
-0.13893623650074... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sarulab-speech/bvcc-voicemos2022 | sarulab-speech | 2022-02-25T06:26:53Z | 21 | 0 | null | [
"region:us"
] | 2022-02-25T06:26:53Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teven/prompted_examples | teven | 2021-12-06T15:54:19Z | 21 | 0 | null | [
"region:us"
] | 2021-12-06T15:54:19Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xiaobendanyn/tacred | xiaobendanyn | 2021-10-29T09:23:40Z | 21 | 4 | null | [
"region:us"
] | 2021-10-29T09:23:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion1B-nolang | laion | 2022-03-09T15:04:35Z | 21 | 4 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-09T15:04:35Z | 2022-03-09T14:25:39.000Z | 2022-03-09T14:25:39 | ---
license: cc-by-4.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/pile-curse-chunk-19 | tomekkorbak | 2022-03-18T22:06:32Z | 21 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:32Z | 2022-03-18T22:06:13.000Z | 2022-03-18T22:06:13 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/pile-curse-chunk-23 | tomekkorbak | 2022-03-18T22:06:32Z | 21 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:32Z | 2022-03-18T22:06:14.000Z | 2022-03-18T22:06:14 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/pile-curse-chunk-29 | tomekkorbak | 2022-03-18T22:06:32Z | 21 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:32Z | 2022-03-18T22:06:14.000Z | 2022-03-18T22:06:14 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null |