id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ID3/wikilibros_artesculinarias_recetas | ID3 | 2023-03-26T03:33:17Z | 16 | 0 | null | [
"language:es",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-03-26T03:33:17Z | 2023-03-26T03:25:48.000Z | 2023-03-26T03:25:48 | ---
dataset_info:
features:
- name: comensales
dtype: string
- name: tiempo
dtype: string
- name: dificultad
dtype: string
- name: ingredientes
sequence: string
- name: procedimiento
sequence: string
- name: titulo
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 727791
num_examples: 753
- name: validation
num_bytes: 78214
num_examples: 84
download_size: 444915
dataset_size: 806005
license: cc-by-sa-3.0
language:
- es
pretty_name: Recetas de cocina Wikilibros
---
# Dataset Card for "wikilibros_artesculinarias_recetas"
## Dataset Description
Subset of cooking recipes extracted from [Artes Culinarias](https://es.wikibooks.org/wiki/Artes_culinarias/Recetas) | [
-0.2993074059486389,
-0.294105589389801,
-0.2239087074995041,
-0.03848598152399063,
-0.8071568608283997,
0.04417124763131142,
0.3356287181377411,
-0.1856910139322281,
0.9765153527259827,
0.5032286047935486,
-0.8870790004730225,
-0.875576913356781,
-0.40120354294776917,
0.3020652234554291,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
suolyer/pile_openwebtext2 | suolyer | 2023-03-27T03:03:15Z | 16 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-03-27T03:03:15Z | 2023-03-26T16:38:21.000Z | 2023-03-26T16:38:21 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/alpaca_vi | vietgpt | 2023-11-03T21:23:33Z | 16 | 0 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:vi",
"SFT",
"region:us"
] | 2023-11-03T21:23:33Z | 2023-03-27T18:32:58.000Z | 2023-03-27T18:32:58 | ---
language:
- vi
size_categories:
- 10K<n<100K
task_categories:
- text-generation
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 26909147
num_examples: 51548
download_size: 13361628
dataset_size: 26909147
tags:
- SFT
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cartesinus/leyzer-fedcsis-translated | cartesinus | 2023-03-27T21:52:34Z | 16 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:pl",
"license:cc-by-4.0",
"natural-language-understanding",
"region:us"
] | 2023-03-27T21:52:34Z | 2023-03-27T21:51:34.000Z | 2023-03-27T21:51:34 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- pl
tags:
- natural-language-understanding
size_categories:
- 10K<n<100K
---
# Leyzer: A Dataset for Multilingual Virtual Assistants
Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language understanding (NLU) models and the strategies of localization of
virtual assistants. It consists of 20 domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of samples, ranging from 1 to 672
sentences per intent. For more stats please refer to wiki.
| [
-0.6054393649101257,
-0.7386212944984436,
0.448956698179245,
0.4061356484889984,
0.13448503613471985,
0.21309572458267212,
0.12839317321777344,
-0.37483853101730347,
0.31953004002571106,
0.5464681386947632,
-0.9395983815193176,
-0.7263869643211365,
-0.3425629138946533,
0.4897688925266266,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/alpaca_en | vietgpt | 2023-11-03T21:23:19Z | 16 | 1 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"SFT",
"region:us"
] | 2023-11-03T21:23:19Z | 2023-03-29T15:52:38.000Z | 2023-03-29T15:52:38 | ---
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- text-generation
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 20207911
num_examples: 51848
download_size: 11466948
dataset_size: 20207911
tags:
- SFT
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Francesco/thermal-dogs-and-people-x6ejw | Francesco | 2023-03-30T09:19:15Z | 16 | 0 | null | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | 2023-03-30T09:19:15Z | 2023-03-30T09:18:56.000Z | 2023-03-30T09:18:56 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': thermal-dogs-n-people
'1': dog
'2': person
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: thermal-dogs-and-people-x6ejw
tags:
- rf100
---
# Dataset Card for thermal-dogs-and-people-x6ejw
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/thermal-dogs-and-people-x6ejw
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
thermal-dogs-and-people-x6ejw
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
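As a minimal sketch (not part of the original card), here is how the `bbox` values from the instance above can be converted out of the COCO `[x_min, y_min, width, height]` convention linked above; the helper name is illustrative:

```python
# The card's bboxes use the COCO convention [x_min, y_min, width, height];
# converting to corner format [x_min, y_min, x_max, y_max] is a common first step.
bboxes = [
    [302.0, 109.0, 73.0, 52.0],   # values taken from the instance above
    [810.0, 100.0, 57.0, 28.0],
]

def coco_to_corners(box):
    x, y, w, h = box
    return [x, y, x + w, y + h]

print(coco_to_corners(bboxes[0]))  # [302.0, 109.0, 375.0, 161.0]
```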
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/thermal-dogs-and-people-x6ejw
### Citation Information
```
@misc{ thermal-dogs-and-people-x6ejw,
title = { thermal dogs and people x6ejw Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/thermal-dogs-and-people-x6ejw } },
url = { https://universe.roboflow.com/object-detection/thermal-dogs-and-people-x6ejw },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | [
-0.7024958729743958,
-0.2500540018081665,
0.1284792423248291,
-0.16775615513324738,
-0.32823827862739563,
-0.17647041380405426,
-0.11385740339756012,
-0.650445282459259,
0.28785693645477295,
0.3127567172050476,
-0.6038893461227417,
-0.9302768111228943,
-0.3446741998195648,
0.23883894085884... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kevin-M-Smith/flint_images | Kevin-M-Smith | 2023-04-03T00:55:04Z | 16 | 0 | null | [
"region:us"
] | 2023-04-03T00:55:04Z | 2023-04-03T00:53:18.000Z | 2023-04-03T00:53:18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': clutter
'1': email
'2': email-squished
'3': handwritten-document
'4': spreadsheet
'5': typeset-document
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 178391248.0
num_examples: 4965
- name: test
num_bytes: 42819947.0
num_examples: 1242
download_size: 221040943
dataset_size: 221211195.0
---
# Dataset Card for "flint_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.532364010810852,
-0.013935687020421028,
0.3877037763595581,
0.332468718290329,
-0.43697383999824524,
0.05025233328342438,
0.4912499785423279,
-0.29758426547050476,
0.7876496911048889,
0.29906633496284485,
-0.7049325704574585,
-0.8298531770706177,
-0.6858646273612976,
-0.1617825180292129... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kevin-M-Smith/flint_images_600_600 | Kevin-M-Smith | 2023-04-08T14:15:38Z | 16 | 0 | null | [
"region:us"
] | 2023-04-08T14:15:38Z | 2023-04-08T14:10:31.000Z | 2023-04-08T14:10:31 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': clutter
'1': email
'2': email-squished
'3': handwritten-document
'4': spreadsheet
'5': typeset-document
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 648700686.0
num_examples: 4965
- name: test
num_bytes: 159791287.0
num_examples: 1242
download_size: 807442120
dataset_size: 808491973.0
---
# Dataset Card for "flint_images_600_600"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5005266070365906,
0.08017342537641525,
0.3959548771381378,
0.2856366038322449,
-0.4013763964176178,
-0.0656009316444397,
0.5559266209602356,
-0.19060391187667847,
0.8474408388137817,
0.33897173404693604,
-0.619997501373291,
-0.7898963689804077,
-0.6778687834739685,
-0.10552093386650085,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kevin-M-Smith/flint_images_900_900 | Kevin-M-Smith | 2023-04-08T14:36:28Z | 16 | 0 | null | [
"region:us"
] | 2023-04-08T14:36:28Z | 2023-04-08T14:26:01.000Z | 2023-04-08T14:26:01 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': clutter
'1': email
'2': email-squished
'3': handwritten-document
'4': spreadsheet
'5': typeset-document
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 1326456197.0
num_examples: 4965
- name: test
num_bytes: 327048562.0
num_examples: 1242
download_size: 1650313094
dataset_size: 1653504759.0
---
# Dataset Card for "flint_images_900_900"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44037553668022156,
-0.005373908206820488,
0.39230313897132874,
0.3604835867881775,
-0.3880317807197571,
0.013330250047147274,
0.5997703075408936,
-0.1633683443069458,
0.819421112537384,
0.3585171699523926,
-0.6150554418563843,
-0.7999459505081177,
-0.6898961663246155,
-0.130465313792228... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-somos-nlp-2023/podcasts-ner-es | hackathon-somos-nlp-2023 | 2023-04-09T23:40:50Z | 16 | 9 | null | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:es",
"license:mit",
"region:us"
] | 2023-04-09T23:40:50Z | 2023-04-08T23:40:02.000Z | 2023-04-08T23:40:02 | ---
dataset_info:
features:
- name: text
dtype: string
- name: annotation
list:
- name: end
dtype: int64
- name: label
dtype: string
- name: start
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 43389.8358778626
num_examples: 209
- name: test
num_bytes: 11003.164122137405
num_examples: 53
download_size: 42448
dataset_size: 54393
task_categories:
- token-classification
language:
- es
size_categories:
- n<1K
license: mit
---
# Dataset Card for "podcasts-ner-es"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Team members](#team-members)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset comprises small text snippets extracted from the "Deforme Semanal" podcast,
accompanied by annotations that identify the presence of a predetermined set of entities.
The purpose of this dataset is to facilitate Named Entity Recognition (NER) tasks.
The dataset was created to aid in the identification of entities such as famous people, books, or films in podcasts.
The audio was first transcribed, then annotated with GPT-3 and curated with Argilla.
The dataset is in Spanish, covering mostly topics such as love, feminism, and art, which are the main themes of the podcast.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
The dataset is in Spanish and the language used is primarily informal.
It is important to note that the language may include aggressive or offensive content.
## Dataset Structure
### Data Instances
```
{
  "text": "Tengo 39 años, pues, ya veré cuándo yo quiero dejar de comer ternera, está mal, porque hay sobre explotación y todo esto, muy mal.",
  "annotation": [ { "end": 13, "label": "DATES", "start": 6 } ],
  "id": "53c4748e-dbd2-4cf5-946f-d134b0bf6155"
}
```
### Data Fields
`text`: Snippet of text of no more than 512 characters extracted from a podcast episode.
`id`: Unique identification number for each instance in the dataset.
`annotation`: List of dictionary-like entries with the following fields:
- `end`: end character of the entity occurrence in the text.
- `start`: start character of the entity occurrence in the text.
- `label`: label for the entity from the predefined set of entities. The label of the entities is one of:
'people', 'products', 'books', 'animals', 'organizations', 'topics', 'dates', 'places', 'artista', 'objects','songs', and 'films'.
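A minimal sketch (using the instance from "Data Instances" above) of how the `start`/`end` offsets index into `text`:

```python
# Recover the annotated span from its character offsets, as described above.
# The values are copied from the example instance in this card.
text = "Tengo 39 años, pues, ya veré cuándo yo quiero dejar de comer ternera, está mal, porque hay sobre explotación y todo esto, muy mal."
annotation = [{"end": 13, "label": "DATES", "start": 6}]

for ann in annotation:
    span = text[ann["start"]:ann["end"]]
    print(ann["label"], "->", span)  # DATES -> 39 años
```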
### Data Splits
The dataset was shuffled and split using the `train_test_split` function from the Hugging Face datasets library.
The split was made with a train size of 0.8 and a seed of 42.
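The card uses `datasets.Dataset.train_test_split`; as a hedged illustration, a pure-Python sketch of the same shuffle-and-split procedure (the placeholder snippets stand in for the real data, and the sizes match the card's 209/53 split):

```python
import random

# Pure-Python sketch of the 80/20 shuffled split described above
# (the card itself uses Hugging Face's train_test_split with seed 42).
examples = [f"snippet {i}" for i in range(262)]  # 262 = 209 train + 53 test

rng = random.Random(42)
rng.shuffle(examples)

n_train = int(0.8 * len(examples))  # 209
train, test = examples[:n_train], examples[n_train:]
print(len(train), len(test))  # 209 53
```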
## Dataset Creation
### Curation Rationale
We created this dataset with the aim of making the information from our favorite podcasts more accessible, as retrieving information from audio formats can be challenging.
We chose to focus on the Named Entity Recognition (NER) task as it was relatively easy to annotate and validate.
### Source Data
#### Initial Data Collection and Normalization
We collected the data from a playlist on YouTube containing approximately 15 episodes of the "Deforme Semanal" podcast.
You can find the playlist at this [link](https://www.youtube.com/playlist?list=PLLbN7SMQhMVZoXhtQ00AyebQE_-ttDrs9).
We then transcribed the audio stream using OpenAI's Whisper (medium size) and split the resulting text files
into chunks of less than 512 characters.
### Annotations
#### Annotation process
To annotate the texts, we used OpenAI's API and GPT-3, with the following prompt:
```
Perform named entity recognition in Spanish. The classes are books, films, video games, songs, places, dates, topics, organizations, and people. The output should follow the format:
[{'class': 'people', 'text': 'name of the person'}, {'class': 'books', 'start': 'name of the book'}]
Sentence:
```
Finally, to ensure the quality of the dataset, we validated the annotations using Argilla by checking that the tokens were classified
correctly.
## Considerations for Using the Data
### Discussion of Biases
The dataset was obtained from the "Deforme Semanal" podcast, which primarily focuses on art, feminism, and culture.
As a result, the data is directly related to the topics and individuals discussed in these contexts. Additionally,
the language used in the podcast is informal and can be aggressive or offensive at times, which may be reflected in the dataset.
Although we attempted to minimize these biases during the validation process, their effectiveness is likely limited.
### Other Known Limitations
One issue that we have encountered with the token/entity data is that there can be some ambiguity in terms of their distinctions.
In some cases, it may not be clear how to differentiate between two tokens or entities, which can impact the accuracy
and effectiveness of models trained on this data.
Furthermore, the dataset size is relatively small, which can pose a challenge when it comes to training machine learning models.
With a limited amount of data, it can be difficult to capture the full range of variations and patterns in the data,
and overfitting can become a concern. This is especially true when dealing with complex models that require a large
amount of data to train effectively.
## Team members
[David Mora](https://huggingface.co/DavidFM43)
[Sergio Perez](https://huggingface.co/sergiopperez)
[Alberto Fernandez](https://huggingface.co/AlbertoFH98)
| [
-0.7191716432571411,
-0.3934639096260071,
0.05057515203952789,
0.29333174228668213,
-0.26542389392852783,
-0.028984172269701958,
-0.45919299125671387,
-0.48379433155059814,
0.7736930847167969,
0.39510825276374817,
-0.750945508480072,
-0.7979781627655029,
-0.7386577725410461,
0.375887930393... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
treadon/dolly-15k | treadon | 2023-04-14T14:46:03Z | 16 | 1 | null | [
"license:cc-by-3.0",
"region:us"
] | 2023-04-14T14:46:03Z | 2023-04-14T14:41:15.000Z | 2023-04-14T14:41:15 | ---
license: cc-by-3.0
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 12208856
num_examples: 14863
- name: validation
num_bytes: 117314
num_examples: 151
download_size: 7866269
dataset_size: 12326170
---
# Dataset Card for "dolly-15k"
# Summary
This is the dataset supplied by Databricks for training Dolly V2. This set is split 99% training / 1% validation, should you want to set aside some records for evaluation purposes.
## Special thanks to ❤️ Databricks for creating and making this set available.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.08544944226741791,
-0.3125571310520172,
-0.30276939272880554,
0.3752512037754059,
-0.37881991267204285,
-0.09470384567975998,
0.4740343391895294,
-0.09499286115169525,
0.41059350967407227,
0.636627733707428,
-1.0537488460540771,
-0.39705750346183777,
-0.5975306034088135,
-0.124017268419... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/sydt | mstz | 2023-04-18T08:27:15Z | 16 | 0 | null | [
"task_categories:tabular-classification",
"language:en",
"sydt",
"tabular_classification",
"binary_classification",
"synthetic",
"region:us"
] | 2023-04-18T08:27:15Z | 2023-04-18T08:25:12.000Z | 2023-04-18T08:25:12 | ---
language:
- en
tags:
- sydt
- tabular_classification
- binary_classification
- synthetic
pretty_name: Sydt
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- sydt
---
# Sydt
Synthetic dataset. | [
-0.24664688110351562,
-0.4947572350502014,
0.24466365575790405,
0.493059903383255,
-0.44495201110839844,
0.3745419979095459,
0.24726687371730804,
0.11101479828357697,
0.6884315013885498,
0.5983927249908447,
-0.9649308323860168,
-0.373629093170166,
-0.3727318048477173,
0.35848262906074524,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
checkai/instruction-poems | checkai | 2023-04-19T03:02:09Z | 16 | 5 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-04-19T03:02:09Z | 2023-04-19T00:36:02.000Z | 2023-04-19T00:36:02 | ---
license: cc-by-4.0
---
Poem dataset to be used for instruction fine-tuning | [
-0.0034147948026657104,
-0.5892324447631836,
0.10190417617559433,
0.4606592655181885,
-0.3787189722061157,
-0.5889772772789001,
-0.4831981658935547,
-0.24741752445697784,
-0.20728842914104462,
0.7366107106208801,
-0.6673414707183838,
-0.7878087162971497,
-0.7099793553352356,
-0.12205068022... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
monet-joe/cv_backbones | monet-joe | 2023-11-22T16:03:21Z | 16 | 1 | null | [
"task_categories:image-classification",
"task_categories:feature-extraction",
"size_categories:n<1K",
"language:en",
"license:mit",
"code",
"region:us"
] | 2023-11-22T16:03:21Z | 2023-04-27T17:42:10.000Z | 2023-04-27T17:42:10 | ---
license: mit
task_categories:
- image-classification
- feature-extraction
language:
- en
tags:
- code
pretty_name: Vi-Backbones
size_categories:
- n<1K
viewer: false
---
# Dataset Card for "monet-joe/cv_backbones"
## Usage
```python
from datasets import load_dataset
backbones_on_in1k_v1 = load_dataset("monet-joe/cv_backbones", split="IMAGENET1K_V1")
backbones_on_in1k_v2 = load_dataset("monet-joe/cv_backbones", split="IMAGENET1K_V2")
for weights in backbones_on_in1k_v1:
print(weights)
for weights in backbones_on_in1k_v2:
print(weights)
```
## Reference
```
https://pytorch.org/vision/main/_modules
``` | [
-0.1139054223895073,
0.147318497300148,
-0.13080629706382751,
0.17534402012825012,
-0.5958648324012756,
-0.046848151832818985,
0.37867531180381775,
-0.037836670875549316,
0.596027135848999,
0.6125301718711853,
-0.6557216644287109,
-0.5258333086967468,
-0.5280818939208984,
-0.00367863220162... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SJTU-CL/ArguGPT | SJTU-CL | 2023-05-02T08:44:22Z | 16 | 1 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"AIGC for education",
"arxiv:2304.07666",
"region:us"
] | 2023-05-02T08:44:22Z | 2023-05-02T08:11:18.000Z | 2023-05-02T08:11:18 | ---
license: cc
task_categories:
- text-classification
language:
- en
tags:
- AIGC for education
size_categories:
- 1K<n<10K
---
# Machine-essays generation pipeline
Please check out our [github repo](https://github.com/huhailinguist/ArguGPT).
This document only introduces how we collected **machine-generated essays**.
| model | timestamp | # total | # valid | # short | # repetitive | # overlapped |
|------------------|-------------|---------|---------|---------|--------------|--------------|
| gpt2-xl | Nov, 2019 | 4,573 | 563 | 1,637 | 0 | 2,373 |
| text-babbage-001 | April, 2022 | 917 | 479 | 181 | 240 | 17 |
| text-curie-001 | April, 2022 | 654 | 498 | 15 | 110 | 31 |
| text-davinci-001 | April, 2022 | 632 | 493 | 1 | 41 | 97 |
| text-davinci-002 | April, 2022 | 621 | 495 | 1 | 56 | 69 |
| text-davinci-003 | Nov, 2022 | 1,130 | 1,090 | 0 | 30 | 10 |
| gpt-3.5-turbo | Mar, 2023 | 1,122 | 1,090 | 0 | 4 | 28 |
| total | - | 9,647 | 4,708 | 1,835 | 481 | 2,625 |
## Models
We chose 7 models from GPT family: 1) `gpt2-xl`, 2) `text-babbage-001`, 3) `text-curie-001`, 4) `text-davinci-001`, 5) `text-davinci-002`,
6) `text-davinci-003`, and 7) `gpt-3.5-turbo`.
More information about these models can be seen in [OpenAI documentation](https://platform.openai.com/docs/model-index-for-researchers).
For WECCL and TOEFL, we used all 7 models to generate argumentative essays.
As for GRE, of which the writing task is more difficult than WECCL and TOEFL, we only used `text-davinci-003` and `gpt-3.5-turbo`.
**Notes**: Since `gpt2-xl` cannot respond to prompts the way InstructGPT and other later models can,
we fed `gpt2-xl` the prompt along with one beginning sentence randomly extracted from human essays for continuous writing.
Therefore, the first sentence of each essay generated by `gpt2-xl` is actually human-authored.
## Prompts selection
Our writing topics are collected from human-WECCL, human-TOEFL, and human-GRE.
In a writing task, a topic statement is presented for students (or machines) to attack or defend.
The topic statement here is referred to as `ESSAY_PROMPT`, and our added instructions for the machine are referred to as `ADDED_PROMPT`.
Therefore, our prompt format is as follows: `ESSAY_PROMPT` + `ADDED_PROMPT`.
For instance,
- `ESSAY_PROMPT`: It is better to have broad knowledge of many academic subjects than to specialize in one specific subject.
- `ADDED_PROMPT`: Do you agree or disagree? Use specific reasons and examples to support your answer. Write an essay of roughly {300/400/500} words.
We asked the machine to write 300 words for writing tasks in WECCL, 400 for TOEFL, and 500 for GRE.
## Essays filtering, preprocessing, and automated scoring
We then filtered out the essays that are short, repetitive and overlapped.
- Short: we set the threshold of 50 words for `gpt2-xl`, and 100 words for others.
- Repetitive: 40% of sentences are *similar*.
- Overlapped: 40% of sentences are *similar* with any other essay already generated.
- Definition of *similar*: "I like a dog." and "I don't like a cat." have 3 words in common. The similarity therefore is 6 / 9 = 0.67. If the similarity is greater than 0.8, the two sentences are *similar*.
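A sketch of that similarity check; the formula 2·|shared words| / (total words) is inferred from the 6/9 example above, and the whitespace tokenization is an assumption:

```python
# Two sentences count as *similar* when 2 * shared / total exceeds 0.8.
def similarity(a: str, b: str) -> float:
    words_a = a.rstrip(".").lower().split()
    words_b = b.rstrip(".").lower().split()
    shared = len(set(words_a) & set(words_b))
    return 2 * shared / (len(words_a) + len(words_b))

s = similarity("I like a dog.", "I don't like a cat.")
print(round(s, 2))  # 0.67, i.e. 6 / 9 as in the example above
```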
We deleted "As an AI model, ..." generated by gpt-3.5-turbo.
And we used [YouDao automated scoring system](https://ai.youdao.com/) to score all the essays,
and categorized them into low, mid, and high levels.
## Citation
Please cite our work [arXiv:2304.07666](https://arxiv.org/abs/2304.07666) as
```
@misc{liu2023argugpt,
title={ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models},
author={Yikang Liu and Ziyin Zhang and Wanyang Zhang and Shisen Yue and Xiaojing Zhao and Xinyuan Cheng and Yiwen Zhang and Hai Hu},
year={2023},
eprint={2304.07666},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.6113457083702087,
-0.9521952271461487,
0.7722107172012329,
-0.1524999588727951,
-0.11465130746364594,
-0.08494666963815689,
0.09374112635850906,
-0.33254581689834595,
-0.18904997408390045,
0.42784547805786133,
-0.4370168447494507,
-0.47245457768440247,
-0.6454206705093384,
0.13556422293... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
divers/jobsedcription-requirement | divers | 2023-05-05T17:50:23Z | 16 | 4 | null | [
"region:us"
] | 2023-05-05T17:50:23Z | 2023-05-05T17:50:17.000Z | 2023-05-05T17:50:17 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: job_description
dtype: string
- name: job_requirements
dtype: string
- name: unknown
dtype: float64
- name: __index_level_0__
dtype: float64
splits:
- name: train
num_bytes: 25599853
num_examples: 4551
download_size: 12633905
dataset_size: 25599853
---
# Dataset Card for "jobsedcription-requirement"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.35083043575286865,
0.07042229920625687,
0.19617629051208496,
0.28278473019599915,
-0.17059975862503052,
-0.22895395755767822,
0.2500000596046448,
-0.34392377734184265,
0.9690059423446655,
0.718428373336792,
-1.0738388299942017,
-0.7099791765213013,
-0.6144709587097168,
-0.23120836913585... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mauregato/affectnet_short | Mauregato | 2023-05-06T19:55:41Z | 16 | 1 | null | [
"region:us"
] | 2023-05-06T19:55:41Z | 2023-05-06T19:54:49.000Z | 2023-05-06T19:54:49 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': anger
'1': surprise
'2': contempt
'3': happy
'4': neutral
'5': fear
'6': sad
'7': disgust
splits:
- name: train
num_bytes: 432233297.875
num_examples: 23233
- name: val
num_bytes: 108197028.875
num_examples: 5809
download_size: 540092363
dataset_size: 540430326.75
---
# Dataset Card for "affectnet_short"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5724591016769409,
-0.22569124400615692,
0.22839076817035675,
0.2577754855155945,
-0.31532466411590576,
-0.24576987326145172,
0.06918133050203323,
-0.14485126733779907,
1.401308536529541,
0.2771391272544861,
-0.7702322006225586,
-0.6356256008148193,
-0.708807647228241,
-0.177018582820892... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biglam/on_the_books | biglam | 2023-06-07T08:44:39Z | 16 | 1 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-3.0",
"lam",
"legal",
"region:us"
] | 2023-06-07T08:44:39Z | 2023-05-12T14:54:18.000Z | 2023-05-12T14:54:18 | ---
license: cc-by-3.0
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: string
- name: jim_crow
dtype:
class_label:
names:
'0': no_jim_crow
'1': jim_crow
- name: type
dtype: string
- name: chapter_num
dtype: int32
- name: section_num
dtype: int32
- name: chapter_text
dtype: string
- name: section_text
dtype: string
splits:
- name: train
num_bytes: 2119395
num_examples: 1785
download_size: 2085196
dataset_size: 2119395
task_categories:
- text-classification
language:
- en
tags:
- lam
- legal
pretty_name: On the Books Training Set
size_categories:
- 1K<n<10K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juletxara/xstory_cloze_mt | juletxara | 2023-07-21T10:23:00Z | 16 | 0 | null | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|story_cloze",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2112.10668",
"region:us"
] | 2023-07-21T10:23:00Z | 2023-05-22T09:37:14.000Z | 2023-05-22T09:37:14 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: XStoryCloze
size_categories:
- 1K<n<10K
source_datasets:
- extended|story_cloze
tags: []
task_categories:
- other
task_ids: []
dataset_info:
- config_name: nllb-200-distilled-600M
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 492764
num_examples: 1511
- name: zh
num_bytes: 500346
num_examples: 1511
- name: es
num_bytes: 495103
num_examples: 1511
- name: ar
num_bytes: 490629
num_examples: 1511
- name: hi
num_bytes: 497109
num_examples: 1511
- name: id
num_bytes: 491970
num_examples: 1511
- name: te
num_bytes: 472103
num_examples: 1511
- name: sw
num_bytes: 493285
num_examples: 1511
- name: eu
num_bytes: 486194
num_examples: 1511
- name: my
num_bytes: 545031
num_examples: 1511
download_size: 4619083
dataset_size: 4964534
- config_name: nllb-200-distilled-1.3B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 493120
num_examples: 1511
- name: zh
num_bytes: 512485
num_examples: 1511
- name: es
num_bytes: 494845
num_examples: 1511
- name: ar
num_bytes: 488763
num_examples: 1511
- name: hi
num_bytes: 495752
num_examples: 1511
- name: id
num_bytes: 491866
num_examples: 1511
- name: te
num_bytes: 472752
num_examples: 1511
- name: sw
num_bytes: 493712
num_examples: 1511
- name: eu
num_bytes: 491839
num_examples: 1511
- name: my
num_bytes: 517974
num_examples: 1511
download_size: 4607136
dataset_size: 4953108
- config_name: nllb-200-1.3B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 493690
num_examples: 1511
- name: zh
num_bytes: 498665
num_examples: 1511
- name: es
num_bytes: 493934
num_examples: 1511
- name: ar
num_bytes: 489966
num_examples: 1511
- name: hi
num_bytes: 495889
num_examples: 1511
- name: id
num_bytes: 492249
num_examples: 1511
- name: te
num_bytes: 472101
num_examples: 1511
- name: sw
num_bytes: 492297
num_examples: 1511
- name: eu
num_bytes: 485674
num_examples: 1511
- name: my
num_bytes: 510821
num_examples: 1511
download_size: 4579397
dataset_size: 4925286
- config_name: nllb-200-3.3B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 495392
num_examples: 1511
- name: zh
num_bytes: 500965
num_examples: 1511
- name: es
num_bytes: 495521
num_examples: 1511
- name: ar
num_bytes: 491594
num_examples: 1511
- name: hi
num_bytes: 498082
num_examples: 1511
- name: id
num_bytes: 494296
num_examples: 1511
- name: te
num_bytes: 477315
num_examples: 1511
- name: sw
num_bytes: 496170
num_examples: 1511
- name: eu
num_bytes: 499829
num_examples: 1511
- name: my
num_bytes: 517806
num_examples: 1511
download_size: 4621130
dataset_size: 4966970
- config_name: xglm-564M
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 541125
num_examples: 1511
- name: zh
num_bytes: 825126
num_examples: 1511
- name: es
num_bytes: 552675
num_examples: 1511
- name: ar
num_bytes: 560267
num_examples: 1511
- name: hi
num_bytes: 567030
num_examples: 1511
- name: id
num_bytes: 506136
num_examples: 1511
- name: te
num_bytes: 889610
num_examples: 1511
- name: sw
num_bytes: 556752
num_examples: 1511
- name: eu
num_bytes: 585440
num_examples: 1511
- name: my
num_bytes: 1112539
num_examples: 1511
download_size: 6352902
dataset_size: 6696700
- config_name: xglm-1.7B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 490340
num_examples: 1511
- name: zh
num_bytes: 486527
num_examples: 1511
- name: es
num_bytes: 510488
num_examples: 1511
- name: ar
num_bytes: 486931
num_examples: 1511
- name: hi
num_bytes: 580025
num_examples: 1511
- name: id
num_bytes: 489463
num_examples: 1511
- name: te
num_bytes: 491793
num_examples: 1511
- name: sw
num_bytes: 494668
num_examples: 1511
- name: eu
num_bytes: 540797
num_examples: 1511
- name: my
num_bytes: 531972
num_examples: 1511
download_size: 4757979
dataset_size: 5103004
- config_name: xglm-2.9B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 502967
num_examples: 1511
- name: zh
num_bytes: 487153
num_examples: 1511
- name: es
num_bytes: 498912
num_examples: 1511
- name: ar
num_bytes: 494407
num_examples: 1511
- name: hi
num_bytes: 492415
num_examples: 1511
- name: id
num_bytes: 504653
num_examples: 1511
- name: te
num_bytes: 500632
num_examples: 1511
- name: sw
num_bytes: 496000
num_examples: 1511
- name: eu
num_bytes: 488755
num_examples: 1511
- name: my
num_bytes: 537296
num_examples: 1511
download_size: 4657865
dataset_size: 5003190
- config_name: xglm-4.5B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 495315
num_examples: 1511
- name: zh
num_bytes: 491436
num_examples: 1511
- name: es
num_bytes: 496332
num_examples: 1511
- name: ar
num_bytes: 485175
num_examples: 1511
- name: hi
num_bytes: 517560
num_examples: 1511
- name: id
num_bytes: 491342
num_examples: 1511
- name: te
num_bytes: 520378
num_examples: 1511
- name: sw
num_bytes: 494811
num_examples: 1511
- name: eu
num_bytes: 701365
num_examples: 1511
- name: my
num_bytes: 684247
num_examples: 1511
download_size: 5033379
dataset_size: 5377961
- config_name: xglm-7.5B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 495206
num_examples: 1511
- name: zh
num_bytes: 494844
num_examples: 1511
- name: es
num_bytes: 496036
num_examples: 1511
- name: ar
num_bytes: 486592
num_examples: 1511
- name: hi
num_bytes: 492188
num_examples: 1511
- name: id
num_bytes: 489364
num_examples: 1511
- name: te
num_bytes: 493587
num_examples: 1511
- name: sw
num_bytes: 492293
num_examples: 1511
- name: eu
num_bytes: 498066
num_examples: 1511
- name: my
num_bytes: 513770
num_examples: 1511
download_size: 4606340
dataset_size: 4951946
- config_name: bloom-560m
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 957051
num_examples: 1511
- name: zh
num_bytes: 582347
num_examples: 1511
- name: es
num_bytes: 524210
num_examples: 1511
- name: ar
num_bytes: 522499
num_examples: 1511
- name: hi
num_bytes: 554814
num_examples: 1511
- name: id
num_bytes: 485479
num_examples: 1511
- name: te
num_bytes: 624860
num_examples: 1511
- name: sw
num_bytes: 999225
num_examples: 1511
- name: eu
num_bytes: 699035
num_examples: 1511
- name: my
num_bytes: 673321
num_examples: 1511
download_size: 6278136
dataset_size: 6622841
- config_name: bloom-1b1
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 698567
num_examples: 1511
- name: zh
num_bytes: 489197
num_examples: 1511
- name: es
num_bytes: 474082
num_examples: 1511
- name: ar
num_bytes: 476907
num_examples: 1511
- name: hi
num_bytes: 491779
num_examples: 1511
- name: id
num_bytes: 477646
num_examples: 1511
- name: te
num_bytes: 516529
num_examples: 1511
- name: sw
num_bytes: 600000
num_examples: 1511
- name: eu
num_bytes: 546887
num_examples: 1511
- name: my
num_bytes: 676233
num_examples: 1511
download_size: 5102727
dataset_size: 5447827
- config_name: bloom-1b7
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 525134
num_examples: 1511
- name: zh
num_bytes: 479852
num_examples: 1511
- name: es
num_bytes: 486508
num_examples: 1511
- name: ar
num_bytes: 490589
num_examples: 1511
- name: hi
num_bytes: 498850
num_examples: 1511
- name: id
num_bytes: 485372
num_examples: 1511
- name: te
num_bytes: 483735
num_examples: 1511
- name: sw
num_bytes: 500094
num_examples: 1511
- name: eu
num_bytes: 502181
num_examples: 1511
- name: my
num_bytes: 971749
num_examples: 1511
download_size: 5078628
dataset_size: 5424064
- config_name: bloom-3b
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 516891
num_examples: 1511
- name: zh
num_bytes: 484312
num_examples: 1511
- name: es
num_bytes: 491618
num_examples: 1511
- name: ar
num_bytes: 489534
num_examples: 1511
- name: hi
num_bytes: 497902
num_examples: 1511
- name: id
num_bytes: 487465
num_examples: 1511
- name: te
num_bytes: 492470
num_examples: 1511
- name: sw
num_bytes: 492754
num_examples: 1511
- name: eu
num_bytes: 499445
num_examples: 1511
- name: my
num_bytes: 624041
num_examples: 1511
download_size: 4730785
dataset_size: 5076432
- config_name: bloom-7b1
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 503684
num_examples: 1511
- name: zh
num_bytes: 482989
num_examples: 1511
- name: es
num_bytes: 491622
num_examples: 1511
- name: ar
num_bytes: 482758
num_examples: 1511
- name: hi
num_bytes: 489960
num_examples: 1511
- name: id
num_bytes: 482001
num_examples: 1511
- name: te
num_bytes: 489799
num_examples: 1511
- name: sw
num_bytes: 490640
num_examples: 1511
- name: eu
num_bytes: 486618
num_examples: 1511
- name: my
num_bytes: 753138
num_examples: 1511
download_size: 4807399
dataset_size: 5153209
- config_name: llama-7B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 492427
num_examples: 1511
- name: zh
num_bytes: 529522
num_examples: 1511
- name: es
num_bytes: 498252
num_examples: 1511
- name: ar
num_bytes: 512201
num_examples: 1511
- name: hi
num_bytes: 511073
num_examples: 1511
- name: id
num_bytes: 488707
num_examples: 1511
- name: te
num_bytes: 728118
num_examples: 1511
- name: sw
num_bytes: 492448
num_examples: 1511
- name: eu
num_bytes: 525786
num_examples: 1511
- name: my
num_bytes: 928002
num_examples: 1511
download_size: 5362668
dataset_size: 5706536
- config_name: llama-13B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 495334
num_examples: 1511
- name: zh
num_bytes: 496403
num_examples: 1511
- name: es
num_bytes: 502224
num_examples: 1511
- name: ar
num_bytes: 495769
num_examples: 1511
- name: hi
num_bytes: 494207
num_examples: 1511
- name: id
num_bytes: 485652
num_examples: 1511
- name: te
num_bytes: 658993
num_examples: 1511
- name: sw
num_bytes: 513663
num_examples: 1511
- name: eu
num_bytes: 543032
num_examples: 1511
- name: my
num_bytes: 868225
num_examples: 1511
download_size: 5208039
dataset_size: 5553502
- config_name: llama-30B
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 496406
num_examples: 1511
- name: zh
num_bytes: 503443
num_examples: 1511
- name: es
num_bytes: 502714
num_examples: 1511
- name: ar
num_bytes: 499679
num_examples: 1511
- name: hi
num_bytes: 506243
num_examples: 1511
- name: id
num_bytes: 495591
num_examples: 1511
- name: te
num_bytes: 622441
num_examples: 1511
- name: sw
num_bytes: 501886
num_examples: 1511
- name: eu
num_bytes: 534447
num_examples: 1511
- name: my
num_bytes: 679405
num_examples: 1511
download_size: 4998062
dataset_size: 5342255
- config_name: RedPajama-INCITE-Base-3B-v1
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 508585
num_examples: 1511
- name: zh
num_bytes: 530992
num_examples: 1511
- name: es
num_bytes: 497511
num_examples: 1511
- name: ar
num_bytes: 539293
num_examples: 1511
- name: hi
num_bytes: 611424
num_examples: 1511
- name: id
num_bytes: 491386
num_examples: 1511
- name: te
num_bytes: 721849
num_examples: 1511
- name: sw
num_bytes: 565920
num_examples: 1511
- name: eu
num_bytes: 610413
num_examples: 1511
- name: my
num_bytes: 785689
num_examples: 1511
download_size: 5517969
dataset_size: 5863062
- config_name: RedPajama-INCITE-7B-Base
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 503227
num_examples: 1511
- name: zh
num_bytes: 520232
num_examples: 1511
- name: es
num_bytes: 500357
num_examples: 1511
- name: ar
num_bytes: 478504
num_examples: 1511
- name: hi
num_bytes: 542515
num_examples: 1511
- name: id
num_bytes: 486431
num_examples: 1511
- name: te
num_bytes: 564067
num_examples: 1511
- name: sw
num_bytes: 506463
num_examples: 1511
- name: eu
num_bytes: 469138
num_examples: 1511
- name: my
num_bytes: 734203
num_examples: 1511
download_size: 4960585
dataset_size: 5305137
- config_name: open_llama_3b
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 505442
num_examples: 1511
- name: zh
num_bytes: 532884
num_examples: 1511
- name: es
num_bytes: 501815
num_examples: 1511
- name: ar
num_bytes: 545831
num_examples: 1511
- name: hi
num_bytes: 558097
num_examples: 1511
- name: id
num_bytes: 503375
num_examples: 1511
- name: te
num_bytes: 658210
num_examples: 1511
- name: sw
num_bytes: 496637
num_examples: 1511
- name: eu
num_bytes: 565262
num_examples: 1511
- name: my
num_bytes: 102748
num_examples: 1511
download_size: 4629042
dataset_size: 4970301
- config_name: open_llama_7b
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 497597
num_examples: 1511
- name: zh
num_bytes: 514370
num_examples: 1511
- name: es
num_bytes: 499117
num_examples: 1511
- name: ar
num_bytes: 527002
num_examples: 1511
- name: hi
num_bytes: 457692
num_examples: 1511
- name: id
num_bytes: 486815
num_examples: 1511
- name: te
num_bytes: 651761
num_examples: 1511
- name: sw
num_bytes: 518217
num_examples: 1511
- name: eu
num_bytes: 528817
num_examples: 1511
- name: my
num_bytes: 102748
num_examples: 1511
download_size: 4438467
dataset_size: 4784136
- config_name: open_llama_13b
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 497392
num_examples: 1511
- name: zh
num_bytes: 506192
num_examples: 1511
- name: es
num_bytes: 502102
num_examples: 1511
- name: ar
num_bytes: 515020
num_examples: 1511
- name: hi
num_bytes: 458156
num_examples: 1511
- name: id
num_bytes: 492514
num_examples: 1511
- name: te
num_bytes: 653860
num_examples: 1511
- name: sw
num_bytes: 497731
num_examples: 1511
- name: eu
num_bytes: 542967
num_examples: 1511
- name: my
num_bytes: 102748
num_examples: 1511
download_size: 4423124
dataset_size: 4768682
- config_name: falcon-7b
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 559221
num_examples: 1511
- name: zh
num_bytes: 490736
num_examples: 1511
- name: es
num_bytes: 496936
num_examples: 1511
- name: ar
num_bytes: 555943
num_examples: 1511
- name: hi
num_bytes: 760911
num_examples: 1511
- name: id
num_bytes: 465017
num_examples: 1511
- name: te
num_bytes: 929729
num_examples: 1511
- name: sw
num_bytes: 475843
num_examples: 1511
- name: eu
num_bytes: 660103
num_examples: 1511
- name: my
num_bytes: 918371
num_examples: 1511
download_size: 5972550
dataset_size: 6312810
- config_name: xgen-7b-4k-base
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 499102
num_examples: 1511
- name: zh
num_bytes: 496212
num_examples: 1511
- name: es
num_bytes: 498105
num_examples: 1511
- name: ar
num_bytes: 518805
num_examples: 1511
- name: hi
num_bytes: 511187
num_examples: 1511
- name: id
num_bytes: 483581
num_examples: 1511
- name: te
num_bytes: 564125
num_examples: 1511
- name: sw
num_bytes: 539692
num_examples: 1511
- name: eu
num_bytes: 526559
num_examples: 1511
- name: my
num_bytes: 102748
num_examples: 1511
download_size: 4394369
dataset_size: 4740116
- config_name: xgen-7b-8k-base
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 496008
num_examples: 1511
- name: zh
num_bytes: 500737
num_examples: 1511
- name: es
num_bytes: 496059
num_examples: 1511
- name: ar
num_bytes: 492099
num_examples: 1511
- name: hi
num_bytes: 522832
num_examples: 1511
- name: id
num_bytes: 489283
num_examples: 1511
- name: te
num_bytes: 610098
num_examples: 1511
- name: sw
num_bytes: 527305
num_examples: 1511
- name: eu
num_bytes: 516098
num_examples: 1511
- name: my
num_bytes: 102748
num_examples: 1511
download_size: 4408200
dataset_size: 4753267
- config_name: xgen-7b-8k-inst
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 497057
num_examples: 1511
- name: zh
num_bytes: 519732
num_examples: 1511
- name: es
num_bytes: 499680
num_examples: 1511
- name: ar
num_bytes: 504726
num_examples: 1511
- name: hi
num_bytes: 519968
num_examples: 1511
- name: id
num_bytes: 499549
num_examples: 1511
- name: te
num_bytes: 612858
num_examples: 1511
- name: sw
num_bytes: 554762
num_examples: 1511
- name: eu
num_bytes: 540275
num_examples: 1511
- name: my
num_bytes: 102748
num_examples: 1511
download_size: 4507822
dataset_size: 4851355
- config_name: open_llama_7b_v2
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 494880
num_examples: 1511
- name: zh
num_bytes: 505101
num_examples: 1511
- name: es
num_bytes: 498933
num_examples: 1511
- name: ar
num_bytes: 480929
num_examples: 1511
- name: hi
num_bytes: 526710
num_examples: 1511
- name: id
num_bytes: 485906
num_examples: 1511
- name: te
num_bytes: 653870
num_examples: 1511
- name: sw
num_bytes: 510160
num_examples: 1511
- name: eu
num_bytes: 538023
num_examples: 1511
- name: my
num_bytes: 928002
num_examples: 1511
download_size: 5277748
dataset_size: 5622514
- config_name: polylm-1.7b
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 501578
num_examples: 1511
- name: zh
num_bytes: 492368
num_examples: 1511
- name: es
num_bytes: 489279
num_examples: 1511
- name: ar
num_bytes: 523803
num_examples: 1511
- name: hi
num_bytes: 883583
num_examples: 1511
- name: id
num_bytes: 494420
num_examples: 1511
- name: te
num_bytes: 772310
num_examples: 1511
- name: sw
num_bytes: 591325
num_examples: 1511
- name: eu
num_bytes: 755232
num_examples: 1511
- name: my
num_bytes: 928002
num_examples: 1511
download_size: 6086882
dataset_size: 6431900
- config_name: polylm-13b
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 498554
num_examples: 1511
- name: zh
num_bytes: 490097
num_examples: 1511
- name: es
num_bytes: 497570
num_examples: 1511
- name: ar
num_bytes: 497095
num_examples: 1511
- name: hi
num_bytes: 682306
num_examples: 1511
- name: id
num_bytes: 494517
num_examples: 1511
- name: te
num_bytes: 712521
num_examples: 1511
- name: sw
num_bytes: 470834
num_examples: 1511
- name: eu
num_bytes: 503702
num_examples: 1511
- name: my
num_bytes: 928002
num_examples: 1511
download_size: 5430508
dataset_size: 5775198
- config_name: polylm-multialpaca-13b
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 496565
num_examples: 1511
- name: zh
num_bytes: 494789
num_examples: 1511
- name: es
num_bytes: 497108
num_examples: 1511
- name: ar
num_bytes: 485852
num_examples: 1511
- name: hi
num_bytes: 788707
num_examples: 1511
- name: id
num_bytes: 491246
num_examples: 1511
- name: te
num_bytes: 881984
num_examples: 1511
- name: sw
num_bytes: 512261
num_examples: 1511
- name: eu
num_bytes: 508426
num_examples: 1511
- name: my
num_bytes: 928002
num_examples: 1511
download_size: 5739667
dataset_size: 6084940
- config_name: open_llama_3b_v2
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 492909
num_examples: 1511
- name: zh
num_bytes: 505746
num_examples: 1511
- name: es
num_bytes: 499516
num_examples: 1511
- name: ar
num_bytes: 498564
num_examples: 1511
- name: hi
num_bytes: 573411
num_examples: 1511
- name: id
num_bytes: 484221
num_examples: 1511
- name: te
num_bytes: 832372
num_examples: 1511
- name: sw
num_bytes: 485921
num_examples: 1511
- name: eu
num_bytes: 547044
num_examples: 1511
- name: my
num_bytes: 928002
num_examples: 1511
download_size: 5503115
dataset_size: 5847706
- config_name: Llama-2-7b-hf
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 496817
num_examples: 1511
- name: zh
num_bytes: 501800
num_examples: 1511
- name: es
num_bytes: 504213
num_examples: 1511
- name: ar
num_bytes: 501610
num_examples: 1511
- name: hi
num_bytes: 504739
num_examples: 1511
- name: id
num_bytes: 494323
num_examples: 1511
- name: te
num_bytes: 588684
num_examples: 1511
- name: sw
num_bytes: 501136
num_examples: 1511
- name: eu
num_bytes: 520420
num_examples: 1511
- name: my
num_bytes: 570585
num_examples: 1511
download_size: 4838759
dataset_size: 5184327
- config_name: Llama-2-13b-hf
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 497558
num_examples: 1511
- name: zh
num_bytes: 499829
num_examples: 1511
- name: es
num_bytes: 500668
num_examples: 1511
- name: ar
num_bytes: 502267
num_examples: 1511
- name: hi
num_bytes: 499806
num_examples: 1511
- name: id
num_bytes: 491094
num_examples: 1511
- name: te
num_bytes: 634645
num_examples: 1511
- name: sw
num_bytes: 508836
num_examples: 1511
- name: eu
num_bytes: 524520
num_examples: 1511
- name: my
num_bytes: 777348
num_examples: 1511
download_size: 5090710
dataset_size: 5436571
- config_name: Llama-2-7b-chat-hf
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 255428
num_examples: 1511
- name: zh
num_bytes: 259590
num_examples: 1511
- name: es
num_bytes: 337962
num_examples: 1511
- name: ar
num_bytes: 549212
num_examples: 1511
- name: hi
num_bytes: 542237
num_examples: 1511
- name: id
num_bytes: 445799
num_examples: 1511
- name: te
num_bytes: 753517
num_examples: 1511
- name: sw
num_bytes: 575797
num_examples: 1511
- name: eu
num_bytes: 573902
num_examples: 1511
- name: my
num_bytes: 669211
num_examples: 1511
download_size: 4617898
dataset_size: 4962655
- config_name: Llama-2-13b-chat-hf
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: ru
num_bytes: 513558
num_examples: 1511
- name: zh
num_bytes: 524461
num_examples: 1511
- name: es
num_bytes: 502511
num_examples: 1511
- name: ar
num_bytes: 546387
num_examples: 1511
- name: hi
num_bytes: 556189
num_examples: 1511
- name: id
num_bytes: 503053
num_examples: 1511
- name: te
num_bytes: 812325
num_examples: 1511
- name: sw
num_bytes: 587048
num_examples: 1511
- name: eu
num_bytes: 646107
num_examples: 1511
- name: my
num_bytes: 804207
num_examples: 1511
download_size: 5650367
dataset_size: 5995846
---
# Dataset Card for XStoryCloze MT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://cs.rochester.edu/nlp/rocstories/](https://cs.rochester.edu/nlp/rocstories/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Few-shot Learning with Multilingual Generative Language Models](https://arxiv.org/pdf/2112.10668.pdf)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.03 MB
- **Size of the generated dataset:** 2.03 MB
- **Total amount of disk used:** 2.05 MB
### Dataset Summary
XStoryCloze consists of the professionally translated version of the [English StoryCloze dataset](https://cs.rochester.edu/nlp/rocstories/) (Spring 2016 version) into 10 non-English languages. That dataset is released by Meta AI. This dataset is a machine-translated version of XStoryCloze, translated into English (en) from ru, zh, es, ar, hi, id, te, sw, eu, my.
### Supported Tasks and Leaderboards
commonsense reasoning
### Languages
This dataset is a machine-translated version of XStoryCloze, translated into English (en) from ru, zh (Simplified), es (Latin America), ar, hi, id, te, sw, eu, my.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 2.03 MB
- **Size of the generated dataset:** 2.03 MB
- **Total amount of disk used:** 2.05 MB
An example of 'train' looks as follows.
```
{'answer_right_ending': 1,
'input_sentence_1': 'Rick grew up in a troubled household.',
'input_sentence_2': 'He never found good support in family, and turned to gangs.',
'input_sentence_3': "It wasn't long before Rick got shot in a robbery.",
'input_sentence_4': 'The incident caused him to turn a new leaf.',
'sentence_quiz1': 'He is happy now.',
'sentence_quiz2': 'He joined a gang.',
'story_id': '138d5bfb-05cc-41e3-bf2c-fa85ebad14e2'}
```
### Data Fields
The data fields are the same among all splits.
- `input_sentence_1`: The first statement in the story.
- `input_sentence_2`: The second statement in the story.
- `input_sentence_3`: The third statement in the story.
- `input_sentence_4`: The fourth statement in the story.
- `sentence_quiz1`: The first possible continuation of the story.
- `sentence_quiz2`: The second possible continuation of the story.
- `answer_right_ending`: The correct ending; either 1 or 2.
- `story_id`: The story ID.
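As a minimal sketch, a record with the fields above can be unpacked into a context, its two candidate endings, and the gold ending, using the example instance shown earlier:

```python
# Minimal sketch: unpack an XStoryCloze-style record into a context,
# its two candidate endings, and the gold ending (0-based index).
def unpack_record(record):
    context = " ".join(record[f"input_sentence_{i}"] for i in range(1, 5))
    endings = [record["sentence_quiz1"], record["sentence_quiz2"]]
    # answer_right_ending is 1 or 2, so shift to a 0-based index
    gold = record["answer_right_ending"] - 1
    return context, endings, gold

example = {
    "answer_right_ending": 1,
    "input_sentence_1": "Rick grew up in a troubled household.",
    "input_sentence_2": "He never found good support in family, and turned to gangs.",
    "input_sentence_3": "It wasn't long before Rick got shot in a robbery.",
    "input_sentence_4": "The incident caused him to turn a new leaf.",
    "sentence_quiz1": "He is happy now.",
    "sentence_quiz2": "He joined a gang.",
    "story_id": "138d5bfb-05cc-41e3-bf2c-fa85ebad14e2",
}
context, endings, gold = unpack_record(example)
print(endings[gold])  # the correct continuation
```

A harness evaluating a language model would score `endings[0]` and `endings[1]` against `context` and compare the argmax to `gold`.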
### Data Splits
This dataset is intended to be used for evaluating the zero- and few-shot learning capabilities of multilingual language models. We split the data for each language into train and test (360 vs. 1510 examples, respectively). The released data files for different languages maintain a line-by-line alignment.
| name |test|
|-------|---:|
|ru|1510|
|zh|1510|
|es|1510|
|ar|1510|
|hi|1510|
|id|1510|
|te|1510|
|sw|1510|
|eu|1510|
|my|1510|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
XStoryCloze is open-sourced under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode), the same license as the original English StoryCloze.
### Citation Information
```
@article{DBLP:journals/corr/abs-2112-10668,
author = {Xi Victoria Lin and
Todor Mihaylov and
Mikel Artetxe and
Tianlu Wang and
Shuohui Chen and
Daniel Simig and
Myle Ott and
Naman Goyal and
Shruti Bhosale and
Jingfei Du and
Ramakanth Pasunuru and
Sam Shleifer and
Punit Singh Koura and
Vishrav Chaudhary and
Brian O'Horo and
Jeff Wang and
Luke Zettlemoyer and
Zornitsa Kozareva and
Mona T. Diab and
Veselin Stoyanov and
Xian Li},
title = {Few-shot Learning with Multilingual Language Models},
journal = {CoRR},
volume = {abs/2112.10668},
year = {2021},
url = {https://arxiv.org/abs/2112.10668},
eprinttype = {arXiv},
eprint = {2112.10668},
timestamp = {Tue, 04 Jan 2022 15:59:27 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2112-10668.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@juletx](https://github.com/juletx). | [
-0.403106153011322,
-0.517632246017456,
0.23281586170196533,
0.007709966041147709,
-0.23378334939479828,
0.054184067994356155,
-0.37152576446533203,
-0.49023276567459106,
0.48368895053863525,
0.4294678270816803,
-0.8474065065383911,
-1.0192031860351562,
-0.4898320138454437,
0.2428483217954... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dynosaur/dynosaur-sub-superni | Dynosaur | 2023-07-06T22:49:54Z | 16 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-06T22:49:54Z | 2023-05-22T22:55:03.000Z | 2023-05-22T22:55:03 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lansinuote/diffusion.9.custom_diffusion | lansinuote | 2023-05-24T11:08:03Z | 16 | 0 | null | [
"region:us"
] | 2023-05-24T11:08:03Z | 2023-05-24T11:02:55.000Z | 2023-05-24T11:02:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 85296454.0
num_examples: 200
download_size: 85295617
dataset_size: 85296454.0
---
# Dataset Card for "diffusion.9.custom_diffusion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7433063983917236,
-0.6822167634963989,
0.4857725501060486,
0.3056325316429138,
-0.04231567680835724,
0.10173886269330978,
0.36876118183135986,
0.1293882578611374,
1.1391627788543701,
0.4283827543258667,
-0.6139191389083862,
-0.7371585369110107,
-0.7498291730880737,
-0.47611308097839355,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
s-nlp/paranmt_for_detox | s-nlp | 2023-09-08T08:35:36Z | 16 | 0 | null | [
"task_categories:text-generation",
"language:en",
"license:openrail++",
"region:us"
] | 2023-09-08T08:35:36Z | 2023-05-30T12:23:16.000Z | 2023-05-30T12:23:16 | ---
license: openrail++
task_categories:
- text-generation
language:
- en
---
# ParaNMTDetox: Detoxification with Parallel Data (English)
This repository contains information about the filtered [ParaNMT](https://aclanthology.org/P18-1042/) dataset for the text detoxification task. Here, we have paraphrasing pairs where one text is toxic and the other is non-toxic. Toxicity levels were determined by an English toxicity [classifier](https://huggingface.co/s-nlp/roberta_toxicity_classifier).
The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) with SOTA text detoxification was presented at ACL 2022 main conference.
## ParaNMTDetox Filtering Pipeline
The ParaNMT filtering for text detoxification was done by adapting the [ParaDetox](https://huggingface.co/datasets/s-nlp/paradetox) dataset collection pipeline on the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform. The filtering was done in two steps:
* *Task 1:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 2:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
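Before the crowdsourced checks, candidate pairs are selected by scoring both sides of a paraphrase pair with the toxicity classifier. The sketch below illustrates that selection step; the `toxicity` function is a toy stand-in for the RoBERTa classifier linked above, and the 0.5 threshold is an assumption for illustration:

```python
# Sketch of selecting (toxic, non-toxic) paraphrase pairs by classifier score.
# `toxicity` is a toy stand-in for a real toxicity classifier, and the
# 0.5 threshold is an illustrative assumption.
TOXIC_WORDS = {"stupid", "idiot"}

def toxicity(text):
    words = text.lower().split()
    return sum(w.strip(".,!?") in TOXIC_WORDS for w in words) / max(len(words), 1)

def select_detox_pairs(pairs, threshold=0.5):
    selected = []
    for src, tgt in pairs:
        src_tox, tgt_tox = toxicity(src), toxicity(tgt)
        # keep pairs where exactly one side is toxic; orient them toxic -> neutral
        if src_tox >= threshold > tgt_tox:
            selected.append((src, tgt))
        elif tgt_tox >= threshold > src_tox:
            selected.append((tgt, src))
    return selected

pairs = [
    ("you idiot", "you are not very clever"),
    ("have a nice day", "enjoy your day"),
]
print(select_detox_pairs(pairs))
```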
## Citation
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
## Contacts
If you find some issue, do not hesitate to add it to [Github Issues](https://github.com/skoltech-nlp/paradetox/issues).
For any questions, please contact: Daryna Dementieva (dardem96@gmail.com) | [
-0.12981021404266357,
-0.36190521717071533,
0.712806224822998,
0.2678670883178711,
-0.33507025241851807,
-0.06162271648645401,
-0.11072025448083878,
0.026724539697170258,
0.1354840099811554,
0.8327361941337585,
-0.30950045585632324,
-0.9621069431304932,
-0.5679404139518738,
0.4681955277919... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlpai-lab/databricks-dolly-15k-ko | nlpai-lab | 2023-06-16T03:01:52Z | 16 | 6 | null | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:ko",
"license:cc-by-sa-3.0",
"arxiv:2203.02155",
"region:us"
] | 2023-06-16T03:01:52Z | 2023-06-01T10:19:09.000Z | 2023-06-01T10:19:09 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- ko
size_categories:
- 10K<n<100K
---
Korean translation of databricks-dolly-15k via the DeepL API
Note: in some cases, multilingual examples were converted to monolingual ones during batch translation to Korean via the API.
Below is databricks-dolly-15k's README.
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. | [
-0.4329181909561157,
-1.0363112688064575,
0.24364617466926575,
0.2716693878173828,
-0.13778553903102875,
-0.02387576550245285,
-0.31687256693840027,
-0.17527762055397034,
0.012932118028402328,
0.4989241659641266,
-0.7038384675979614,
-0.6451900005340576,
-0.26911747455596924,
0.36280530691... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tabtoyou/KoLLaVA-Instruct-150k | tabtoyou | 2023-06-25T12:31:12Z | 16 | 6 | null | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:ko",
"license:cc-by-nc-4.0",
"region:us"
] | 2023-06-25T12:31:12Z | 2023-06-04T10:24:11.000Z | 2023-06-04T10:24:11 | ---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
- question-answering
language:
- ko
pretty_name: Korean Visual Instruct
---
# Korean Visual Instruct 150K Dataset Card
🌋 A Korean translation of the instruction-following dataset from [LLaVA](https://llava-vl.github.io/), translated with DeepL.
### 1. Conversation
- Designed as a conversation between a person asking questions about an image and an Assistant answering them. The answers are written in the tone of an Assistant who sees the image and responds to the question, and the questions cover the image's visual content in varied ways (object types, counts, actions, locations, relative positions between objects, etc.). Only questions with clear answers are considered.
### 2. Detailed description
- Designed to contain a rich and comprehensive description of the image. We build a list of prompts that ask for such detailed descriptions, then sample one of them to generate the answer.
### 3. Complex reasoning
- The two types above focus on the visual content itself. Complex reasoning builds on that content to additionally generate in-depth reasoning questions. The answers require a step-by-step reasoning process with sound logic.
## Done
- Detail_23k
- Conversation_58k
- Complex_resoning_77k
- ko_llava_instruct_150k
## Project Repo
- Github Repo : [tabtoyou/KoLLaVA](https://github.com/tabtoyou/KoLLaVA)
### License
- Attribution-NonCommercial 4.0 International | complies with the OpenAI [policy](https://openai.com/policies/terms-of-use)
-0.4637463688850403,
-0.9027596712112427,
0.5125383734703064,
0.3546431362628937,
-0.4432063400745392,
-0.01909162849187851,
-0.014239723794162273,
-0.15285153687000275,
0.27284717559814453,
0.6930198669433594,
-0.726324200630188,
-0.919629693031311,
-0.37327584624290466,
-0.13529528677463... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
grantprice/DND-NLP | grantprice | 2023-06-09T23:34:20Z | 16 | 1 | null | [
"region:us"
] | 2023-06-09T23:34:20Z | 2023-06-06T20:51:17.000Z | 2023-06-06T20:51:17 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Patt/MultiRC_TH | Patt | 2023-06-09T20:25:21Z | 16 | 0 | null | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | 2023-06-09T20:25:21Z | 2023-06-09T20:10:29.000Z | 2023-06-09T20:10:29 | ---
task_categories:
- text-classification
language:
- en
- th
---
# Dataset Card for MultiRC_TH
### Dataset Description
This dataset is a Thai-translated version of [multirc](https://huggingface.co/datasets/super_glue/viewer/multirc), produced with Google Translate; the quality of each Thai translation was scored with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307).
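The scoring step amounts to embedding the source sentence and its translation with a multilingual encoder and taking the cosine similarity. A minimal sketch, with toy vectors standing in for real sentence embeddings:

```python
import math

# Cosine similarity between two sentence-embedding vectors. In the real
# pipeline the vectors would come from the Multilingual Universal Sentence
# Encoder; the vectors below are toy stand-ins.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

en_vec = [0.1, 0.9, 0.2]     # embedding of the English sentence (toy)
th_vec = [0.12, 0.85, 0.25]  # embedding of its Thai translation (toy)
score = cosine_similarity(en_vec, th_vec)
print(round(score, 3))
```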
| [
-0.3750467002391815,
-0.5199938416481018,
-0.037841711193323135,
0.42937353253364563,
-0.5786228179931641,
0.24464046955108643,
-0.34005239605903625,
-0.11513479799032211,
0.6832618117332458,
0.5077448487281799,
-0.6899023652076721,
-0.8017958998680115,
-0.5254279971122742,
0.1496215015649... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
notrichardren/easy_qa | notrichardren | 2023-06-26T12:33:45Z | 16 | 0 | null | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-06-26T12:33:45Z | 2023-06-11T21:29:56.000Z | 2023-06-11T21:29:56 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
pretty_name: Easy Question Answer
---
# EasyQA: A Kindergarten-Level QA Dataset for Investigating Truthfulness.
EasyQA is a GPT-3.5-turbo-generated dataset of easy kindergarten-level facts, meant to be used to prompt and evaluate large language models for "common-sense" truthful responses. This dataset was originally created to understand how different types of truthfulness may be represented in the intermediate activations of large language models. EasyQA comprises 2346 questions that span 50 categories, including art, technology, education, music, and animals. The questions are meant to be extremely simple and obvious, eliciting an obvious truth that would not be susceptible to misconceptions -- making it a useful complement to benchmarks that target other types of truth (e.g. TruthfulQA, which focuses on common misconceptions).
Credits to Kevin Wang, Richard Ren, and Phillip Guo.
## Dataset Creation
The dataset was created by prompting GPT-3.5-turbo with: "*Please generate 50 easy, obvious, common-knowledge questions that a kindergartener would learn in class about the topic prompted, as well as correct and incorrect responses. These questions should be less like trivia questions (i.e. Who is known as the Queen of Jazz?) and more like obvious facts (ie What color is the sky?). Your generations should be in the format: Question: {Your question here} Right: {Right answer} Wrong: {Wrong answer} where each question is a new line. Please follow this format verbatim (e.g. do not number the questions).*"
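The `Question: {...} Right: {...} Wrong: {...}` line format requested in this prompt can be parsed with a sketch like the following (the regex is an assumption based on the format described above, not code from the original pipeline):

```python
import re

# Parse generation lines of the form
#   Question: ... Right: ... Wrong: ...
# as requested in the generation prompt above.
LINE_RE = re.compile(
    r"Question:\s*(?P<question>.*?)\s*Right:\s*(?P<right>.*?)\s*Wrong:\s*(?P<wrong>.*)"
)

def parse_generations(text):
    records = []
    for line in text.splitlines():
        match = LINE_RE.match(line.strip())
        if match:
            records.append(match.groupdict())
    return records

sample = (
    "Question: What color is the sky? Right: Blue Wrong: Green\n"
    "Question: How many legs does a dog have? Right: Four Wrong: Six\n"
)
records = parse_generations(sample)
print(records[0]["question"], "->", records[0]["right"])
```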
The following categories were used:
```
Animals
Plants
Food and drink
Music
Movies
Television shows
Literature
Sports
Geography
History
Science
Mathematics
Art
Technology
Politics
Business and Economy
Education
Health and Fitness
Environment and Climate
Space and Astronomy
Fashion and Style
Video Games
Travel and Tourism
Language and Literature
Religion and Spirituality
Famous Personalities
Cultural Events/Festivals
Cars and Automobiles
Photography
Architecture
Medicine and Health
Psychology
Philosophy
Law
Social Sciences
Human Rights
Current Events/News
Global Affairs
National Landmarks
Celebrities and Entertainment
Nature
Cooking and Baking
Gardening
DIY Projects
Dance
Comic Books and Graphic Novels
Mythology and Folklore
Internet and Social Media
Parenting and Family Life
Home Decor
``` | [
-0.37389039993286133,
-0.759170413017273,
0.3636835217475891,
0.07629989087581635,
-0.04184747859835625,
-0.123191699385643,
0.22282269597053528,
-0.07654672116041183,
-0.34024181962013245,
0.23087118566036224,
-0.8139327764511108,
-0.4560457170009613,
-0.38609635829925537,
0.0447285138070... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nadav/pixel_glue_qqp | Nadav | 2023-06-12T19:21:21Z | 16 | 0 | null | [
"region:us"
] | 2023-06-12T19:21:21Z | 2023-06-12T18:39:41.000Z | 2023-06-12T18:39:41 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 4725063877.25
num_examples: 363846
- name: validation
num_bytes: 525056314.25
num_examples: 40430
download_size: 5039025536
dataset_size: 5250120191.5
---
# Dataset Card for "pixel_glue_qqp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4151485562324524,
-0.21130165457725525,
0.24526897072792053,
0.18235835433006287,
-0.092414990067482,
0.21404002606868744,
0.4713170528411865,
0.061421532183885574,
0.8733528852462769,
0.228102907538414,
-0.8514151573181152,
-0.8634321689605713,
-0.4059321880340576,
-0.5676772594451904,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RafaelMPereira/HealthCareMagic-100k-Chat-Format-en | RafaelMPereira | 2023-06-15T14:44:18Z | 16 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2023-06-15T14:44:18Z | 2023-06-15T14:42:57.000Z | 2023-06-15T14:42:57 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alxfgh/PubChem_Drug_Instruction_Tuning | alxfgh | 2023-06-24T00:22:00Z | 16 | 1 | null | [
"region:us"
] | 2023-06-24T00:22:00Z | 2023-06-15T19:42:08.000Z | 2023-06-15T19:42:08 | ---
pretty_name: PubChem Drug Instruction Tuning
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PNLPhub/Persian-News | PNLPhub | 2023-06-20T11:05:30Z | 16 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-06-20T11:05:30Z | 2023-06-20T10:54:23.000Z | 2023-06-20T10:54:23 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KaiLv/UDR_Amazon | KaiLv | 2023-06-21T12:23:17Z | 16 | 0 | null | [
"region:us"
] | 2023-06-21T12:23:17Z | 2023-06-21T12:22:34.000Z | 2023-06-21T12:22:34 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: headline
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 13936883
num_examples: 30000
- name: test
num_bytes: 1382953
num_examples: 3000
- name: debug
num_bytes: 2318411
num_examples: 5000
download_size: 11799872
dataset_size: 17638247
---
# Dataset Card for "UDR_Amazon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6164225339889526,
-0.2244700789451599,
0.029204996302723885,
0.23207537829875946,
-0.2755301594734192,
0.1540006399154663,
0.5651887655258179,
-0.2136467844247818,
0.56020188331604,
0.6917855739593506,
-0.7783819437026978,
-0.75298011302948,
-0.40850362181663513,
-0.1295872926712036,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SupawitMarayat/bccd-cut | SupawitMarayat | 2023-06-25T06:04:10Z | 16 | 0 | null | [
"region:us"
] | 2023-06-25T06:04:10Z | 2023-06-25T06:03:12.000Z | 2023-06-25T06:03:12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
splits:
- name: train
num_bytes: 330961403.0
num_examples: 60000
download_size: 362035453
dataset_size: 330961403.0
---
# Dataset Card for "bccd-cut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7786232233047485,
-0.5285882353782654,
0.3010283410549164,
0.16474297642707825,
-0.32078883051872253,
0.1706656515598297,
0.3754672706127167,
-0.11339578777551651,
0.7978069186210632,
0.5425150990486145,
-1.1706838607788086,
-0.9263691306114197,
-0.4426506459712982,
-0.30317386984825134... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
globis-university/aozorabunko-clean | globis-university | 2023-10-27T13:22:32Z | 16 | 4 | null | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | 2023-10-27T13:22:32Z | 2023-06-26T13:31:28.000Z | 2023-06-26T13:31:28 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- ja
size_categories:
- 10K<n<100K
---
# Overview
This dataset provides data from [Aozora Bunko (青空文庫)](https://www.aozora.gr.jp/), a website that compiles public-domain books in Japan, in a convenient, machine-learning-friendly format.
[For Japanese readers] A Japanese-language overview is available on Qiita: https://qiita.com/akeyhero/items/b53eae1c0bc4d54e321f
# Methodology
The code to reproduce this dataset is available on GitHub: [globis-org/aozorabunko-extractor](https://github.com/globis-org/aozorabunko-extractor).
## 1. Data collection
We first downloaded the [CSV file that lists all works](https://www.aozora.gr.jp/index_pages/person_all.html). The information extracted from this CSV is incorporated into the `meta` field.
Next, we filtered out any books not categorized as public domain.
We retrieved the main text of each book corresponding to every row in the CSV and incorporated it into the `text` field in UTF-8.
## 2. Deduplication
We removed entries where the `図書カードURL` (Library card URL) in this CSV did not match the `作品ID` (Work ID) and `人物ID` (Person ID).
In addition, entries with text identical to previously encountered text were discarded.
## 3. Cleaning
The data in the `text` field was then cleaned in the following sequence:
1. Convert new lines to `\n`
2. Remove headers
3. Remove footnotes and add them to the `footnote` field
4. Convert inserted notes into regular parenthetical text
5. Remove ruby (phonetic guides)
6. Convert specific characters, such as external characters and iteration marks, into standard Unicode characters
7. Remove any remaining markup
8. Remove leading and trailing new lines and horizontal rules
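As an illustration of step 5, Aozora Bunko texts mark ruby (phonetic guides) with `《…》`, optionally preceded by a `|` (or full-width `|`) delimiting the base text. The extraction repository has its own implementation; this regex sketch is ours and only shows the idea:

```python
import re

RUBY = re.compile(r"《[^》]*》")   # the phonetic guide itself
RUBY_BASE = re.compile(r"[||]")   # delimiter marking where the ruby base starts

def strip_ruby(text):
    """Remove Aozora-style ruby, e.g. '東京《とうきょう》' -> '東京'."""
    return RUBY_BASE.sub("", RUBY.sub("", text))

print(strip_ruby("東京《とうきょう》へ行く"))  # 東京へ行く
```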
# Tips
If you prefer to employ only modern Japanese, you can filter entries with: `row["meta"]["文字遣い種別"] == "新字新仮名"`.
# Example
```py
>>> from datasets import load_dataset
>>> ds = load_dataset('globis-university/aozorabunko-clean')
>>> ds
DatasetDict({
train: Dataset({
features: ['text', 'footnote', 'meta'],
num_rows: 16951
})
})
>>> ds = ds.filter(lambda row: row['meta']['文字遣い種別'] == '新字新仮名') # only modern Japanese
>>> ds
DatasetDict({
train: Dataset({
features: ['text', 'footnote', 'meta'],
num_rows: 10246
})
})
>>> book = ds['train'][0] # one of the works
>>> book['meta']['作品名']
'ウェストミンスター寺院'
>>> text = book['text'] # main content
>>> len(text)
10639
>>> print(text[:100])
深いおどろきにうたれて、
名高いウェストミンスターに
真鍮や石の記念碑となって
すべての王侯貴族が集まっているのをみれば、
今はさげすみも、ほこりも、見栄もない。
善にかえった貴人の姿、
華美と俗世の
```
# License
CC BY 4.0 | [
-0.26260870695114136,
-0.6728292107582092,
0.33133745193481445,
0.009283209219574928,
-0.48286086320877075,
-0.18002277612686157,
-0.31333473324775696,
-0.3367205560207367,
0.4680057168006897,
0.84746253490448,
-0.42840492725372314,
-0.9862529635429382,
-0.30155012011528015,
0.208545997738... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atrost/financial_phrasebank | atrost | 2023-06-28T20:09:38Z | 16 | 0 | null | [
"arxiv:1908.10063",
"region:us"
] | 2023-06-28T20:09:38Z | 2023-06-28T19:53:58.000Z | 2023-06-28T19:53:58 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 434511.7622781676
num_examples: 3100
- name: validation
num_bytes: 108768.10565414774
num_examples: 776
- name: test
num_bytes: 135960.1320676847
num_examples: 970
download_size: 420071
dataset_size: 679240.0
---
# Dataset Card for "financial_phrasebank"
64/16/20 Split of the `sentences_50agree` subset of [financial_phrasebank](https://huggingface.co/datasets/financial_phrasebank), according to the [FinBERT paper](https://arxiv.org/abs/1908.10063). | [
-0.5355992913246155,
-0.6696399450302124,
0.0395217090845108,
0.46261951327323914,
-0.2584474980831146,
0.11718529462814331,
-0.030290499329566956,
-0.2258935421705246,
0.3149016797542572,
0.6781890988349915,
-0.7095919251441956,
-0.6240976452827454,
-0.35359376668930054,
-0.01100559998303... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imvladikon/he_sum_chatgpt | imvladikon | 2023-11-22T15:58:41Z | 16 | 2 | null | [
"task_categories:summarization",
"language:he",
"region:us"
] | 2023-11-22T15:58:41Z | 2023-07-02T23:55:15.000Z | 2023-07-02T23:55:15 | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
splits:
- name: train
num_bytes: 6778171
num_examples: 1673
download_size: 3560217
dataset_size: 6778171
task_categories:
- summarization
language:
- he
---
# Dataset Card for "he_sum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7159203886985779,
0.058719851076602936,
-0.04415629804134369,
0.2609419524669647,
-0.17654041945934296,
0.14876367151737213,
0.18715709447860718,
-0.020616449415683746,
1.004771113395691,
0.4820227324962616,
-0.7686623930931091,
-0.6786550879478455,
-0.6764917969703674,
-0.2418924272060... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bias-amplified-splits/wanli | bias-amplified-splits | 2023-07-04T10:59:59Z | 16 | 0 | null | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.18917",
"arxiv:2201.05955",
"region:us"
] | 2023-07-04T10:59:59Z | 2023-07-03T21:15:20.000Z | 2023-07-03T21:15:20 | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold
dtype: string
- name: genre
dtype: string
- name: pairID
dtype: string
splits:
- name: train.biased
num_bytes: 17807491
num_examples: 89402
- name: train.anti_biased
num_bytes: 2690706
num_examples: 13483
- name: test.biased
num_bytes: 865310
num_examples: 4363
- name: test.anti_biased
num_bytes: 127605
num_examples: 637
download_size: 26671494
dataset_size: 21491112
- config_name: partial_input
features:
- name: id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold
dtype: string
- name: genre
dtype: string
- name: pairID
dtype: string
splits:
- name: train.biased
num_bytes: 17792846
num_examples: 89402
- name: train.anti_biased
num_bytes: 2705351
num_examples: 13483
- name: test.biased
num_bytes: 858069
num_examples: 4344
- name: test.anti_biased
num_bytes: 134846
num_examples: 656
download_size: 26671494
dataset_size: 21491112
task_categories:
- text-classification
language:
- en
pretty_name: WANLI
size_categories:
- 100K<n<1M
---
# Dataset Card for Bias-amplified Splits for WANLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [WANLI](https://arxiv.org/abs/2201.05955)
### Dataset Summary
Bias-amplified splits is a novel evaluation framework to assess model robustness, by amplifying dataset biases in the training data and challenging models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to WANLI (**W**orker-**A**I Collaboration for **NLI**), a collection of 108K English sentence pairs for the task of natural language inference (NLI). WANLI was found to be more diverse and challenging for models compared to existing NLI datasets.
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 77.1 | 61.7 |
| Biased training split | 75.5 | 31.8 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 77.1 | 62.6 |
| Biased training split | 76.7 | 49.6 |
#### Loading the Data
```py
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/wanli", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['test.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from WANLI, and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
"id": 225295,
"premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.",
"hypothesis": "The coach is a good coach.",
"gold": "entailment",
"genre": "generated",
"pairID": "171408"
}
```
### Data Fields
- `id`: unique identifier for the example
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `gold`: one of `entailment`, `neutral`, and `contradiction`
- `genre`: one of `generated` and `generated_revised`, depending on whether the example was revised by annotators
- `pairID`: id of seed MNLI example, corresponding to those in `data/mnli/train.jsonl`
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|---------------------|------------------------------|
| Train - biased | 89402 |
| Train - anti-biased | 13483 |
| Test - biased | 4363 |
| Test - anti-biased | 637 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|---------------------|------------------------------|
| Train - biased | 89402 |
| Train - anti-biased | 13483 |
| Test - biased | 4344 |
| Test - anti-biased | 656 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge-set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/).
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
}
``` | [
-0.7352827787399292,
-0.7562940716743469,
-0.04682398587465286,
0.10434820502996445,
-0.2589256763458252,
-0.25462424755096436,
-0.1700284779071808,
-0.3650771379470825,
0.25810444355010986,
0.3600783348083496,
-0.8003692030906677,
-0.4466770887374878,
-0.7051421999931335,
-0.0448562093079... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TinyPixel/orca_minis | TinyPixel | 2023-07-13T11:29:53Z | 16 | 2 | null | [
"language:en",
"region:us"
] | 2023-07-13T11:29:53Z | 2023-07-04T17:01:41.000Z | 2023-07-04T17:01:41 | ---
language: en
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: system
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 164518588
num_examples: 104179
download_size: 79528616
dataset_size: 164518588
---
# Dataset Card for "orca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5262315273284912,
-0.3707503080368042,
0.11692903935909271,
0.07602444291114807,
-0.32998159527778625,
-0.1140243411064148,
0.4249497652053833,
-0.5146366953849792,
1.0081220865249634,
0.6070916056632996,
-0.7940506935119629,
-0.8765143752098083,
-0.5690041184425354,
-0.2527183890342712... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
masakhane/afriqa-gold-passages | masakhane | 2023-07-08T04:15:40Z | 16 | 1 | null | [
"task_categories:question-answering",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:bem",
"language:fon",
"language:ha",
"language:ig",
"language:kin",
"language:sw",
"language:wo",
"language:yo",
"language:zu",
"language:tw",
"license:cc-by-sa-4.0",
"cross-ling... | 2023-07-08T04:15:40Z | 2023-07-07T16:45:04.000Z | 2023-07-07T16:45:04 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- bem
- fon
- ha
- ig
- kin
- sw
- wo
- yo
- zu
- tw
pretty_name: AfriQA
size_categories:
- 10K<n<100K
multilinguality:
- multilingual
tags:
- cross-lingual
- question-answering
- qa
---
# Dataset Card for AfriQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/masakhane-io/afriqa)
- **Repository:** [github](https://github.com/masakhane-io/afriqa)
- **Paper:** [paper]()
- **Point of Contact:** [Masakhane](https://www.masakhane.io/) or oogundep@uwaterloo.ca
### Dataset Summary
AfriQA is the first cross-lingual question answering (QA) dataset with a focus on African languages. The dataset includes over 12,000 XOR QA examples across 10 African languages, making it an invaluable resource for developing more equitable QA technology.
The train/validation/test sets are available for all the 10 languages.
### Supported Tasks and Leaderboards
- `question-answering`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better) and [Exact Match Accuracy](https://huggingface.co/spaces/evaluate-metric/exact_match).
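For reference, exact match can be computed as a normalized string comparison against the set of gold answers. This sketch normalizes only by lowercasing and collapsing whitespace; official evaluation scripts may normalize more aggressively (punctuation, articles):

```python
def normalize(s):
    """Lowercase and collapse whitespace."""
    return " ".join(s.lower().split())

def exact_match(prediction, gold_answers):
    """1.0 if the prediction matches any gold answer after normalization, else 0.0."""
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))

print(exact_match("  Emukwai ", ["Emukwai"]))  # 1.0
print(exact_match("no", ["yes"]))              # 0.0
```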
### Languages
There are 10 languages available:
- Bemba (bem)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Swahili (swa)
- Twi (twi)
- Wolof (wol)
- Yorùbá (yor)
- Zulu (zul)
## Dataset Structure
### Data Instances
- Data Format:
- id : Question ID
- question : Question in African Language
- translated_question : Question translated into a pivot language (English/French)
- answers : Answer in African Language
- lang : Datapoint Language (African Language) e.g `bem`
- split : Dataset Split
- translated_answer : Answer in Pivot Language
- translation_type : Translation type of question and answers
```json
{ "id": 0,
"question": "Bushe icaalo ca Egypt caali tekwapo ne caalo cimbi?",
"translated_question": "Has the country of Egypt been colonized before?",
"answers": "['Emukwai']",
"lang": "bem",
"split": "dev",
"translated_answer": "['yes']",
"translation_type": "human_translation"
}
```
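Note that `answers` and `translated_answer` are stored as stringified Python lists (e.g. `"['Emukwai']"`). A minimal sketch for recovering the underlying lists, assuming every row follows the format shown above (the helper name is ours):

```python
import ast

def parse_answers(raw):
    """Turn a stringified list such as "['Emukwai']" into a real Python list."""
    parsed = ast.literal_eval(raw)  # safely evaluates the list literal
    return [str(a) for a in parsed]

example = {"answers": "['Emukwai']", "translated_answer": "['yes']"}
print(parse_answers(example["answers"]))            # ['Emukwai']
print(parse_answers(example["translated_answer"]))  # ['yes']
```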
### Data Splits
For all languages, there are three splits.
The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes:
| Language | train | dev | test |
|-----------------|------:|-----------:|-----:|
| Bemba | 502 | 503 | 314 |
| Fon | 427 | 428 | 386 |
| Hausa | 435 | 436 | 300 |
| Igbo | 417 | 418 | 409 |
| Kinyarwanda | 407 | 409 | 347 |
| Swahili | 415 | 417 | 302 |
| Twi | 451 | 452 | 490 |
| Wolof | 503 | 504 | 334 |
| Yoruba | 360 | 361 | 332 |
| Zulu | 387 | 388 | 325 |
| <b>Total</b> | <b>4333</b> | <b>4346</b> |<b>3560</b> |
## Dataset Creation
### Curation Rationale
The dataset was created to provide question-answering resources for 10 African languages that are under-served in natural language processing.
[More Information Needed]
### Source Data
...
#### Initial Data Collection and Normalization
...
#### Who are the source language producers?
...
### Annotations
#### Annotation process
Details can be found here ...
#### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/)
### Personal and Sensitive Information
...
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
### Licensing Information
The licensing status of the data is CC 4.0 Non-Commercial
### Citation Information
```
@misc{ogundepo2023afriqa,
title={AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages},
author={Odunayo Ogundepo and Tajuddeen R. Gwadabe and Clara E. Rivera and Jonathan H. Clark and Sebastian Ruder and David Ifeoluwa Adelani and Bonaventure F. P. Dossou and Abdou Aziz DIOP and Claytone Sikasote and Gilles Hacheme and Happy Buzaaba and Ignatius Ezeani and Rooweither Mabuya and Salomey Osei and Chris Emezue and Albert Njoroge Kahira and Shamsuddeen H. Muhammad and Akintunde Oladipo and Abraham Toluwase Owodunni and Atnafu Lambebo Tonja and Iyanuoluwa Shode and Akari Asai and Tunde Oluwaseyi Ajayi and Clemencia Siro and Steven Arthur and Mofetoluwa Adeyemi and Orevaoghene Ahia and Aremu Anuoluwapo and Oyinkansola Awosan and Chiamaka Chukwuneke and Bernard Opoku and Awokoya Ayodele and Verrah Otiende and Christine Mwase and Boyd Sinkala and Andre Niyongabo Rubungo and Daniel A. Ajisafe and Emeka Felix Onwuegbuzia and Habib Mbow and Emile Niyomutabazi and Eunice Mukonde and Falalu Ibrahim Lawan and Ibrahim Said Ahmad and Jesujoba O. Alabi and Martin Namukombo and Mbonu Chinedu and Mofya Phiri and Neo Putini and Ndumiso Mngoma and Priscilla A. Amuok and Ruqayya Nasir Iro and Sonia Adhiambo},
year={2023},
eprint={2305.06897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ToluClassics](https://github.com/ToluClassics) for adding this dataset. | [
-0.7041547298431396,
-0.6290549039840698,
0.11565794050693512,
0.21146956086158752,
-0.0884786918759346,
-0.009292269125580788,
-0.1448841691017151,
-0.2513728439807892,
0.5376541614532471,
0.46429169178009033,
-0.680404007434845,
-0.5237932205200195,
-0.5916298031806946,
0.327475786209106... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ssbuild/alpaca_gpt4 | ssbuild | 2023-07-08T19:09:43Z | 16 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-08T19:09:43Z | 2023-07-08T19:09:13.000Z | 2023-07-08T19:09:13 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qwerty8409/digesion_Ayurveda | qwerty8409 | 2023-07-22T06:41:00Z | 16 | 1 | null | [
"region:us"
] | 2023-07-22T06:41:00Z | 2023-07-22T06:39:33.000Z | 2023-07-22T06:39:33 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.5456173419952393,
-0.42588168382644653,
-0.051285725086927414,
0.38739174604415894,
-0.4620097875595093,
0.05422865226864815,
-0.24659407138824463,
-0.2884671688079834,
0.6999504566192627,
0.5781952142715454,
-0.9070087671279907,
-1.1513409614562988,
-0.756676435470581,
0.02905247919261... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seungheondoh/LP-MusicCaps-MSD | seungheondoh | 2023-08-01T04:06:49Z | 16 | 7 | null | [
"size_categories:100K<n<1M",
"language:en",
"art",
"music",
"text-to-music",
"music-to-text",
"arxiv:2307.16372",
"region:us"
] | 2023-08-01T04:06:49Z | 2023-07-26T12:33:38.000Z | 2023-07-26T12:33:38 | ---
language:
- en
tags:
- art
- music
- text-to-music
- music-to-text
pretty_name: LP-MusicCaps-MSD
size_categories:
- 100K<n<1M
---
======================================
**!important**: Be careful when using `caption_attribute_prediction` (we do not recommend using it)!
======================================
# Dataset Card for LP-MusicCaps-MSD
## Dataset Description
- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)
## Dataset Summary
**LP-MusicCaps** is a Large Language Model based Pseudo Music Caption dataset for `text-to-music` and `music-to-text` tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task instructions). The data sources are MusicCaps, Magnatagtune, and Million Song Dataset ECALS subset.
- **LP-MusicCaps MSD (This Repo)**: 0.5M Audio with 2.2M Caption. We utilize 1054 unique tags in the [MSD-ECALS](https://github.com/SeungHeonDoh/msd-subsets) to perform tag-to-caption generation through LLM.
- [LP-MusicCaps MTT](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MTT): 22k Audio with 88k Caption
- [LP-MusicCaps MC](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MC): 6k Audio with 22k Caption.
## Data Instances
Each instance in LP-MusicCaps MSD (this repo) pairs one audio track with multiple pseudo captions and meta-attributes:
```
{
'track_id': 'TRIHXPZ128F1466744',
'title': 'In The Sunshine',
'artist_name': 'ARRESTED DEVELOPMENT',
'release': 'Zingalamaduni',
'year': 1994,
'tag': ['laid back mellow',
'hip hop',
'rnb',
'amiable good natured',
'rap',
'urban',
'gentle',
'political rap',
'soul',
'calm peaceful',
'summery',
'cheerful',
'alternative rap'
],
'caption_writing': 'An amiable and laid back alternative rap tune, this summery and cheerful song blends elements of soul and R&B with a gentle, mellow rap flow to create a calm and peaceful urban vibe that is both hip hop and political in its message.',
'caption_summary': 'This summery, alternative rap song is a mellow and gentle blend of hip hop, RnB, and political rap with a cheerful and amiable good natured vibe.',
'caption_paraphrase': 'This laid back mellow rap song infuses soulful and urban elements while showcasing a gentle and amiable good natured vibe, perfect for a summery day. With hints of cheerful R&B and hip hop, the alternative political rap lyrics bring balance to this peaceful and calming tune.',
'caption_attribute_prediction': 'This mellow, soulful tune is a perfect blend of rap and RnB, with a gentle beat and smooth flow that will transport you to the laid-back urban vibes of a sunny summertime day. The amiable good-natured lyrics touch on political themes, while the alternative rap style adds a cheerful, upbeat twist to the message. Overall, this is a hip-hop gem thats sure to put you in a peaceful, calm state of mind.',
'path': '3/0/303545.clip.mp3'
}
```
## Pseudo Caption Example:
Input Tags:
*"video game theme, no singer, instrumental, analog sounding, small keyboard, beatboxing, playful, cheerful, groovy"*
Output Pseudo Captions
*"instrumental track has a joyful and playful vibe, perfect for a video game theme. With no singer, the analog-sounding music features a small keyboard and beatboxing, creating a groovy and cheerful atmosphere"*
[More Information for pseudo caption generation](https://github.com/seungheondoh/lp-music-caps/blob/main/lpmc/llm_captioning/generate.py)
## Data Fields
| Name | Type | Description |
|------------------------------|-----------------|----------------------------------------------------------------------|
| track_id | string | Unique identifier for the track |
| title | string | Title of the song |
| artist_name | string | Name of the artist performing the song |
| release | string | Release name or album name of the song |
| year | integer | Year of the song's release |
| tag | list of strings | List of tags associated with the song |
| caption_writing | string | Pseudo caption generated through a writing instruction |
| caption_summary | string | Pseudo caption generated through a summary instruction |
| caption_paraphrase | string | Pseudo caption generated through a paraphrase instruction |
| caption_attribute_prediction | string | Pseudo caption generated through an attribute_prediction instruction |
| path | string | File path or location of the audio clip |
## Data Splits
- train: 444865
- valid: 34481
- test: 34631
## Considerations for Using the Data
The LP-MusicCaps dataset is recommended for research purposes. Due to a known mislabeling issue, we recommend not using the caption_attribute_prediction and pseudo_attribute fields unless it is specifically for large-scale pretraining. Additionally, the field "is_crawled" indicates the samples used in the reference paper mentioned below.
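As a sketch of this recommendation, the unreliable fields can be dropped from each record before training. The field names follow the table below; the plain-dict record format is an assumption for illustration, not the dataset's on-disk format:

```python
# Sketch: drop the fields flagged as unreliable (caption_attribute_prediction,
# pseudo_attribute) and keep only samples marked as used in the reference paper.
# The plain-dict record format here is an illustrative assumption.

NOISY_FIELDS = {"caption_attribute_prediction", "pseudo_attribute"}

def clean_records(records):
    """Remove noisy caption fields and keep only crawled samples."""
    cleaned = []
    for rec in records:
        if not rec.get("is_crawled", False):
            continue  # keep only samples used in the reference paper
        cleaned.append({k: v for k, v in rec.items() if k not in NOISY_FIELDS})
    return cleaned

records = [
    {"track_id": "a", "is_crawled": True,
     "caption_writing": "...", "caption_attribute_prediction": "..."},
    {"track_id": "b", "is_crawled": False, "caption_writing": "..."},
]
print(clean_records(records))
```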
## Discussion of Biases
It will be described in a paper to be released soon.
## Other Known Limitations
It will be described in a paper to be released soon. | [
-0.5852830410003662,
-0.4753197431564331,
0.19951015710830688,
0.366344690322876,
-0.3860608637332916,
0.17571237683296204,
-0.24404829740524292,
-0.41724148392677307,
0.6643754839897156,
0.5383608341217041,
-1.0110111236572266,
-0.8503398895263672,
-0.3656066954135895,
-0.0142279854044318... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yunyu/wiki40b_en_100_0_split | yunyu | 2023-07-26T15:31:14Z | 16 | 0 | null | [
"region:us"
] | 2023-07-26T15:31:14Z | 2023-07-26T14:53:33.000Z | 2023-07-26T14:53:33 | ---
dataset_info:
features:
- name: _id
dtype: string
- name: datasets_id
dtype: int32
- name: wiki_id
dtype: string
- name: start_paragraph
dtype: int32
- name: start_character
dtype: int32
- name: end_paragraph
dtype: int32
- name: end_character
dtype: int32
- name: article_title
dtype: string
- name: section_title
dtype: string
- name: passage_text
dtype: string
splits:
- name: train
num_bytes: 12927635491
num_examples: 17553713
download_size: 7022389836
dataset_size: 12927635491
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wiki40b_en_100_0_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9600879549980164,
-0.3956785798072815,
-0.003269253531470895,
0.3080879747867584,
-0.30459025502204895,
0.015226105228066444,
0.08441665023565292,
-0.1652728170156479,
0.9849159121513367,
0.4858318865299225,
-0.9437226057052612,
-0.5472701787948608,
-0.43661144375801086,
0.0166237391531... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PrinceAyush/Mental_Health_conv | PrinceAyush | 2023-08-03T18:33:02Z | 16 | 1 | null | [
"region:us"
] | 2023-08-03T18:33:02Z | 2023-07-30T10:29:07.000Z | 2023-07-30T10:29:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nisaar/LLAMA2_Legal_Dataset_4.4k_Instructions | nisaar | 2023-07-30T15:25:03Z | 16 | 12 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-30T15:25:03Z | 2023-07-30T15:22:13.000Z | 2023-07-30T15:22:13 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imoxto/prompt_injection_cleaned_dataset | imoxto | 2023-08-07T15:31:57Z | 16 | 0 | null | [
"region:us"
] | 2023-08-07T15:31:57Z | 2023-08-07T15:31:44.000Z | 2023-08-07T15:31:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: level
dtype: int64
- name: prompt
dtype: string
- name: user_input
dtype: string
- name: completion
dtype: string
- name: model
dtype: string
- name: expected_completion
dtype: string
- name: token_count
dtype: int64
- name: correct
dtype: bool
- name: error
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 529771818
num_examples: 374573
- name: validation
num_bytes: 115495832
num_examples: 80266
- name: test
num_bytes: 114490591
num_examples: 80266
download_size: 243813448
dataset_size: 759758241
---
# Dataset Card for "prompt_injection_cleaned_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4169161319732666,
-0.5613595843315125,
0.36212682723999023,
0.03360256180167198,
-0.1960483193397522,
0.03209446370601654,
0.3260933458805084,
0.09172071516513824,
0.6366382837295532,
0.6599097847938538,
-0.7263163924217224,
-0.8287866711616516,
-0.38437506556510925,
-0.0473676286637783... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kernelmachine/open-license-corpus | kernelmachine | 2023-08-09T03:14:36Z | 16 | 8 | null | [
"task_categories:text-generation",
"size_categories:100B<n<1T",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-08-09T03:14:36Z | 2023-08-08T23:21:52.000Z | 2023-08-08T23:21:52 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: pubtext
size_categories:
- 100B<n<1T
---
# PubText
Welcome to the Open License Corpus (OLC), a 228B token corpus for training permissively-licensed language models.
**Disclaimer**: OLC should not be considered a universally safe-to-use dataset. We encourage users of OLC to consult a legal professional on the suitability of each data source for their application.
## Dataset Description
- **Repository:** [Silo LM repository](https://github.com/kernelmachine/silo-lm)
- **Paper:** [Silo LM paper](https://github.com/kernelmachine/silo-lm)
- **Point of Contact:** [Suchin Gururangan](mailto:sg01@cs.washington.edu)
### Dataset Summary
| Domain | Sources | Specific License | # BPE Tokens (in billions; GPT-NeoX tokenizer) |
|--------------|------------------------------------------------------|------------------|------------------|
| Legal | Case Law, Pile of Law (PD subset) | Public Domain | 27.1 |
| Legal | Pile of Law (CC BY-SA subset) | CC BY-SA | 0.07 |
| Code | Github (permissive) | MIT/BSD/Apache | 58.9 |
| Conversational| HackerNews, Ubuntu IRC | MIT/Apache | 5.9 |
| Conversational | Stack Overflow, Stack Exchange | CC BY-SA | 21.3 |
| Math | Deepmind Math, AMPS | Apache | 3.5 |
| Science | ArXiv abstracts, S2ORC (PD subset) | Public Domain | 1.2 |
| Science | S2ORC (CC BY-SA subset) | CC BY-SA | 70.3 |
| Books | Gutenberg | Public Domain | 2.9 |
| News | Public domain news | Public Domain | 0.2 |
| News | Wikinews | CC BY-SA | 0.01 |
| Encyclopedic | Wikipedia | CC BY-SA | 37.0 |
### Supported Tasks and Leaderboards
- `text-generation`: The dataset can be used to train a language model for text generation. The language model performance is evaluated based on perplexity.
### Languages
OLC is primarily an English-language dataset, but also contains some data in other languages (primarily in the Wikipedia subset, which draws on the [Red Pajama](https://github.com/togethercomputer/RedPajama-Data) data collection).
## Dataset Structure
The dataset is a standard text-only structure, separated into each subset that we include in the paper.
```
from datasets import load_dataset
dataset = load_dataset('kernelmachine/open-license-corpus', 'pd_law', streaming=True)['train']
```
To use a collection of sources, you should specify each individually and interleave, like so:
```
from datasets import interleave_datasets, load_dataset
d1 = load_dataset('kernelmachine/open-license-corpus', 'pd_law', streaming=True)['train']
d2 = load_dataset('kernelmachine/open-license-corpus', 'sw_github', streaming=True)['train']
d1_d2 = interleave_datasets([d1,d2], probabilities=[0.8, 0.2], seed=42)
```
### Data Instances and Fields
The dataset uses a standard text-only structure, e.g. `{"text": "this is a document"}`. We do not add any other fields to documents.
### Data Splits
We only include the training data in this repository.
For validation data, in the paper we use the Pile validation data, which we decontaminate OLC against using a deduplication script (see more below).
The Pile validation data that we use in the paper can be found [here]().
## Dataset Creation
### License Taxonomy
* **Public Domain (PD):** Public domain text has no restrictions.
* **Permissively licensed software (SW):** including MIT, Apache, and BSD software.
* **Attribution licenses (BY):** such as Creative Commons Attribution (CC-BY) are free to use as long as "credit is given to the creator."
* **All other data:** that is not in one of the above three categories is assumed to be non-permissive. This includes: any text that is explicitly protected by copyright or licenses that are non-commercial (e.g., CC-NC), any software without clear MIT, BSD, or Apache licenses, and any generic web-crawled data where the license or copyright information may be unclear.
### Building OLC
Based on this taxonomy of licenses, we build OLC, a 228B-token corpus of PD, SW, and BY data. OLC consists of 17 manually selected sources of
primarily English text that are under permissive licenses.
The text generally falls into eight different domains:
* **Legal:** We curate legal text from the Pile of Law, an amalgamation of 31 different sources of text related to civil court cases, patents, and other legal and governmental works, either licensed as public domain or CC-BY. We also gather public domain text from the Case Law Access Project, which covers over 6.5 million decisions published by state and federal courts throughout U.S. history.
* **Code:** We use the Github subset of the RedPajama dataset, which contains code from Github repositories with three permissive software licenses: MIT, Apache, and BSD.
* **Conversation:** We source conversational text under permissive software licenses from the HackerNews (MIT license) and the Ubuntu IRC (Apache license) subsets of the Pile. We also use the Stackexchange subset of the RedPajama dataset and a Stackoverflow corpus from Kaggle, both under the CC-BY-SA license.
* **Math:** We source mathematical text from the Deepmind Mathematics and the AMPS datasets, both of which are under the Apache license.
* **Science:** We source scientific text from ArXiv abstracts that are in the public domain. We also collect full-text articles from the Semantic Scholar Research Corpus (S2ORC), either licensed as public domain or CC-BY.
* **Books:** We source books from the Gutenberg corpus, which are copyright-expired books that are in the public domain.
* **News:** We collect public domain news text from the English subset of the MOT corpus. We also collect text from Wikinews, which is under CC BY-SA.
* **Encyclopedic:** Finally, we include a large set of Wikipedia articles from the subset included in RedPajama. We follow RedPajama in using Wikipedia snapshots from 20 languages even though the model primarily focuses on English.
#### Initial Data Collection and Normalization
We deduplicate text using a document-level filter that considers $n$-gram overlap. We first deduplicate within each domain to remove redundant documents from similar sources (e.g. Case Law and the Pile of Law), and then perform deduplication against the validation and test datasets of the Pile to avoid test leakage.
We do not perform any additional quality filtering, though some subsets (e.g. Github and Wikipedia) are already quality filtered by the original data curators of those subsets.
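The document-level $n$-gram overlap filter described above can be sketched as follows; the n-gram size and overlap threshold here are illustrative assumptions, not the values used for OLC:

```python
# Sketch of document-level deduplication by n-gram overlap.
# N and the overlap threshold are illustrative assumptions.

def ngrams(text, n=3):
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def deduplicate(docs, n=3, threshold=0.5):
    """Keep a document only if its n-gram overlap with all kept docs is low."""
    kept, kept_grams = [], []
    for doc in docs:
        grams = ngrams(doc, n)
        duplicate = any(
            grams and len(grams & g) / len(grams) > threshold
            for g in kept_grams
        )
        if not duplicate:
            kept.append(doc)
            kept_grams.append(grams)
    return kept

docs = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over the lazy cat",  # near-duplicate, dropped
    "permissively licensed text enables safer training",
]
print(deduplicate(docs))
```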
#### Who are the source language producers?
The source language producers vary by domain; the Legal subset primarily contains governmental documents, while the Github subset contains code repositories written by the public. We refer to each data source for further information.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
We do not perform additional filtering to remove personally identifiable information, so it is possible that certain subsets still pose privacy risks despite being permissively licensed.
## Considerations for Using the Data
Please see the disclaimer above. The license associated with a document may be time- and country-dependent. Moreover, other legal constraints may prohibit the use of a data source despite a permissive data license. We encourage users of OLC to consult a legal professional on the suitability of each data source for their application.
### Social Impact of Dataset
OLC is the first multi-domain, permissively licensed corpus, which can enable language models that align better with data-use regulations such as the fair-use doctrine in the United States and the GDPR in the European Union.
### Discussion of Biases and Limitations
While OLC mitigates copyright and privacy risks, it may exacerbate certain fairness issues, like toxicity towards marginalized groups and racial biases, especially due to the prevalence of older copyright-expired books in the training data.
In addition, OLC relies on explicit metadata to identify licenses, which may lead to underestimates of the amount and diversity of permissively licensed text actually available on the web.
### Dataset Curators
OLC was curated by the authors of SILO language models.
### Licensing Information
We release this corpus under the Apache 2.0 license.
### Citation Information
| [
-0.41246533393859863,
-0.7050137519836426,
0.45881596207618713,
0.11827537417411804,
-0.3449017405509949,
-0.32166722416877747,
-0.32862743735313416,
-0.41160935163497925,
0.05156519263982773,
0.6653667092323303,
-0.2782520353794098,
-0.800574541091919,
-0.5729497075080872,
0.0692496970295... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EgilKarlsen/AA | EgilKarlsen | 2023-08-20T16:04:53Z | 16 | 0 | null | [
"region:us"
] | 2023-08-20T16:04:53Z | 2023-08-10T15:15:13.000Z | 2023-08-10T15:15:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: log
dtype: string
- name: label
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 6352006
num_examples: 24320
- name: test
num_bytes: 1813856
num_examples: 6948
- name: validation
num_bytes: 909250
num_examples: 3475
download_size: 2288707
dataset_size: 9075112
---
# Dataset Card for "AA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5979145765304565,
-0.4012645184993744,
0.22110551595687866,
0.04882524907588959,
-0.08512821793556213,
0.13807815313339233,
0.49061551690101624,
-0.3907223045825958,
1.0019344091415405,
0.2567501962184906,
-0.8347072601318359,
-0.8488825559616089,
-0.6494905352592468,
-0.159139275550842... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ghbacct/topic-classifier-news-headlines-classification | ghbacct | 2023-08-11T14:59:10Z | 16 | 0 | null | [
"region:us"
] | 2023-08-11T14:59:10Z | 2023-08-11T14:59:09.000Z | 2023-08-11T14:59:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 587000
num_examples: 7920
- name: test
num_bytes: 147163
num_examples: 1989
download_size: 496605
dataset_size: 734163
---
# Dataset Card for "topic-classifier-news-headlines-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5848746299743652,
-0.32350093126296997,
0.19748926162719727,
0.33477112650871277,
-0.3064004182815552,
-0.017052749171853065,
0.027213020250201225,
0.098122239112854,
0.7440633177757263,
0.407200425863266,
-0.6281060576438904,
-1.0156400203704834,
-0.749754011631012,
-0.4062870442867279... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
natmin322/28k_vietnamese_voice_augmented_of_VigBigData | natmin322 | 2023-08-12T17:18:29Z | 16 | 1 | null | [
"region:us"
] | 2023-08-12T17:18:29Z | 2023-08-12T13:13:41.000Z | 2023-08-12T13:13:41 | ---
configs:
- config_name: default
data_files:
- split: train_1
path: data/train_1-*
- split: train_2
path: data/train_2-*
- split: train_3
path: data/train_3-*
- split: train_4
path: data/train_4-*
- split: train_5
path: data/train_5-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train_1
num_bytes: 1433691842.0
num_examples: 5000
- name: train_2
num_bytes: 1026073200.0
num_examples: 5000
- name: train_3
num_bytes: 1113535830.0
num_examples: 5000
- name: train_4
num_bytes: 1489647293.0
num_examples: 5000
- name: train_5
num_bytes: 1416405046.0
num_examples: 5000
- name: test
num_bytes: 886300388.18
num_examples: 3005
download_size: 6939675259
dataset_size: 7365653599.18
---
# Dataset Card for "28k_vietnamese_voice_augmented_of_VigBigData"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5913468599319458,
-0.3603454530239105,
0.08789774775505066,
0.44057485461235046,
-0.23180797696113586,
0.142355278134346,
0.14543034136295319,
-0.187065988779068,
0.7022205591201782,
0.7876137495040894,
-0.6853676438331604,
-0.8903259634971619,
-0.5039987564086914,
-0.29171958565711975,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Intel/VALERIE22 | Intel | 2023-10-26T14:55:14Z | 16 | 4 | null | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"task_ids:semantic-segmentation",
"task_ids:instance-segmentation",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"automotive",
"autonomous driving",
"synthetic",
"safe ai",
"validation",
"pedestrian detection",
"2d... | 2023-10-26T14:55:14Z | 2023-08-14T09:17:25.000Z | 2023-08-14T09:17:25 | ---
license: cc-by-4.0
task_categories:
- image-segmentation
- object-detection
task_ids:
- semantic-segmentation
- instance-segmentation
tags:
- automotive
- autonomous driving
- synthetic
- safe ai
- validation
- pedestrian detection
- 2d object-detection
- 3d object-detection
- semantic-segmentation
- instance-segmentation
pretty_name: VALERIE22
size_categories:
- 1K<n<10K
---
# VALERIE22 - A photorealistic, richly metadata annotated dataset of urban environments
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/teaser_c.png">
## Dataset Description
- **Paper:** https://arxiv.org/abs/2308.09632
- **Point of Contact:** korbinian.hagn@intel.com
### Dataset Summary
The VALERIE22 dataset was generated with the VALERIE procedural tools pipeline (see image below) providing a photorealistic sensor simulation rendered from automatically synthesized scenes. The dataset provides a uniquely rich set of metadata, allowing extraction of specific scene and semantic features (like pixel-accurate occlusion rates, positions in the scene and distance + angle to the camera). This enables a multitude of possible tests on the data and we hope to stimulate research on understanding performance of DNNs.
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/VALERIE_overview1.png">
Each sequence of the dataset contains two rendered images per scene. One is rendered with the default Blender tonemapping (/png), whereas the second is rendered with our photorealistic sensor simulation (see hagn2022optimized). The image below shows the difference between the two methods.
<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/SensorSimulation.png">
Following are some example images showing the unique characteristics of the different sequences.
|Sequence0052|Sequence0054|Sequence0057|Sequence0058|
|:---:|:---:|:---:|:---:|
|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq52_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq54_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq57_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq58_1.png" width="500">|
|Sequence0059|Sequence0060|Sequence0062|
|:---:|:---:|:---:|
|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq59_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq60_1.jpg" width="500">|<img src="https://huggingface.co/datasets/Intel/VALERIE22/resolve/main/images/seq62_1.jpg" width="500">|
### Supported Tasks
- pedestrian detection
- 2d object-detection
- 3d object-detection
- semantic-segmentation
- instance-segmentation
- ai-validation
## Dataset Structure
```
VALERIE22
└───intel_results_sequence_0050
│ └───ground-truth
│ │ └───2d-bounding-box_json
│ │ │ └───car-camera000-0000-{UUID}-0000.json
│ │ └───3d-bounding-box_json
│ │ │ └───car-camera000-0000-{UUID}-0000.json
│ │ └───class-id_png
│ │ │ └───car-camera000-0000-{UUID}-0000.png
│ │ └───general-globally-per-frame-analysis_json
│ │ │ └───car-camera000-0000-{UUID}-0000.json
│ │ │ └───car-camera000-0000-{UUID}-0000.csv
│ │ └───semantic-group-segmentation_png
│ │ │ └───car-camera000-0000-{UUID}-0000.png
│ │ └───semantic-instance-segmentation_png
│ │ │ └───car-camera000-0000-{UUID}-0000.png
│ │ │ └───car-camera000-0000-{UUID}-0000
│ │ │ │ └───{Entity-ID}
│ └───sensor
│ │ └───camera
│ │ │ └───left
│ │ │ │ └───png
│ │ │ │ │ └───car-camera000-0000-{UUID}-0000.png
│ │ │ │ └───png_distorted
│ │ │ │ │ └───car-camera000-0000-{UUID}-0000.png
└───intel_results_sequence_0052
└───intel_results_sequence_0054
└───intel_results_sequence_0057
└───intel_results_sequence_0058
└───intel_results_sequence_0059
└───intel_results_sequence_0060
└───intel_results_sequence_0062
```
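As an illustrative sketch of navigating this layout, images can be paired with their 2D bounding-box annotations by matching file stems across the `sensor` and `ground-truth` folders. The helper below and its pairing-by-stem logic are assumptions based on the tree above, not part of the dataset tooling:

```python
import json
import os

def index_sequence(seq_root):
    """Pair each rendered PNG with its 2d-bounding-box JSON by file stem.

    Illustrative helper; pairing-by-stem is an assumption based on the
    directory layout above, not part of the dataset tooling.
    """
    img_dir = os.path.join(seq_root, "sensor", "camera", "left", "png")
    box_dir = os.path.join(seq_root, "ground-truth", "2d-bounding-box_json")
    pairs = {}
    for name in os.listdir(img_dir):
        stem, ext = os.path.splitext(name)
        if ext != ".png":
            continue
        box_path = os.path.join(box_dir, stem + ".json")
        if os.path.exists(box_path):
            with open(box_path) as f:
                pairs[os.path.join(img_dir, name)] = json.load(f)
    return pairs
```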
### Data Splits
13476 images for training:
```
dataset = load_dataset("Intel/VALERIE22", split="train")
```
8406 images for validation and test:
```
dataset = load_dataset("Intel/VALERIE22", split="validation")
dataset = load_dataset("Intel/VALERIE22", split="test")
```
### Licensing Information
CC BY 4.0
### Citation Information
Relevant publications:
```
@misc{grau2023valerie22,
title={VALERIE22 -- A photorealistic, richly metadata annotated dataset of urban environments},
author={Oliver Grau and Korbinian Hagn},
year={2023},
eprint={2308.09632},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{hagn2022increasing,
title={Increasing pedestrian detection performance through weighting of detection impairing factors},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={Proceedings of the 6th ACM Computer Science in Cars Symposium},
pages={1--10},
year={2022}
}
@inproceedings{hagn2022validation,
title={Validation of Pedestrian Detectors by Classification of Visual Detection Impairing Factors},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={European Conference on Computer Vision},
pages={476--491},
year={2022},
organization={Springer}
}
@incollection{grau2022variational,
title={A variational deep synthesis approach for perception validation},
author={Grau, Oliver and Hagn, Korbinian and Syed Sha, Qutub},
booktitle={Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety},
pages={359--381},
year={2022},
publisher={Springer International Publishing Cham}
}
@incollection{hagn2022optimized,
title={Optimized data synthesis for DNN training and validation by sensor artifact simulation},
author={Hagn, Korbinian and Grau, Oliver},
booktitle={Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety},
pages={127--147},
year={2022},
publisher={Springer International Publishing Cham}
}
@inproceedings{syed2020dnn,
title={DNN analysis through synthetic data variation},
author={Syed Sha, Qutub and Grau, Oliver and Hagn, Korbinian},
booktitle={Proceedings of the 4th ACM Computer Science in Cars Symposium},
pages={1--10},
year={2020}
}
``` | [
-0.6904959678649902,
-0.5426585078239441,
0.5293048620223999,
-0.2015034556388855,
-0.26838740706443787,
0.1579107791185379,
-0.01927359588444233,
-0.6281899213790894,
0.1367407590150833,
0.1114366427063942,
-0.6932058930397034,
-0.6787850856781006,
-0.3637702465057373,
-0.0751758292317390... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/lima | dim | 2023-08-20T18:14:11Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-20T18:14:11Z | 2023-08-14T17:42:23.000Z | 2023-08-14T17:42:23 | ---
license: mit
dataset_info:
features:
- name: conversations
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2906937
num_examples: 1030
download_size: 1677611
dataset_size: 2906937
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/wikihow_en | dim | 2023-08-15T12:10:58Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-15T12:10:58Z | 2023-08-15T12:09:40.000Z | 2023-08-15T12:09:40 | ---
license: mit
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 17125965.190821543
num_examples: 1995
download_size: 8899392
dataset_size: 17125965.190821543
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/leetcodesolutions_en_2k | dim | 2023-08-15T12:34:04Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-15T12:34:04Z | 2023-08-15T12:33:40.000Z | 2023-08-15T12:33:40 | ---
license: mit
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4847444
num_examples: 2048
download_size: 937266
dataset_size: 4847444
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deep-plants/AGM | deep-plants | 2023-10-04T11:06:53Z | 16 | 2 | null | [
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:cc",
"region:us"
] | 2023-10-04T11:06:53Z | 2023-08-16T09:37:26.000Z | 2023-08-16T09:37:26 | ---
license: cc
size_categories:
- 100K<n<1M
task_categories:
- image-classification
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 3208126820.734
num_examples: 972858
download_size: 3245813213
dataset_size: 3208126820.734
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for AGM Dataset
## Dataset Summary
The AGM (AGricolaModerna) Dataset is a comprehensive collection of high-resolution RGB images capturing harvest-ready plants in a vertical farm setting. This dataset consists of 972,858 images, each with a resolution of 120x120 pixels, covering 18 different plant crops. In the context of this dataset, a crop refers to a plant species or a mix of plant species.
## Supported Tasks
Image classification: plant phenotyping
## Languages
The dataset consists of image data and does not involve language content; labels and metadata are in English, but language is not relevant to the dataset's core content.
## Dataset Structure
### Data Instances
A typical data instance from the training set consists of the following:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=120x120 at 0x29CEAD71780>,
'crop_type': 'by'
}
```
### Data Fields
The dataset's data instances have the following fields:
- `image`: A PIL.Image.Image object representing the image.
- `crop_type`: A string representation of the crop type in the image
### Data Splits
- **Training Set**:
- Number of Examples: 972,858
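As a small illustration of working with these fields, per-crop label counts can be tallied over the records. The plain-dict record format mirrors the data-instance example above and is an assumption for illustration:

```python
from collections import Counter

# Sketch: tally crop_type labels over dataset records; the plain-dict record
# format mirrors the data-instance example above and is an assumption here.
def crop_distribution(records):
    return Counter(rec["crop_type"] for rec in records)

records = [
    {"image": "<PIL.Image>", "crop_type": "by"},
    {"image": "<PIL.Image>", "crop_type": "by"},
    {"image": "<PIL.Image>", "crop_type": "mix"},
]
print(crop_distribution(records))  # Counter({'by': 2, 'mix': 1})
```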
## Dataset Creation
### Curation Rationale
The creation of the AGM Dataset was motivated by the need for a large and diverse dataset that captures various aspects of modern agriculture, including plant species diversity, stress detection, and crop health assessment.
### Source Data
#### Initial Data Collection and Normalization
The images were captured using a high-resolution camera positioned above a moving table in an agricultural setting. The camera captured images of the entire table, which was filled with trays of harvested crops. The image capture process spanned from May 2022 to December 2022. The original images had a resolution of $1073{\times}650$ pixels. Each pixel in the images corresponds to a physical size of $0.5$ millimeters.
### Annotations
#### Annotation Process
Agronomists and domain experts were involved in the annotation process. They annotated each image to identify the crops present and assign them to specific categories or species. This annotation process involved labeling each image with one of 18 distinct crop categories, which include individual plant species and mixtures of species.
### Who Are the Annotators?
The annotators are agronomists employed by Agricola Moderna.
## Personal and Sensitive Information
The dataset does not contain personal or sensitive information about individuals. It primarily consists of images of plants.
## Considerations for Using the Data
### Social Impact of Dataset
The AGM Dataset has potential social impact in modern agriculture and related domains. It can aid the development of technologies for crop monitoring, disease detection, and yield prediction; foster sustainable farming practices; and contribute to food security through higher agricultural productivity and affordability. The dataset also supports research on environmentally sustainable agriculture, optimizing resource use and reducing environmental impact.
### Discussion of Biases and Known Limitations
The dataset primarily involves images from a single vertical farm setting therefore, while massive, includes relatively little variation in crop types. The dataset's contents and annotations may reflect regional agricultural practices and preferences. Business preferences also play a substantial role in determining the types of crops grown in vertical farms. These preferences, often influenced by market demand and profitability, can significantly differ from conventional open-air field agriculture. Therefore, the dataset may inherently reflect these business-driven crop choices, potentially affecting its representativeness of broader agricultural scenarios.
## Additional Information
### Dataset Curators
The dataset is curated by DeepPlants and AgricolaModerna. You can contact us for further information at:
nico@deepplants.com
etienne.david@agricolamoderna.com
### Licensing Information
### Citation Information
If you use the AGM dataset in your work, please consider citing the following publication:
```bibtex
@InProceedings{Sama_2023_ICCV,
author = {Sama, Nico and David, Etienne and Rossetti, Simone and Antona, Alessandro and Franchetti, Benjamin and Pirri, Fiora},
title = {A new Large Dataset and a Transfer Learning Methodology for Plant Phenotyping in Vertical Farms},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {October},
year = {2023},
pages = {540-551}
}
``` | [
-0.5162279605865479,
-0.5520811676979065,
0.31600216031074524,
0.03924323990941048,
-0.01133783534169197,
-0.04378437250852585,
-0.15158861875534058,
-0.6906788349151611,
0.14001646637916565,
0.38918229937553406,
-0.4153907299041748,
-0.8539443016052246,
-0.8585140109062195,
0.164553076028... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
grv805/prompt | grv805 | 2023-08-18T06:04:04Z | 16 | 0 | null | [
"region:us"
] | 2023-08-18T06:04:04Z | 2023-08-18T05:05:37.000Z | 2023-08-18T05:05:37 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_KoboldAI__OPT-2.7B-Erebus | open-llm-leaderboard | 2023-10-19T17:37:09Z | 16 | 0 | null | [
"region:us"
] | 2023-10-19T17:37:09Z | 2023-08-18T11:45:16.000Z | 2023-08-18T11:45:16 | ---
pretty_name: Evaluation run of KoboldAI/OPT-2.7B-Erebus
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [KoboldAI/OPT-2.7B-Erebus](https://huggingface.co/KoboldAI/OPT-2.7B-Erebus) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KoboldAI__OPT-2.7B-Erebus\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-19T17:36:56.774550](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__OPT-2.7B-Erebus/blob/main/results_2023-10-19T17-36-56.774550.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0008389261744966443,\n\
\ \"em_stderr\": 0.0002964962989801233,\n \"f1\": 0.048876887583892685,\n\
\ \"f1_stderr\": 0.001194025950365591,\n \"acc\": 0.309724666446861,\n\
\ \"acc_stderr\": 0.007590424725381782\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0008389261744966443,\n \"em_stderr\": 0.0002964962989801233,\n\
\ \"f1\": 0.048876887583892685,\n \"f1_stderr\": 0.001194025950365591\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.003032600454890068,\n \
\ \"acc_stderr\": 0.0015145735612245438\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6164167324388319,\n \"acc_stderr\": 0.013666275889539019\n\
\ }\n}\n```"
repo_url: https://huggingface.co/KoboldAI/OPT-2.7B-Erebus
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|arc:challenge|25_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_19T17_36_56.774550
path:
- '**/details_harness|drop|3_2023-10-19T17-36-56.774550.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-19T17-36-56.774550.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_19T17_36_56.774550
path:
- '**/details_harness|gsm8k|5_2023-10-19T17-36-56.774550.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-19T17-36-56.774550.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hellaswag|10_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T17:05:35.885445.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T17:05:35.885445.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T17:05:35.885445.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_19T17_36_56.774550
path:
- '**/details_harness|winogrande|5_2023-10-19T17-36-56.774550.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-19T17-36-56.774550.parquet'
- config_name: results
data_files:
- split: 2023_07_19T17_05_35.885445
path:
- results_2023-07-19T17:05:35.885445.parquet
- split: 2023_10_19T17_36_56.774550
path:
- results_2023-10-19T17-36-56.774550.parquet
- split: latest
path:
- results_2023-10-19T17-36-56.774550.parquet
---
# Dataset Card for Evaluation run of KoboldAI/OPT-2.7B-Erebus
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/KoboldAI/OPT-2.7B-Erebus
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [KoboldAI/OPT-2.7B-Erebus](https://huggingface.co/KoboldAI/OPT-2.7B-Erebus) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KoboldAI__OPT-2.7B-Erebus",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-19T17:36:56.774550](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__OPT-2.7B-Erebus/blob/main/results_2023-10-19T17-36-56.774550.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split of each eval):
```python
{
"all": {
"em": 0.0008389261744966443,
"em_stderr": 0.0002964962989801233,
"f1": 0.048876887583892685,
"f1_stderr": 0.001194025950365591,
"acc": 0.309724666446861,
"acc_stderr": 0.007590424725381782
},
"harness|drop|3": {
"em": 0.0008389261744966443,
"em_stderr": 0.0002964962989801233,
"f1": 0.048876887583892685,
"f1_stderr": 0.001194025950365591
},
"harness|gsm8k|5": {
"acc": 0.003032600454890068,
"acc_stderr": 0.0015145735612245438
},
"harness|winogrande|5": {
"acc": 0.6164167324388319,
"acc_stderr": 0.013666275889539019
}
}
```
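For quick inspection, the per-task metrics in the results above can be pulled out with a few lines of plain Python. This is only an illustrative sketch: the `latest` dict below copies the values from the JSON block, and `task_accuracies` is a hypothetical helper, not part of the dataset.

```python
# Values copied from the "Latest results" JSON above; per-task keys follow
# the harness naming convention "harness|<task>|<n_shot>".
latest = {
    "all": {"em": 0.0008389261744966443, "f1": 0.048876887583892685,
            "acc": 0.309724666446861},
    "harness|drop|3": {"em": 0.0008389261744966443,
                       "f1": 0.048876887583892685},
    "harness|gsm8k|5": {"acc": 0.003032600454890068},
    "harness|winogrande|5": {"acc": 0.6164167324388319},
}

def task_accuracies(results: dict) -> dict:
    """Return {task (n-shot): acc} for every per-task entry reporting 'acc'."""
    out = {}
    for key, metrics in results.items():
        if key == "all" or "acc" not in metrics:
            continue  # skip the aggregate entry and tasks without accuracy
        _, task, n_shot = key.split("|")
        out[f"{task} ({n_shot}-shot)"] = metrics["acc"]
    return out

print(task_accuracies(latest))
```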
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
fake-news-UFG/fakebr | fake-news-UFG | 2023-08-18T13:51:35Z | 16 | 0 | null | [
"task_categories:text-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:pt",
"region:us"
] | 2023-08-18T13:51:35Z | 2023-08-18T11:46:19.000Z | 2023-08-18T11:46:19 | ---
pretty_name: Fake.br
task_categories:
- text-classification
language:
- pt
language_details: pt-BR
size_categories:
- 1K<n<10K
multilinguality:
- monolingual
language_creators:
- found
---
# Dataset Card for fake.br
## Dataset Description
- **Homepage:**
- **Repository:** [https://github.com/roneysco/Fake.br-Corpus/](https://github.com/roneysco/Fake.br-Corpus/)
- **Paper:** [https://sites.icmc.usp.br/taspardo/OpenCor2018-SantosEtAl.pdf](https://sites.icmc.usp.br/taspardo/OpenCor2018-SantosEtAl.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Fake.Br Corpus is composed of aligned true and fake news written in Brazilian Portuguese.
### Supported Tasks and Leaderboards
The task is text classification of news content.
### Languages
The data is in Brazilian Portuguese (pt-BR).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use "Fake.br Dataset", please include a citation to the project website and the corresponding paper published in PROPOR 2018 conference:
```bibtex
@InProceedings{fakebr:18,
author={Monteiro, Rafael A. and Santos, Roney L. S. and Pardo, Thiago A. S. and de Almeida, Tiago A. and Ruiz, Evandro E. S. and Vale, Oto A.},
title={Contributions to the Study of Fake News in Portuguese: New Corpus and Automatic Detection Results},
booktitle={Computational Processing of the Portuguese Language},
year={2018},
publisher={Springer International Publishing},
pages={324--334},
isbn={978-3-319-99722-3},
}
```
or the paper published in Expert Systems with Applications:
```bibtex
@article{silva:20,
title = "Towards automatically filtering fake news in Portuguese",
journal = "Expert Systems with Applications",
volume = "146",
pages = "113199",
year = "2020",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2020.113199",
url = "http://www.sciencedirect.com/science/article/pii/S0957417420300257",
author = "Renato M. Silva and Roney L.S. Santos and Tiago A. Almeida and Thiago A.S. Pardo",
}
```
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
dim/openreview_prompts_65 | dim | 2023-08-20T20:33:33Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-20T20:33:33Z | 2023-08-19T15:13:25.000Z | 2023-08-19T15:13:25 | ---
license: mit
dataset_info:
features:
- name: full_review
dtype: string
- name: latex
dtype: string
- name: paper_url
dtype: string
- name: arxiv_url
dtype: string
- name: help_prompt
dtype: string
splits:
- name: train
num_bytes: 6752074
num_examples: 150
download_size: 1488188
dataset_size: 6752074
---
dim/kinomania_scripts | dim | 2023-08-20T21:35:44Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-20T21:35:44Z | 2023-08-19T19:56:44.000Z | 2023-08-19T19:56:44 | ---
license: mit
dataset_info:
features:
- name: movie_script
dtype: string
- name: movie_description
dtype: string
- name: title
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 4912326
num_examples: 27
download_size: 2757276
dataset_size: 4912326
---
vivym/midjourney-prompts | vivym | 2023-11-15T06:24:52Z | 16 | 12 | null | [
"task_categories:text-to-image",
"language:en",
"license:apache-2.0",
" midjourney",
"region:us"
] | 2023-11-15T06:24:52Z | 2023-08-25T16:57:14.000Z | 2023-08-25T16:57:14 | ---
license: apache-2.0
task_categories:
- text-to-image
tags:
- ' midjourney'
language:
- en
---
# midjourney-prompts
## Description
This dataset contains cleaned prompts collected from Midjourney.
Total prompts: 9,085,397
| Version | Count |
| ------- | --------- |
| 5.2 | 2,272,465 |
| 5.1 | 2,060,106 |
| 5.0 | 3,530,770 |
| 4.0 | 1,204,384 |
| 3.0 | 14,991 |
| 2.0 | 791 |
| 1.0 | 1,239 |
| Style | Count |
| --------- | ----------- |
| default | 8,874,181 |
| raw | 177,953 |
| expressive| 27,919 |
| scenic | 2,146 |
| cute | 2,036 |
| original  | 511         |
dim/scitldr | dim | 2023-08-31T19:47:53Z | 16 | 0 | null | [
"region:us"
] | 2023-08-31T19:47:53Z | 2023-08-31T19:47:16.000Z | 2023-08-31T19:47:16 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 4016919
num_examples: 3229
download_size: 2222180
dataset_size: 4016919
---
# Dataset Card for "scitldr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yangwang825/audioset | yangwang825 | 2023-09-18T11:19:55Z | 16 | 0 | null | [
"task_categories:audio-classification",
"size_categories:100M<n<1B",
"audioset",
"region:us"
] | 2023-09-18T11:19:55Z | 2023-09-02T12:56:33.000Z | 2023-09-02T12:56:33 | ---
configs:
- config_name: audioset500k
data_files:
- split: train
path: audioset500k.json
- config_name: balanced_train
data_files:
- split: train
path: balanced_train.json
- config_name: eval
data_files:
- split: test
path: eval.json
- config_name: unbalanced_train_part00
data_files: unbalanced_train_part00.json
# dataset_size: 46940
- config_name: unbalanced_train_part01
data_files: unbalanced_train_part01.json
# dataset_size: 47052
- config_name: unbalanced_train_part02
data_files: unbalanced_train_part02.json
# dataset_size: 46923
- config_name: unbalanced_train_part03
data_files: unbalanced_train_part03.json
# dataset_size: 46952
- config_name: unbalanced_train_part04
data_files: unbalanced_train_part04.json
# dataset_size: 46916
- config_name: unbalanced_train_part05
data_files: unbalanced_train_part05.json
# dataset_size: 47011
- config_name: unbalanced_train_part06
data_files: unbalanced_train_part06.json
# dataset_size: 46964
- config_name: unbalanced_train_part07
data_files: unbalanced_train_part07.json
# dataset_size: 46915
- config_name: unbalanced_train_part08
data_files: unbalanced_train_part08.json
# dataset_size: 46927
- config_name: unbalanced_train_part09
data_files: unbalanced_train_part09.json
# dataset_size: 46839
- config_name: unbalanced_train_part10
data_files: unbalanced_train_part10.json
# dataset_size: 46862
- config_name: unbalanced_train_part11
data_files: unbalanced_train_part11.json
# dataset_size: 46836
- config_name: unbalanced_train_part12
data_files: unbalanced_train_part12.json
# dataset_size: 46865
- config_name: unbalanced_train_part13
data_files: unbalanced_train_part13.json
# dataset_size: 46800
- config_name: unbalanced_train_part14
data_files: unbalanced_train_part14.json
# dataset_size: 46837
- config_name: unbalanced_train_part15
data_files: unbalanced_train_part15.json
# dataset_size: 46824
- config_name: unbalanced_train_part16
data_files: unbalanced_train_part16.json
# dataset_size: 46813
- config_name: unbalanced_train_part17
data_files: unbalanced_train_part17.json
# dataset_size: 46771
- config_name: unbalanced_train_part18
data_files: unbalanced_train_part18.json
# dataset_size: 46875
- config_name: unbalanced_train_part19
data_files: unbalanced_train_part19.json
# dataset_size: 46885
- config_name: unbalanced_train_part20
data_files: unbalanced_train_part20.json
# dataset_size: 46884
- config_name: unbalanced_train_part21
data_files: unbalanced_train_part21.json
# dataset_size: 46736
- config_name: unbalanced_train_part22
data_files: unbalanced_train_part22.json
# dataset_size: 46832
- config_name: unbalanced_train_part23
data_files: unbalanced_train_part23.json
# dataset_size: 46823
- config_name: unbalanced_train_part24
data_files: unbalanced_train_part24.json
# dataset_size: 46795
- config_name: unbalanced_train_part25
data_files: unbalanced_train_part25.json
# dataset_size: 46740
- config_name: unbalanced_train_part26
data_files: unbalanced_train_part26.json
# dataset_size: 46765
- config_name: unbalanced_train_part27
data_files: unbalanced_train_part27.json
# dataset_size: 46708
- config_name: unbalanced_train_part28
data_files: unbalanced_train_part28.json
# dataset_size: 46736
- config_name: unbalanced_train_part29
data_files: unbalanced_train_part29.json
# dataset_size: 46819
- config_name: unbalanced_train_part30
data_files: unbalanced_train_part30.json
# dataset_size: 46694
- config_name: unbalanced_train_part31
data_files: unbalanced_train_part31.json
# dataset_size: 46735
- config_name: unbalanced_train_part32
data_files: unbalanced_train_part32.json
# dataset_size: 46731
- config_name: unbalanced_train_part33
data_files: unbalanced_train_part33.json
# dataset_size: 46627
- config_name: unbalanced_train_part34
data_files: unbalanced_train_part34.json
# dataset_size: 46740
- config_name: unbalanced_train_part35
data_files: unbalanced_train_part35.json
# dataset_size: 46866
- config_name: unbalanced_train_part36
data_files: unbalanced_train_part36.json
# dataset_size: 46758
- config_name: unbalanced_train_part37
data_files: unbalanced_train_part37.json
# dataset_size: 46751
- config_name: unbalanced_train_part38
data_files: unbalanced_train_part38.json
# dataset_size: 46750
- config_name: unbalanced_train_part39
data_files: unbalanced_train_part39.json
# dataset_size: 46700
- config_name: unbalanced_train_part40
data_files: unbalanced_train_part40.json
# dataset_size: 39137
task_categories:
- audio-classification
tags:
- audioset
size_categories:
- 100M<n<1B
---
# AudioSet
AudioSet<sup>[1]</sup> consists of an expanding ontology of 527 audio event classes and a collection of 2M human-labelled 10-second sound clips drawn from YouTube.
Some clips have been removed from YouTube, so the number of files that can be downloaded varies over time.
This repository contains 20550 / 22160 clips of the balanced train set, 1913637 / 2041789 of the unbalanced train set (separated into 41 parts), and 18887 / 20371 of the evaluation set.
The pre-processing script can be found in qiuqiangkong's [GitHub repository](https://github.com/qiuqiangkong/audioset_tagging_cnn)<sup>[2]</sup>.
To improve training efficiency, we add a slightly more balanced subset, AudioSet500K<sup>[3]</sup>.
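The per-split coverage quoted above works out to the following fractions (a small sketch; the counts are taken directly from this card):

```python
# Downloaded clips vs. clips listed in the official AudioSet release,
# using the counts stated in this card.
coverage = {
    "balanced_train": 20550 / 22160,
    "unbalanced_train": 1913637 / 2041789,
    "eval": 18887 / 20371,
}
for split, frac in coverage.items():
    print(f"{split}: {frac:.1%}")
```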
## References
1. Gemmeke, Jort F., et al., Audio set: An ontology and human-labeled dataset for audio events, 2017
2. Kong, Qiuqiang, et al., Panns: Large-scale pretrained audio neural networks for audio pattern recognition, 2020
3. Nagrani, Arsha, et al., Attention bottlenecks for multimodal fusion, 2021
miazhao/prm800k_processed_preference | miazhao | 2023-09-04T00:10:16Z | 16 | 2 | null | [
"region:us"
] | 2023-09-04T00:10:16Z | 2023-09-04T00:10:15.000Z | 2023-09-04T00:10:15 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: responses
sequence: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 23805614
num_examples: 22036
download_size: 9396871
dataset_size: 23805614
---
# Dataset Card for "prm800k_processed_preference"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sdadas/gpt-exams | sdadas | 2023-09-09T12:06:12Z | 16 | 1 | null | [
"task_categories:question-answering",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:pl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-09-09T12:06:12Z | 2023-09-09T11:25:39.000Z | 2023-09-09T11:25:39 | ---
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- question-answering
pretty_name: GPT-exams
dataset_info:
features:
- name: _id
dtype: int32
- name: question
dtype: string
- name: answer
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 17237681
num_examples: 8131
---
# GPT-exams
### Dataset summary
The dataset contains 8131 multi-domain question-answer pairs. It was created semi-automatically using the `gpt-3.5-turbo-0613` model available in the OpenAI API. The process of building the dataset was as follows:
1. We manually prepared a list of 409 university-level courses from various fields. For each course, we instructed the model with the prompt: "Wygeneruj 20 przykładowych pytań na egzamin z [nazwa przedmiotu]" (Generate 20 sample questions for the [course name] exam).
2. We then parsed the outputs of the model to extract individual questions and performed their deduplication.
3. In the next step, we requested the model to generate the answer to each of the collected questions. We used the following prompt: "Odpowiedz na następujące pytanie z dziedziny [nazwa przedmiotu]: [treść pytania]" (Answer the following question from [course name]: [question content]). Along with the prompt, we also sent the following system message: "Jesteś ekspertem w dziedzinie [nazwa przedmiotu]. Udzielasz specjalistycznych i wyczerpujących odpowiedzi na pytania." (You are an expert in [course name]. You provide knowledgeable and comprehensive answers to questions).
4. In the last step, we manually removed from the dataset the cases in which the model refused to answer the question. We searched for occurrences of phrases such as "model języka" (language model), "nie jestem" (I'm not), or "nie mogę" (I can't).
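The filtering in step 4 can be sketched as follows. This is illustrative only: `filter_refusals` and the sample rows are invented, while the refusal phrases are the ones listed above.

```python
# Refusal phrases from step 4 above; records follow the dataset's
# question/answer fields (the sample rows here are made up).
REFUSAL_MARKERS = ["model języka", "nie jestem", "nie mogę"]

def filter_refusals(records):
    """Drop records whose answer contains any refusal marker."""
    return [r for r in records
            if not any(m in r["answer"].lower() for m in REFUSAL_MARKERS)]

sample = [
    {"question": "Co to jest DI?", "answer": "Dependency injection to technika..."},
    {"question": "Kim jesteś?", "answer": "Jako model języka nie mogę odpowiedzieć."},
]
print(len(filter_refusals(sample)))  # only the first record survives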
### Data Instances
Example instance:
```
{
"_id": 2338,
"domain": "wzorców projektowych w oprogramowaniu",
"question": "Co to jest dependency injection i jak może być wykorzystane w kontekście wzorców projektowych?",
"answer": "Dependency injection (DI) to technika wstrzykiwania zależności, która polega na dostarczaniu obiektowi (...)"
}
```
### Data Fields
- _id: record id
- question: question text
- answer: answer text
- domain: name of the course / field / domain
minh21/cpgQA-v1.0-unique-context-test-10-percent-validation-10-percent | minh21 | 2023-09-09T11:37:51Z | 16 | 0 | null | [
"region:us"
] | 2023-09-09T11:37:51Z | 2023-09-09T11:37:47.000Z | 2023-09-09T11:37:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: title
dtype: string
- name: id
dtype: int64
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 1176326
num_examples: 884
- name: test
num_bytes: 122341
num_examples: 109
- name: validation
num_bytes: 136762
num_examples: 104
download_size: 200983
dataset_size: 1435429
---
# Dataset Card for "cpgQA-v1.0-unique-context-test-10-percent-validation-10-percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
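The `answer_start`, `answer_text`, and `context` fields declared in the YAML above appear to follow the usual extractive-QA (SQuAD-style) convention, where `answer_start` is a character offset into `context`. A minimal consistency check, using an invented record rather than an actual row:

```python
# Invented example record; real rows come from the dataset itself.
record = {
    "context": "Give aspirin 300 mg as soon as possible.",
    "answer_text": "aspirin 300 mg",
    "answer_start": 5,
}
start = record["answer_start"]
span = record["context"][start:start + len(record["answer_text"])]
assert span == record["answer_text"]  # offset and text agree
```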
mlabonne/MedText | mlabonne | 2023-09-09T16:24:24Z | 16 | 0 | null | [
"region:us"
] | 2023-09-09T16:24:24Z | 2023-09-09T13:00:03.000Z | 2023-09-09T13:00:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 943488
num_examples: 1412
download_size: 0
dataset_size: 943488
---
# Dataset Card for "MedText"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
asoria/draft-list-column | asoria | 2023-09-11T20:04:38Z | 16 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ru",
"license:apache-2... | 2023-09-11T20:04:38Z | 2023-09-11T20:03:01.000Z | 2023-09-11T20:03:01 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ru
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- multi-label-classification
pretty_name: The Corpus for Emotions Detecting in Russian-language text sentences
(CEDR)
tags:
- emotion-classification
dataset_info:
- config_name: main
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': joy
'1': sadness
'2': surprise
'3': fear
'4': anger
- name: source
dtype: string
splits:
- name: train
num_bytes: 1418355
num_examples: 7528
- name: test
num_bytes: 350275
num_examples: 1882
download_size: 693026
dataset_size: 1768630
- config_name: enriched
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': joy
'1': sadness
'2': surprise
'3': fear
'4': anger
- name: source
dtype: string
- name: sentences
list:
list:
- name: forma
dtype: string
- name: lemma
dtype: string
splits:
- name: train
num_bytes: 4792366
num_examples: 7528
- name: test
num_bytes: 1182343
num_examples: 1882
download_size: 1822522
dataset_size: 5974709
---
# Dataset Card for CEDR
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/sag111/CEDR)
- **Repository:** [GitHub](https://github.com/sag111/CEDR)
- **Paper:** [ScienceDirect](https://www.sciencedirect.com/science/article/pii/S1877050921013247)
- **Leaderboard:**
- **Point of Contact:** [@sag111](mailto:sag111@mail.ru)
### Dataset Summary
The Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9410 comments labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger).
Here are 2 dataset configurations:
- "main" - contains "text", "labels", and "source" features;
- "enriched" - includes all "main" features and "sentences".
The dataset comes with predefined train/test splits.
### Supported Tasks and Leaderboards
This dataset is intended for multi-label emotion classification.
### Languages
The data is in Russian.
## Dataset Structure
### Data Instances
Each instance is a text sentence in Russian from several sources with one or more emotion annotations (or no emotion at all).
An example for an instance from the dataset is shown below:
```
{
'text': 'Забавно как люди в возрасте удивляются входящим звонкам на мобильник)',
'labels': [0],
'source': 'twitter',
'sentences': [
[
{'forma': 'Забавно', 'lemma': 'Забавно'},
{'forma': 'как', 'lemma': 'как'},
{'forma': 'люди', 'lemma': 'человек'},
{'forma': 'в', 'lemma': 'в'},
{'forma': 'возрасте', 'lemma': 'возраст'},
{'forma': 'удивляются', 'lemma': 'удивляться'},
{'forma': 'входящим', 'lemma': 'входить'},
{'forma': 'звонкам', 'lemma': 'звонок'},
{'forma': 'на', 'lemma': 'на'},
{'forma': 'мобильник', 'lemma': 'мобильник'},
{'forma': ')', 'lemma': ')'}
]
]
}
```
Emotion label codes: {0: "joy", 1: "sadness", 2: "surprise", 3: "fear", 4: "anger"}
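Given the label mapping above, decoding an instance's integer labels into emotion names is a one-liner (the helper name here is illustrative):

```python
# Mapping copied from the label codes above.
ID2EMOTION = {0: "joy", 1: "sadness", 2: "surprise", 3: "fear", 4: "anger"}

def decode_labels(labels):
    return [ID2EMOTION[i] for i in labels]

print(decode_labels([0]))  # the example instance above -> ['joy']
```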
### Data Fields
The main configuration includes:
- text: the text of the sentence;
- labels: the emotion annotations;
- source: the tag name of the corresponding source
In addition to the above, the raw data includes:
- sentences: text tokenized and lemmatized with [udpipe](https://ufal.mff.cuni.cz/udpipe)
- 'forma': the original word form;
- 'lemma': the lemma of this word
### Data Splits
The dataset includes a set of train/test splits with 7528 and 1882 examples, respectively.
## Dataset Creation
### Curation Rationale
The dataset consists of sentences in Russian from several sources (blogs, microblogs, news), which makes it possible to build methods for analysing various types of text. The crowdsourcing-based methodology used to build the dataset can be reused to expand the number of examples and improve the accuracy of supervised classifiers.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from several sources: posts of the Live Journal social network, texts of the online news agency Lenta.ru, and Twitter microblog posts.
Only those sentences were selected that contained marker words from the dictionary of [the emotive vocabulary of the Russian language](http://lexrus.ru/default.aspx?p=2876). The authors manually formed a list of marker words for each emotion by choosing words from different categories of the dictionary.
In total, 3069 sentences were selected from LiveJournal posts, 2851 sentences from Lenta.Ru, and 3490 sentences from Twitter. After selection, sentences were offered to annotators for labeling.
#### Who are the source language producers?
Russian-speaking LiveJournal and Twitter users, and authors of news articles on the site Lenta.ru.
### Annotations
#### Annotation process
Annotating sentences with labels of their emotions was performed with the help of [a crowdsourcing platform](https://yandex.ru/support/toloka/index.html?lang=en).
The annotators’ task was: “What emotions did the author express in the sentence?”. The annotators were allowed to put an arbitrary number of the following emotion labels: "joy", "sadness", "anger", "fear", and "surprise".
If the accuracy of an annotator on the control sentences (including the trial run) became less than 70%, or if the accuracy was less than 66% over the last six control samples, the annotator was dismissed.
Sentences were split into tasks and assigned to annotators so that each sentence was annotated at least three times. A label of a specific emotion was assigned to a sentence if put by more than half of the annotators.
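A minimal sketch of this majority-vote aggregation (the per-annotator annotation format is an assumption; labels use the numeric emotion codes listed earlier):

```python
from collections import Counter

def aggregate(annotations):
    """Keep an emotion label only if strictly more than half of the
    annotators put it on the sentence."""
    n = len(annotations)
    counts = Counter(label for ann in annotations for label in set(ann))
    return sorted(label for label, count in counts.items() if count > n / 2)

# Three annotators: label 0 chosen by two of three, label 2 by only one.
print(aggregate([[0], [0, 2], []]))  # [0]
```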
#### Who are the annotators?
Only the top 30% of active users by the platform’s internal rating, who spoke Russian and were over 18 years old, were allowed into the annotation process. Moreover, before a platform user could be employed as an annotator, they underwent a training task, after which they had to annotate 25 trial samples with more than 80% agreement with the annotation the authors had performed themselves.
### Personal and Sensitive Information
The text of the sentences may contain profanity.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Researchers at the AI technology lab at NRC "Kurchatov Institute". See the author [list](https://www.sciencedirect.com/science/article/pii/S1877050921013247).
### Licensing Information
The GitHub repository which houses this dataset has an Apache License 2.0.
### Citation Information
If you have found our results helpful in your work, feel free to cite our publication. This is an updated version of the dataset, the collection and preparation of which is described here:
```
@article{sboev2021data,
title={Data-Driven Model for Emotion Detection in Russian Texts},
author={Sboev, Alexander and Naumov, Aleksandr and Rybka, Roman},
journal={Procedia Computer Science},
volume={190},
pages={637--642},
year={2021},
publisher={Elsevier}
}
```
### Contributions
Thanks to [@naumov-al](https://github.com/naumov-al) for adding this dataset. | [
-0.26716578006744385,
-0.6146878004074097,
0.3026992380619049,
0.2271486520767212,
-0.4374569058418274,
-0.04275735095143318,
-0.4057561159133911,
-0.26656848192214966,
0.42915138602256775,
0.06194279342889786,
-0.6811128854751587,
-1.0851311683654785,
-0.74955153465271,
0.2566643059253692... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dippi9845/arxiv2_with_fragments_clean | Dippi9845 | 2023-09-12T13:35:37Z | 16 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2023-09-12T13:35:37Z | 2023-09-12T13:31:06.000Z | 2023-09-12T13:31:06 | ---
license: cc-by-nc-nd-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BYC-Sophie/samsum-chatgpt-summary | BYC-Sophie | 2023-09-13T04:12:18Z | 16 | 1 | null | [
"region:us"
] | 2023-09-13T04:12:18Z | 2023-09-13T01:35:16.000Z | 2023-09-13T01:35:16 | This dataset is based on the [SAMSum](https://huggingface.co/datasets/samsum) dataset.
The summaries were generated by prompting the OpenAI ChatGPT API (gpt-3.5-turbo) with a temperature of 0.7.
The fine-tuned models outperform the baselines on multiple metrics, demonstrating ChatGPT’s few-shot learning and summarization ability, and thus the potential to save human labor in summarization annotation.
Fine-tuned models are also uploaded to Hugging Face.
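As a rough sketch of this setup, a request could be built along the following lines (the exact prompt wording is an assumption not stated in this card, and the snippet only constructs the payload rather than calling the API):

```python
def build_request(dialogue):
    """Build a chat-completion payload for summarizing one SAMSum dialogue."""
    return {
        "model": "gpt-3.5-turbo",
        "temperature": 0.7,
        "messages": [
            {"role": "user",
             "content": "Summarize the following dialogue:\n" + dialogue},
        ],
    }

req = build_request("Amanda: I baked cookies.\nJerry: Sure!")
print(req["model"], req["temperature"])  # gpt-3.5-turbo 0.7
```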
| [
-0.5300148725509644,
-0.3072968125343323,
0.16672389209270477,
0.15166813135147095,
-0.495524525642395,
-0.24726144969463348,
-0.1270037740468979,
-0.36484622955322266,
0.6355317234992981,
0.48246052861213684,
-0.5877746343612671,
-0.4749767482280731,
-0.4682466387748718,
-0.19999632239341... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wuliangfo/Chinese-Pixiv-Novel | wuliangfo | 2023-09-18T11:27:13Z | 16 | 11 | null | [
"license:openrail",
"region:us"
] | 2023-09-18T11:27:13Z | 2023-09-13T02:03:57.000Z | 2023-09-13T02:03:57 | ---
license: openrail
---
This is an R-18 (including R-18G) Simplified Chinese novel dataset from the Pixiv website.
It contains 145,163 novels in total; the data runs up to 7 PM Beijing time on September 12, 2023.
Storage layout: Pixiv/userID/ID.txt holds the novel text, and Pixiv/userID/ID-meta.txt holds extra information (including tag, title, Description, etc.).
The data has not been cleaned and may contain low-quality content. | [
-0.6246998906135559,
-0.9360314607620239,
0.05754832178354263,
0.25419121980667114,
-0.7689655423164368,
-0.36821144819259644,
-0.02434099093079567,
-0.4812188446521759,
0.44740548729896545,
0.6247958540916443,
-0.7508355379104614,
-0.6451026201248169,
-0.6587746143341064,
0.45313534140586... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/c4_academicbiomedical_2 | zxvix | 2023-09-13T03:58:39Z | 16 | 0 | null | [
"region:us"
] | 2023-09-13T03:58:39Z | 2023-09-13T03:35:41.000Z | 2023-09-13T03:35:41 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 2352052.0
num_examples: 986
download_size: 1376270
dataset_size: 2352052.0
---
# Dataset Card for "c4_academicbiomedical_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.24552905559539795,
-0.16009773313999176,
0.4331332743167877,
0.0938095971941948,
-0.1382426917552948,
0.2281087189912796,
0.394275426864624,
-0.4683294892311096,
0.7924795150756836,
0.3177140951156616,
-0.6838783025741577,
-0.7481006383895874,
-0.732751190662384,
-0.06925332546234131,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HuggingFaceH4/lima_llama2 | HuggingFaceH4 | 2023-09-17T04:03:38Z | 16 | 4 | null | [
"region:us"
] | 2023-09-17T04:03:38Z | 2023-09-17T04:03:27.000Z | 2023-09-17T04:03:27 | ---
dataset_info:
features:
- name: conversations
sequence: string
- name: source
dtype: string
- name: length
dtype: int64
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: meta
struct:
- name: category
dtype: string
- name: source
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8806712
num_examples: 1000
- name: test
num_bytes: 188848
num_examples: 300
download_size: 5237615
dataset_size: 8995560
---
# Dataset Card for "lima_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.38763999938964844,
-0.2317977398633957,
0.4069279134273529,
0.6530135273933411,
-0.6703988313674927,
-0.06330123543739319,
0.4968917667865753,
-0.3391948342323303,
0.9492442011833191,
0.5336843729019165,
-0.7736124396324158,
-0.8362927436828613,
-0.9284524321556091,
-0.1527504026889801,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
danlou/safespace-8877-20230920 | danlou | 2023-09-20T15:10:39Z | 16 | 0 | null | [
"region:us"
] | 2023-09-20T15:10:39Z | 2023-09-20T15:09:45.000Z | 2023-09-20T15:09:45 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/databricks_dolly_15k_en | dim | 2023-09-20T15:47:41Z | 16 | 0 | null | [
"region:us"
] | 2023-09-20T15:47:41Z | 2023-09-20T15:47:37.000Z | 2023-09-20T15:47:37 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 12195589
num_examples: 15011
download_size: 7749182
dataset_size: 12195589
---
# Dataset Card for "databricks-dolly-15k_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4132079482078552,
-0.2802063524723053,
-0.05645737797021866,
0.654731273651123,
-0.34945330023765564,
0.11828728765249252,
0.5328314900398254,
-0.0618172325193882,
0.8573293089866638,
0.4804689884185791,
-0.9514685273170471,
-0.687425434589386,
-0.591847836971283,
0.047430891543626785,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joey234/affixal_negation | joey234 | 2023-10-13T01:33:00Z | 16 | 1 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-10-13T01:33:00Z | 2023-09-21T05:28:43.000Z | 2023-09-21T05:28:43 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: e
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
- This dataset contains a list of affixal negations and their non-negated counterparts (e.g. unintended - intended).
- This dataset is from [van Son et al. (2016)](https://aclanthology.org/W16-5007/).
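As a toy illustration of such pairs, one could strip a negating affix to recover the non-negated counterpart (the prefix list here is illustrative only, not the inventory used by the authors):

```python
# Illustrative negating prefixes; the dataset covers affixal negations broadly.
NEG_PREFIXES = ("un", "in", "non", "dis", "ir", "im", "il")

def strip_negation(word):
    """Return the non-negated counterpart if the word starts with a
    known negating prefix; otherwise return the word unchanged."""
    for prefix in NEG_PREFIXES:
        if word.startswith(prefix) and len(word) > len(prefix) + 2:
            return word[len(prefix):]
    return word

print(strip_negation("unintended"))  # intended
```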
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.6124334931373596,
-0.8051000237464905,
-0.02003536932170391,
0.11491820961236954,
-0.2192802131175995,
-0.09845243394374847,
-0.1684771478176117,
-0.3192512094974518,
0.4970879852771759,
0.7398781180381775,
-0.9526987075805664,
-0.974238932132721,
-0.8402039408683777,
0.3159790933132171... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/joke_explaination_prompts | dim | 2023-09-21T19:42:40Z | 16 | 0 | null | [
"region:us"
] | 2023-09-21T19:42:40Z | 2023-09-21T19:42:38.000Z | 2023-09-21T19:42:38 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: explaination
dtype: string
splits:
- name: train
num_bytes: 194768
num_examples: 364
download_size: 110662
dataset_size: 194768
---
# Dataset Card for "joke_explaination_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6364125609397888,
-0.5309644341468811,
0.4392474889755249,
0.43903201818466187,
-0.49991166591644287,
-0.2724236249923706,
0.23405198752880096,
0.10599764436483383,
0.720595121383667,
0.37311166524887085,
-1.1412029266357422,
-0.6405379772186279,
-0.4169371724128723,
-0.1020880341529846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/law_stackexchange_prompts | dim | 2023-09-21T21:00:28Z | 16 | 0 | null | [
"region:us"
] | 2023-09-21T21:00:28Z | 2023-09-21T20:59:57.000Z | 2023-09-21T20:59:57 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 64447591
num_examples: 24343
download_size: 38111723
dataset_size: 64447591
---
# Dataset Card for "law_stackexchange_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.45747804641723633,
-0.20339441299438477,
0.3095663785934448,
0.31918200850486755,
-0.33370643854141235,
-0.23336519300937653,
0.34032994508743286,
0.10328372567892075,
0.725853681564331,
0.5901082158088684,
-0.8560048937797546,
-0.7170655131340027,
-0.40843796730041504,
-0.2932491898536... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/AO3_fandom_chatbot_1to1 | dim | 2023-09-25T17:58:32Z | 16 | 0 | null | [
"region:us"
] | 2023-09-25T17:58:32Z | 2023-09-24T14:35:07.000Z | 2023-09-24T14:35:07 | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1203600
num_examples: 614
download_size: 0
dataset_size: 1203600
---
# Dataset Card for "AO3_fandom_chatbot_1to1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5851610898971558,
-0.4735565781593323,
-0.029274124652147293,
0.3626209795475006,
-0.07170701026916504,
-0.11357108503580093,
0.4920874834060669,
-0.02822864055633545,
0.8634456992149353,
0.7635965943336487,
-0.9307720065116882,
-0.6487780809402466,
-0.647823691368103,
-0.12110085785388... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/habr_prompts_5k | dim | 2023-09-25T18:21:34Z | 16 | 0 | null | [
"region:us"
] | 2023-09-25T18:21:34Z | 2023-09-25T00:25:09.000Z | 2023-09-25T00:25:09 | ---
dataset_info:
features:
- name: solution_short_llama2
dtype: string
- name: id
dtype: int64
- name: language
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text_markdown
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: original_author
dtype: string
- name: original_url
dtype: string
- name: lead_html
dtype: string
- name: lead_markdown
dtype: string
- name: type
dtype: string
- name: time_published
dtype: int64
- name: statistics
struct:
- name: commentsCount
dtype: int64
- name: favoritesCount
dtype: int64
- name: readingCount
dtype: int64
- name: score
dtype: int64
- name: votesCount
dtype: int64
- name: votesCountMinus
dtype: int64
- name: votesCountPlus
dtype: int64
- name: labels
sequence: string
- name: hubs
sequence: string
- name: flows
sequence: string
- name: tags
sequence: string
- name: reading_time
dtype: int64
- name: format
dtype: string
- name: complexity
dtype: string
- name: comments
struct:
- name: author
sequence: string
- name: children
sequence:
sequence: int64
- name: id
sequence: int64
- name: level
sequence: int64
- name: message_html
sequence: string
- name: message_markdown
sequence: string
- name: parent_id
sequence: int64
- name: score
sequence: int64
- name: time_published
sequence: int64
- name: votes
sequence: int64
- name: readingCount
dtype: int64
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1032739347
num_examples: 5000
download_size: 495188038
dataset_size: 1032739347
---
# Dataset Card for "habr_prompts_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7136879563331604,
-0.34761446714401245,
0.10839758813381195,
0.4728841781616211,
-0.25513216853141785,
-0.0108872689306736,
0.4527316689491272,
-0.21080230176448822,
0.8551116585731506,
0.4868053197860718,
-0.8778059482574463,
-0.787077784538269,
-0.366200715303421,
0.005978083703666925... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/competition_math | dim | 2023-09-25T12:10:40Z | 16 | 0 | null | [
"region:us"
] | 2023-09-25T12:10:40Z | 2023-09-25T12:10:37.000Z | 2023-09-25T12:10:37 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5984772
num_examples: 7500
download_size: 2992145
dataset_size: 5984772
---
# Dataset Card for "competition_math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5724526047706604,
-0.254580020904541,
0.09300413727760315,
0.4074781835079193,
-0.097431980073452,
0.07362786680459976,
0.21612875163555145,
0.013056925497949123,
0.72420334815979,
0.2851656377315521,
-0.8458088040351868,
-0.7301149368286133,
-0.5794055461883545,
-0.340025931596756,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/sharegpt_short_en_30k | dim | 2023-09-25T13:16:03Z | 16 | 0 | null | [
"region:us"
] | 2023-09-25T13:16:03Z | 2023-09-25T13:15:28.000Z | 2023-09-25T13:15:28 | ---
dataset_info:
features:
- name: conversation
sequence: string
- name: hash
dtype: string
splits:
- name: train
num_bytes: 88612458
num_examples: 29597
download_size: 44347819
dataset_size: 88612458
---
# Dataset Card for "sharegpt_short_en_30k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.716871976852417,
-0.19936072826385498,
0.15300947427749634,
0.43470969796180725,
-0.45769256353378296,
0.000689086620695889,
0.025396021082997322,
-0.15608921647071838,
0.8287214636802673,
0.26242321729660034,
-0.8872076272964478,
-0.8249901533126831,
-0.797859251499176,
-0.267635107040... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/tldr_17_50k | dim | 2023-09-25T13:49:24Z | 16 | 0 | null | [
"region:us"
] | 2023-09-25T13:49:24Z | 2023-09-25T13:45:30.000Z | 2023-09-25T13:45:30 | ---
dataset_info:
features:
- name: author
dtype: string
- name: body
dtype: string
- name: normalizedBody
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: id
dtype: string
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 246031411.71625096
num_examples: 50000
download_size: 156564697
dataset_size: 246031411.71625096
---
# Dataset Card for "tldr_17_50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4397831857204437,
-0.14767047762870789,
0.027759060263633728,
0.19763655960559845,
-0.36108601093292236,
0.21262316405773163,
0.2231956124305725,
-0.1661708801984787,
0.5694456696510315,
0.48250916600227356,
-0.8130665421485901,
-0.947034478187561,
-0.6037301421165466,
-0.18221761286258... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/tldr_news | dim | 2023-09-25T13:52:00Z | 16 | 0 | null | [
"region:us"
] | 2023-09-25T13:52:00Z | 2023-09-25T13:51:55.000Z | 2023-09-25T13:51:55 | ---
dataset_info:
features:
- name: headline
dtype: string
- name: content
dtype: string
- name: category
dtype:
class_label:
names:
'0': Sponsor
'1': Big Tech & Startups
'2': Science and Futuristic Technology
'3': Programming, Design & Data Science
'4': Miscellaneous
splits:
- name: train
num_bytes: 4000442
num_examples: 7138
download_size: 2554140
dataset_size: 4000442
---
# Dataset Card for "tldr_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3422778844833374,
-0.4219573140144348,
0.2455991953611374,
0.11859537661075592,
-0.4104732573032379,
0.20819799602031708,
0.1096266657114029,
-0.18095727264881134,
0.7934285402297974,
0.4339199662208557,
-0.7235795259475708,
-0.9917798042297363,
-0.6241666674613953,
-0.36359483003616333... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ekshat/text-2-sql-with-context | ekshat | 2023-09-26T07:18:08Z | 16 | 0 | null | [
"region:us"
] | 2023-09-26T07:18:08Z | 2023-09-26T06:50:06.000Z | 2023-09-26T06:50:06 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 32317282.06065388
num_examples: 74648
- name: test
num_bytes: 1700977.939346119
num_examples: 3929
download_size: 8982199
dataset_size: 34018260.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "text-2-sql-with-context"
This dataset is prepared in the Alpaca format introduced by Stanford for training LLMs. It has been used to fine-tune Chat Llama-2 7B. For more information, please visit: https://huggingface.co/ekshat/Llama-2-7b-chat-finetune-for-text2sql | [
-0.2825055718421936,
-0.9160107970237732,
0.12804701924324036,
0.5525581240653992,
-0.8916517496109009,
-0.3078698515892029,
-0.04244611784815788,
-0.39803698658943176,
0.5877272486686707,
0.8682697415351868,
-0.8966525197029114,
-0.5002812147140503,
-0.5328522324562073,
-0.010967756621539... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/hoasa | SEACrowd | 2023-09-26T12:29:28Z | 16 | 0 | null | [
"language:ind",
"aspect-based-sentiment-analysis",
"region:us"
] | 2023-09-26T12:29:28Z | 2023-09-26T11:13:28.000Z | 2023-09-26T11:13:28 | ---
tags:
- aspect-based-sentiment-analysis
language:
- ind
---
# hoasa
HoASA: An aspect-based sentiment analysis dataset consisting of hotel reviews collected from the hotel aggregator platform, AiryRooms.
The dataset covers ten different aspects of hotel quality. Similar to the CASA dataset, each review is labeled with a single sentiment label for each aspect.
There are four possible sentiment classes for each sentiment label:
positive, negative, neutral, and positive-negative.
The positive-negative label is given to a review that contains multiple sentiments of the same aspect but for different objects (e.g., cleanliness of bed and toilet).
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{azhar2019multi,
title={Multi-label Aspect Categorization with Convolutional Neural Networks and Extreme Gradient Boosting},
  author={A. N. Azhar, M. L. Khodra, and A. P. Sutiono},
booktitle={Proceedings of the 2019 International Conference on Electrical Engineering and Informatics (ICEEI)},
pages={35--40},
year={2019}
}
```
## License
CC-BY-SA 4.0
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.7258013486862183,
-0.5833074450492859,
-0.010242586024105549,
0.5214443802833557,
-0.4609818756580353,
-0.059102512896060944,
0.17059189081192017,
-0.3162819743156433,
0.6700907349586487,
0.37836953997612,
-0.2144295871257782,
-0.6423903107643127,
-0.5727869868278503,
0.1033790484070777... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/casa | SEACrowd | 2023-09-26T12:31:48Z | 16 | 0 | null | [
"language:ind",
"aspect-based-sentiment-analysis",
"region:us"
] | 2023-09-26T12:31:48Z | 2023-09-26T11:16:04.000Z | 2023-09-26T11:16:04 | ---
tags:
- aspect-based-sentiment-analysis
language:
- ind
---
# casa
CASA: An aspect-based sentiment analysis dataset consisting of around a thousand car reviews collected from multiple Indonesian online automobile platforms (Ilmania et al., 2018).
The dataset covers six aspects of car quality.
We define the task to be a multi-label classification task,
where each label represents a sentiment for a single aspect with three possible values: positive, negative, and neutral.
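As an illustration of this label layout, a minimal sketch follows (the aspect names are placeholders, not the actual six aspects defined in the dataset):

```python
# Placeholder aspects; the real dataset defines six aspects of car quality.
ASPECTS = ["fuel", "machine", "price"]
SENTIMENTS = ["negative", "neutral", "positive"]

def encode(review_labels):
    """Turn per-aspect sentiment names into one integer id per aspect."""
    return [SENTIMENTS.index(review_labels[aspect]) for aspect in ASPECTS]

print(encode({"fuel": "positive", "machine": "neutral", "price": "negative"}))
# [2, 1, 0]
```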
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@INPROCEEDINGS{8629181,
author={Ilmania, Arfinda and Abdurrahman and Cahyawijaya, Samuel and Purwarianti, Ayu},
booktitle={2018 International Conference on Asian Language Processing (IALP)},
title={Aspect Detection and Sentiment Classification Using Deep Neural Network for Indonesian Aspect-Based Sentiment Analysis},
year={2018},
volume={},
number={},
pages={62-67},
  doi={10.1109/IALP.2018.8629181}
}
```
## License
CC-BY-SA 4.0
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.8517451882362366,
-0.5750206708908081,
0.26902109384536743,
0.5906273722648621,
-0.4340158700942993,
-0.0494275763630867,
-0.051479607820510864,
-0.4180547893047333,
0.4745408594608307,
0.5486233830451965,
-0.44985431432724,
-0.870597779750824,
-0.5703853964805603,
0.09536362439393997,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Thaweewat/oasst1_th | Thaweewat | 2023-10-08T07:13:36Z | 16 | 0 | null | [
"language:th",
"region:us"
] | 2023-10-08T07:13:36Z | 2023-09-28T09:52:59.000Z | 2023-09-28T09:52:59 | ---
dataset_info:
- config_name: default
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: text_th
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
splits:
- name: train
num_bytes: 10381992
num_examples: 4401
download_size: 0
dataset_size: 10381992
- config_name: train
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: text_th
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 200135278
num_examples: 84437
download_size: 75167235
dataset_size: 200135278
- config_name: val
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: text_th
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
splits:
- name: train
num_bytes: 10381992
num_examples: 4401
download_size: 3907352
dataset_size: 10381992
configs:
- config_name: train
data_files:
- split: train
path: train/train-*
- config_name: val
data_files:
- split: train
path: val/train-*
language:
- th
---
# Dataset Card for "oasst1_th"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.40095949172973633,
-0.35505056381225586,
0.277143269777298,
-0.039125796407461166,
-0.23689880967140198,
-0.1045931875705719,
0.5690630674362183,
-0.09367094933986664,
0.9030525088310242,
0.4409770667552948,
-0.8504210114479065,
-0.7565197348594666,
-0.6809448003768921,
-0.4027140736579... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nicolas-BZRD/CNIL_opendata | Nicolas-BZRD | 2023-09-28T10:59:20Z | 16 | 0 | null | [
"size_categories:10K<n<100K",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | 2023-09-28T10:59:20Z | 2023-09-28T10:49:15.000Z | 2023-09-28T10:49:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 132353121
num_examples: 18108
download_size: 49594572
dataset_size: 132353121
license: odc-by
language:
- fr
tags:
- legal
size_categories:
- 10K<n<100K
pretty_name: CNIL
---
# CNIL (Commission nationale de l'informatique et des libertés)
All [CNIL](https://echanges.dila.gouv.fr/OPENDATA/CNIL/) decisions (opinions, recommendations, simplified standards, authorizations, etc.) since 2012, including authorization decisions (data processing, medical research) going back to the creation of the institution in 1978.
-0.3702048361301422,
-0.33449020981788635,
0.7552237510681152,
0.4616955518722534,
-0.28643345832824707,
-0.2226879894733429,
0.06198909133672714,
-0.35073378682136536,
0.3006438910961151,
0.7789376378059387,
-0.5547592043876648,
-0.818572998046875,
-0.40158140659332275,
-0.056162811815738... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TrainingDataPro/people-with-guns-segmentation-and-detection | TrainingDataPro | 2023-10-12T07:07:40Z | 16 | 1 | null | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"legal",
"region:us"
] | 2023-10-12T07:07:40Z | 2023-10-03T14:47:31.000Z | 2023-10-03T14:47:31 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
- object-detection
tags:
- code
- finance
- legal
dataset_info:
config_name: people-with-guns-segmentation-and-detection
features:
- name: id
dtype: int32
- name: name
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: width
dtype: uint16
- name: height
dtype: uint16
- name: shapes
sequence:
- name: label
dtype:
class_label:
names:
'0': person
'1': gun
- name: type
dtype: string
- name: points
sequence:
sequence: float32
- name: rotation
dtype: float32
- name: occluded
dtype: uint8
- name: z_order
dtype: int16
- name: attributes
sequence:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42149
num_examples: 11
download_size: 69561417
dataset_size: 42149
---
# People with Guns Segmentation & Detection Dataset
The dataset consists of photos depicting **individuals holding guns**. It specifically focuses on the **segmentation** of guns within these images and the **detection** of people holding guns.
Each image in the dataset presents a different scenario, capturing individuals from various *backgrounds, genders, and age groups in different poses* while holding guns.
The dataset is an essential resource for the development and evaluation of computer vision models and algorithms in fields related to *firearms recognition, security systems, law enforcement, and safety analysis*.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=people-with-guns-segmentation-and-detection) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
- **images** - contains the original images of people holding guns
- **labels** - contains the visualized annotations created for the original images
- **annotations.xml** - contains the coordinates of the polygons and bounding boxes created for the original photos
# Data Format
Each image from the `images` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes and polygons. For each point, the x and y coordinates are provided.
### Classes:
- **person**: the person holding the gun, labeled with a bounding box,
- **gun**: the gun, labeled with a polygon
# Example of XML file structure

# People with Guns Segmentation & Detection datasets can be tailored to your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=people-with-guns-segmentation-and-detection)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** | [
-0.6565435528755188,
-0.2630854845046997,
0.4491538107395172,
-0.23051327466964722,
-0.43609997630119324,
0.2688656747341156,
0.08825115859508514,
-0.41836389899253845,
0.05162420868873596,
0.5962918996810913,
-0.49458006024360657,
-1.0868474245071411,
-0.9056042432785034,
-0.1051430329680... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/synpre_set_1M | tyzhu | 2023-10-04T13:26:19Z | 16 | 0 | null | [
"region:us"
] | 2023-10-04T13:26:19Z | 2023-10-04T13:12:37.000Z | 2023-10-04T13:12:37 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1218382220
num_examples: 1000000
- name: validation
num_bytes: 12163626
num_examples: 10000
download_size: 8496414
dataset_size: 1230545846
---
# Dataset Card for "synpre_set_1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6022335290908813,
-0.1377972662448883,
-0.004732506349682808,
0.322351336479187,
-0.3204241394996643,
-0.12443932145833969,
0.1002926379442215,
-0.15992321074008942,
1.0680327415466309,
0.5432806015014648,
-1.0170986652374268,
-0.798283576965332,
-0.6257630586624146,
-0.2120571583509445... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hack90/ncbi_genbank_part_0 | Hack90 | 2023-10-04T19:45:14Z | 16 | 0 | null | [
"region:us"
] | 2023-10-04T19:45:14Z | 2023-10-04T18:59:55.000Z | 2023-10-04T18:59:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 257341428
num_examples: 156
download_size: 118952731
dataset_size: 257341428
---
# Dataset Card for "ncbi_genbank_part_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6312952041625977,
-0.32651951909065247,
0.2924630641937256,
0.14597278833389282,
-0.36510059237480164,
0.2622425854206085,
0.5655718445777893,
-0.05986079201102257,
0.9614318013191223,
0.5337428450584412,
-0.7480394840240479,
-0.9444479942321777,
-0.38435444235801697,
-0.057255279272794... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HamdanXI/paradetox_with_editOps | HamdanXI | 2023-10-06T12:21:19Z | 16 | 0 | null | [
"region:us"
] | 2023-10-06T12:21:19Z | 2023-10-06T12:21:17.000Z | 2023-10-06T12:21:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: en_toxic_comment
dtype: string
- name: en_neutral_comment
dtype: string
- name: edit_ops
sequence:
sequence: string
splits:
- name: train
num_bytes: 4067285
num_examples: 19744
download_size: 1996316
dataset_size: 4067285
---
# Dataset Card for "difference_analysis_data_structure"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5609139204025269,
-0.35229116678237915,
0.06939861923456192,
0.3197326064109802,
-0.1042649894952774,
0.054547443985939026,
0.4414255619049072,
-0.2403540462255478,
0.974382758140564,
0.07048895955085754,
-0.75408935546875,
-0.8171530961990356,
-0.7761616110801697,
-0.15134114027023315,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xenova/cmu-arctic-xvectors-extracted | Xenova | 2023-10-06T14:59:01Z | 16 | 1 | null | [
"region:us"
] | 2023-10-06T14:59:01Z | 2023-10-06T14:49:55.000Z | 2023-10-06T14:49:55 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mychen76/openwebtext-100k | mychen76 | 2023-10-09T13:37:50Z | 16 | 0 | null | [
"region:us"
] | 2023-10-09T13:37:50Z | 2023-10-09T13:32:49.000Z | 2023-10-09T13:32:49 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 497257202
num_examples: 100000
download_size: 302557845
dataset_size: 497257202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "openwebtext-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7467809915542603,
-0.17216025292873383,
0.008592234924435616,
0.23717425763607025,
-0.260434627532959,
-0.1606084257364273,
0.07763572037220001,
-0.14873294532299042,
0.757995069026947,
0.3910261392593384,
-0.7314001321792603,
-0.7415788173675537,
-0.4788530170917511,
-0.248768731951713... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ck46/hendrycks_math | ck46 | 2023-10-19T17:48:20Z | 16 | 0 | null | [
"region:us"
] | 2023-10-19T17:48:20Z | 2023-10-19T17:48:13.000Z | 2023-10-19T17:48:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5984772
num_examples: 7500
- name: test
num_bytes: 3732833
num_examples: 5000
download_size: 4848007
dataset_size: 9717605
---
# Dataset Card for "hendrycks_math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6221407055854797,
-0.1575150191783905,
0.04992440715432167,
0.4212111830711365,
-0.1423754245042801,
-0.21685166656970978,
0.10036174207925797,
-0.08059801161289215,
0.7811664342880249,
0.43274667859077454,
-0.9603842496871948,
-0.7660467028617859,
-0.4290282130241394,
-0.26525470614433... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lavita/MedQuAD | lavita | 2023-10-19T22:37:54Z | 16 | 0 | null | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"medical",
"region:us"
] | 2023-10-19T22:37:54Z | 2023-10-19T19:39:05.000Z | 2023-10-19T19:39:05 | ---
dataset_info:
features:
- name: document_id
dtype: string
- name: document_source
dtype: string
- name: document_url
dtype: string
- name: category
dtype: string
- name: umls_cui
dtype: string
- name: umls_semantic_types
dtype: string
- name: umls_semantic_group
dtype: string
- name: synonyms
dtype: string
- name: question_id
dtype: string
- name: question_focus
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 34989308
num_examples: 47441
download_size: 10718159
dataset_size: 34989308
task_categories:
- question-answering
language:
- en
tags:
- medical
size_categories:
- 10K<n<100K
---
# Dataset Card for "MedQuAD"
This dataset is the converted version of [MedQuAD](https://github.com/abachaa/MedQuAD/tree/master). Some notes about the data:
* Multiple values in the `umls_cui`, `umls_semantic_types`, `synonyms` columns are separated by the `|` character.
* Answers for [`GARD`, `MPlusHerbsSupplements`, `ADAM`, `MPlusDrugs`] sources (31,034 records) are removed from the original dataset to respect the MedlinePlus copyright.
* UMLS (`umls`): Unified Medical Language System
* CUI (`cui`): Concept Unique Identifier
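The `|`-separated cells above can be expanded into lists with a small helper. This is a sketch only; the row shown is illustrative and the CUI values in it are made up for the example, not taken from the dataset:

```python
def split_multi(value):
    """Split a |-separated cell into a list, mapping empty cells to []."""
    return value.split("|") if value else []

# Illustrative row shaped like the card's schema (values are invented).
row = {
    "umls_cui": "C0000001|C0000002",
    "umls_semantic_types": "T047",
    "synonyms": "",
}
expanded = {key: split_multi(val) for key, val in row.items()}
```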
## Reference
If you use MedQuAD, please cite the original paper:
```
@ARTICLE{BenAbacha-BMC-2019,
author = {Asma {Ben Abacha} and Dina Demner{-}Fushman},
title = {A Question-Entailment Approach to Question Answering},
journal = {{BMC} Bioinform.},
volume = {20},
number = {1},
pages = {511:1--511:23},
year = {2019},
url = {https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-3119-4}
}
``` | [
-0.36168763041496277,
-0.8551240563392639,
0.2921102046966553,
-0.2706468403339386,
-0.4429796040058136,
0.02493450976908207,
-0.09276951104402542,
-0.07996591925621033,
0.390047162771225,
0.619005560874939,
-0.7291406393051147,
-0.6323719620704651,
-0.2989129424095154,
0.33189234137535095... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lajavaness/SICK-fr | Lajavaness | 2023-10-19T23:04:50Z | 16 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-19T23:04:50Z | 2023-10-19T23:03:09.000Z | 2023-10-19T23:03:09 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietlegalqa/tvpl_21_10_2023 | vietlegalqa | 2023-10-21T08:33:33Z | 16 | 0 | null | [
"region:us"
] | 2023-10-21T08:33:33Z | 2023-10-21T08:32:32.000Z | 2023-10-21T08:32:32 | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: context_title_question
sequence: string
- name: title_question
sequence: string
- name: questions
sequence: string
- name: documents
sequence: string
- name: answers
sequence: string
splits:
- name: train
num_bytes: 481979406
num_examples: 151879
- name: val
num_bytes: 25933189
num_examples: 3504
download_size: 140293166
dataset_size: 507912595
---
# Dataset Card for "tvpl_21_10_2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.847457766532898,
-0.2085980474948883,
-0.07856873422861099,
0.4096454977989197,
-0.3457862138748169,
0.03856796771287918,
0.4214867651462555,
-0.07979318499565125,
0.5276088118553162,
0.8208481669425964,
-0.8745049238204956,
-0.42097559571266174,
-0.6300015449523926,
-0.2332273125648498... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Phando/llava-filtered-cc3m-595k | Phando | 2023-10-29T02:15:17Z | 16 | 0 | null | [
"region:us"
] | 2023-10-29T02:15:17Z | 2023-10-22T09:31:05.000Z | 2023-10-22T09:31:05 | Dataset transformed to the image-caption format from https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K | [
-0.07475996017456055,
-0.1389535367488861,
0.4192204177379608,
0.41460007429122925,
-0.8616535663604736,
-0.10597390681505203,
-0.05553630366921425,
-0.2556754946708679,
0.6338145732879639,
0.9161900877952576,
-0.870814859867096,
-0.4307064116001129,
-0.6050515174865723,
0.1477181762456894... | null | null | null | null | null | null | null | null | null | null | null | null | null |