| id (string, 2–115 chars) | lastModified (string, 24 chars) | tags (list) | author (string, 2–42 chars, nullable) | description (string, 0–6.67k chars, nullable) | citation (string, 0–10.7k chars, nullable) | likes (int64, 0–3.66k) | downloads (int64, 0–8.89M) | created (timestamp[us]) | card (string, 11–977k chars) | card_len (int64, 11–977k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
ura-hcmut/synthetic_reasoning_natural | 2023-09-19T02:35:59.000Z | [
"task_categories:text2text-generation",
"language:vi",
"license:cc-by-nc-sa-4.0",
"region:us"
] | ura-hcmut | null | null | 0 | 79 | 2023-09-19T02:34:10 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text2text-generation
language:
- vi
configs:
- config_name: easy_gcp
data_files:
- split: train
path: synthetic_reasoning_gcp_natural_training.csv
- split: test
path: synthetic_reasoning_gcp_natural.csv
- config_name: easy_azr
data_files:
- split: train
path: synthetic_reasoning_azr_natural_training.csv
- split: test
path: synthetic_reasoning_azr_natural.csv
---
# Synthetic reasoning dataset
Original version:
- https://huggingface.co/datasets/lighteval/synthetic_reasoning_natural
Translation source code: https://github.com/martinakaduc/ura-llama/tree/main/dataset_scripts/custom_datasets | 712 | [
[
0.00010472536087036133,
-0.045867919921875,
0.0404052734375,
0.02301025390625,
-0.0268096923828125,
0.0174713134765625,
-0.01399993896484375,
-0.0167999267578125,
0.0418701171875,
0.0552978515625,
-0.0703125,
-0.055389404296875,
-0.01198577880859375,
0.02539... |
DarrenLo/ygo_tcg_test | 2023-11-01T10:52:07.000Z | [
"region:us"
] | DarrenLo | null | null | 0 | 79 | 2023-09-21T18:33:07 | configs:
- config_name: default
data_files:
- split: train
path: "strain.json"
- split: test
path: "stets.json"
- split: validation
path: "sval.json" | 169 | [
[
-0.049407958984375,
-0.0400390625,
0.01800537109375,
0.01496124267578125,
-0.044830322265625,
0.0208740234375,
-0.0018491744995117188,
0.01282501220703125,
0.0226898193359375,
0.038299560546875,
-0.051910400390625,
-0.029144287109375,
-0.039215087890625,
0.0... |
jdabello/yahoo_answers_topics | 2023-09-29T01:11:57.000Z | [
"region:us"
] | jdabello | null | null | 0 | 79 | 2023-09-29T01:10:17 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: topic
dtype: string
- name: question_title
dtype: string
- name: question_content
dtype: string
- name: best_answer
dtype: string
splits:
- name: train
num_bytes: 778905695
num_examples: 1400000
download_size: 511657090
dataset_size: 778905695
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "yahoo_answers_topics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 616 | [
[
-0.056640625,
-0.048797607421875,
0.01068115234375,
0.008087158203125,
-0.015777587890625,
-0.0005297660827636719,
0.0197296142578125,
0.004199981689453125,
0.0628662109375,
0.03204345703125,
-0.06964111328125,
-0.044036865234375,
-0.023223876953125,
-0.0206... |
skaltenp/textworld_cooking_go1 | 2023-10-12T15:08:29.000Z | [
"region:us"
] | skaltenp | null | null | 0 | 79 | 2023-10-11T15:42:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: moves
dtype: int64
- name: json_path
dtype: string
- name: id
dtype: string
- name: score
dtype: int64
- name: demonstration
sequence:
sequence: string
- name: z8_path
dtype: string
- name: ulx_path
dtype: string
splits:
- name: train
num_bytes: 9978344
num_examples: 2400
- name: valid
num_bytes: 976492
num_examples: 240
- name: test
num_bytes: 963522
num_examples: 240
download_size: 1667300
dataset_size: 11918358
---
# Dataset Card for "textworld_cooking_go1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 883 | [
[
-0.015869140625,
-0.028900146484375,
0.00807952880859375,
0.01165008544921875,
0.0037250518798828125,
-0.0196075439453125,
0.00927734375,
0.0013790130615234375,
0.0546875,
0.052490234375,
-0.0645751953125,
-0.05767822265625,
-0.044464111328125,
-0.0127716064... |
gilkeyio/CommonVoice_Pitch | 2023-10-27T14:22:53.000Z | [
"region:us"
] | gilkeyio | null | null | 0 | 79 | 2023-10-26T18:39:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: mean_pitch
dtype: float64
- name: normalized_pitch
dtype: float64
- name: client_id
dtype: string
- name: path
dtype: string
splits:
- name: train
num_bytes: 88854757651.38094
num_examples: 170539
- name: validation
num_bytes: 11106649322.761875
num_examples: 21317
- name: test
num_bytes: 11107170345.857187
num_examples: 21318
download_size: 0
dataset_size: 111068577320.0
---
# Dataset Card for "CommonVoice_Pitch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,065 | [
[
-0.0259246826171875,
-0.00896453857421875,
0.005535125732421875,
0.0306243896484375,
-0.017852783203125,
-0.0126190185546875,
0.0194091796875,
-0.00098419189453125,
0.059112548828125,
0.044097900390625,
-0.058563232421875,
-0.06927490234375,
-0.029083251953125,
... |
Aman279/Multiple_Session_Chat_test | 2023-10-30T21:30:29.000Z | [
"region:us"
] | Aman279 | null | null | 0 | 79 | 2023-10-30T16:07:43 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
holylovenia/recam | 2021-10-18T03:28:53.000Z | [
"region:us"
] | holylovenia | null | null | 0 | 78 | 2022-03-02T23:29:22 | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# ReCAM: Reading Comprehension of Abstract Meaning
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
This dataset is from SemEval 2021 Task 4: Reading Comprehension of Abstract Meaning. [Original repository for the dataset and baseline code can be accessed here.](https://github.com/boyuanzheng010/SemEval2021-Reading-Comprehension-of-Abstract-Meaning)
- **Paper:** [SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning in ACL](https://aclanthology.org/2021.semeval-1.4.pdf)
- **Leaderboard:** [CodaLab](https://competitions.codalab.org/competitions/26153#learn_the_details)
### Dataset Summary
Refer to [this page](https://competitions.codalab.org/competitions/26153#learn_the_details).
## Dataset Structure
Refer to [the GitHub](https://github.com/boyuanzheng010/SemEval2021-Reading-Comprehension-of-Abstract-Meaning).
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{zheng-etal-2021-semeval,
title = "{S}em{E}val-2021 Task 4: Reading Comprehension of Abstract Meaning",
author = "Zheng, Boyuan and
Yang, Xiaoyu and
Ruan, Yu-Ping and
Ling, Zhenhua and
Liu, Quan and
Wei, Si and
Zhu, Xiaodan",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.4",
doi = "10.18653/v1/2021.semeval-1.4",
pages = "37--50",
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 3,706 | [
[
-0.04095458984375,
-0.0517578125,
0.03216552734375,
0.0126800537109375,
-0.021575927734375,
-0.004100799560546875,
0.0006093978881835938,
-0.040740966796875,
0.0211029052734375,
0.035858154296875,
-0.046173095703125,
-0.06500244140625,
-0.052398681640625,
0.... |
huggingartists/100-gecs | 2022-10-25T09:20:43.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 78 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/100-gecs"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.182347 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9fd98af9a817af8cd78636f71895b6ad.500x500x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/100-gecs">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">100 gecs</div>
<a href="https://genius.com/artists/100-gecs">
<div style="text-align: center; font-size: 14px;">@100-gecs</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/100-gecs).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/100-gecs")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|140| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/100-gecs")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
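For reference, the index arithmetic inside that `np.split` call is deterministic. A minimal sketch of the resulting slice sizes, assuming the 140-example train split reported in the table above:

```python
# Sketch of the split-index arithmetic used by np.split above,
# assuming n = 140 examples (the train size reported for this dataset).
n = 140
train_pct, val_pct = 0.9, 0.07
cut1 = int(n * train_pct)              # end of the train slice
cut2 = int(n * (train_pct + val_pct))  # end of the validation slice
sizes = (cut1, cut2 - cut1, n - cut2)  # train / validation / test sizes
print(sizes)                           # (126, 9, 5)
```

The same arithmetic applies to the other lyrics cards below, with `n` replaced by their respective train sizes.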
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,162 | [
[
-0.04974365234375,
-0.03948974609375,
0.007213592529296875,
0.0182647705078125,
-0.0154266357421875,
-0.0006985664367675781,
-0.0239715576171875,
-0.033782958984375,
0.06317138671875,
0.0240631103515625,
-0.0673828125,
-0.0643310546875,
-0.040771484375,
0.00... |
huggingartists/50-cent | 2022-10-25T09:21:02.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 78 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/50-cent"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 2.267733 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2aa85f8fdffe5d0552ff319221fc63e4.959x959x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/50-cent">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">50 Cent</div>
<a href="https://genius.com/artists/50-cent">
<div style="text-align: center; font-size: 14px;">@50-cent</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/50-cent).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/50-cent")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|840| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/50-cent")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,154 | [
[
-0.0499267578125,
-0.039459228515625,
0.00643157958984375,
0.018096923828125,
-0.01486968994140625,
0.004718780517578125,
-0.023834228515625,
-0.03131103515625,
0.06488037109375,
0.0253753662109375,
-0.068115234375,
-0.061981201171875,
-0.037689208984375,
0.... |
huggingartists/bob-dylan | 2022-10-25T09:25:08.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 78 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/bob-dylan"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 2.91167 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/22306423b6ad8777d1ed5b33ad8b0d0b.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/bob-dylan">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bob Dylan</div>
<a href="https://genius.com/artists/bob-dylan">
<div style="text-align: center; font-size: 14px;">@bob-dylan</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/bob-dylan).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/bob-dylan")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|2241| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/bob-dylan")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,172 | [
[
-0.046600341796875,
-0.041015625,
0.006061553955078125,
0.0242919921875,
-0.0167999267578125,
0.006687164306640625,
-0.0264434814453125,
-0.033294677734375,
0.06719970703125,
0.0238494873046875,
-0.06884765625,
-0.0611572265625,
-0.039215087890625,
0.0084915... |
huggingartists/oxxxymiron | 2022-10-25T09:40:52.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 78 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/oxxxymiron"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 2.070318 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/57ecbbdaf70c671be2d8b7bd39112db0.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/oxxxymiron">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Oxxxymiron</div>
<a href="https://genius.com/artists/oxxxymiron">
<div style="text-align: center; font-size: 14px;">@oxxxymiron</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset was parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/oxxxymiron).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/oxxxymiron")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|210| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/oxxxymiron")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author = {Aleksey Korshuk},
  year = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,180 |
huggingartists/travis-scott | 2022-10-25T09:48:52.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 1 | 78 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/travis-scott"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.483549 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5d19fecdb3828ca9ec89dda588e2eb7d.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/travis-scott">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Travis Scott</div>
<a href="https://genius.com/artists/travis-scott">
<div style="text-align: center; font-size: 14px;">@travis-scott</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset was parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/travis-scott).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/travis-scott")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|761| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/travis-scott")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author = {Aleksey Korshuk},
  year = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,196 |
imvladikon/leipzig_corpora_collection | 2023-07-16T06:44:42.000Z | [
"language:ar",
"language:en",
"language:he",
"language:de",
"language:it",
"language:fr",
"language:pl",
"language:pt",
"language:ru",
"language:uk",
"region:us"
] | imvladikon | \ | D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012 | 1 | 78 | 2022-06-19T16:03:28 | ---
language:
- ar
- en
- he
- de
- it
- fr
- pl
- pt
- ru
- uk
---
# Dataset
## Leipzig Corpora Collection:
[link](https://wortschatz.uni-leipzig.de/en/download)
## Dataset Usage
The "links" config contains download URLs together with the corresponding language and id.
The links come from `https://corpora.uni-leipzig.de/`.
```python
from datasets import load_dataset
ds = load_dataset("imvladikon/leipzig_corpora_collection", "links")
for row in ds["train"]:
print(row)
```
```
{'id': '0', 'data_id': '0', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/ara_news_2005-2009_10K.tar.gz', 'language': 'Arabic', 'language_short': 'ara', 'year': '2005', 'size': '10K'}
{'id': '1', 'data_id': '1', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/ara_news_2005-2009_30K.tar.gz', 'language': 'Arabic', 'language_short': 'ara', 'year': '2005', 'size': '30K'}
{'id': '2', 'data_id': '2', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/ara_news_2005-2009_100K.tar.gz', 'language': 'Arabic', 'language_short': 'ara', 'year': '2005', 'size': '100K'}
....
```
To load a specific corpus, choose one of the `data_id` values and pass it as the config name:
```python
dataset_he = load_dataset("imvladikon/leipzig_corpora_collection", "heb_wikipedia_2021_1M", split="train")
for row in dataset_he:
print(row)
```
Example for English:
```python
dataset_en = load_dataset("imvladikon/leipzig_corpora_collection", "eng-simple_wikipedia_2021_300K", split="train")
for row in dataset_en:
print(row)
```
### Filtering
```python
links = load_dataset("imvladikon/leipzig_corpora_collection", "links", split="train")
english_2019 = links.filter(lambda x: x["language"] == "English" and x["year"] == "2019")
for sample in english_2019:
print(sample)
```
```
{'id': '277', 'data_id': 'eng_news_2019_10K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_10K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '10K'}
{'id': '278', 'data_id': 'eng_news_2019_30K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_30K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '30K'}
{'id': '279', 'data_id': 'eng_news_2019_100K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_100K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '100K'}
{'id': '280', 'data_id': 'eng_news_2019_300K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_300K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '300K'}
{'id': '281', 'data_id': 'eng_news_2019_1M', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_1M.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '1M'}
{'id': '541', 'data_id': 'eng-za_web_2019_10K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_10K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '10K'}
{'id': '542', 'data_id': 'eng-za_web_2019_30K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_30K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '30K'}
{'id': '543', 'data_id': 'eng-za_web_2019_100K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_100K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '100K'}
{'id': '544', 'data_id': 'eng-za_web_2019_300K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_300K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '300K'}
{'id': '545', 'data_id': 'eng-za_web_2019_1M', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_1M.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '1M'}
```
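The `data_id` values visible in the rows above appear to follow a `<language>_<source>_<year>_<size>` naming pattern. A small helper to split a `data_id` back into those fields — a sketch inferred from the ids shown here, not part of the datasets library:

```python
def parse_data_id(data_id: str) -> dict:
    """Split a Leipzig data_id such as 'eng_news_2019_300K' into its parts.

    Assumes the '<language>_<source>_<year>_<size>' pattern seen in the
    'links' config. Language codes may themselves contain '-', e.g.
    'eng-za', which is why we split from the right.
    """
    language, source, year, size = data_id.rsplit("_", 3)
    return {"language": language, "source": source, "year": year, "size": size}

# e.g. parse_data_id("eng-za_web_2019_1M")
# -> {"language": "eng-za", "source": "web", "year": "2019", "size": "1M"}
```

This can be handy for grouping the `links` rows without loading any corpus data.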
## Dataset Description
The Leipzig Corpora Collection presents corpora in different languages using the same format and comparable sources. All data are available as plain text files and can be imported into a MySQL database by using the provided import script. They are intended both for scientific use by corpus linguists as well as for applications such as knowledge extraction programs.
The corpora are identical in format and similar in size and content. They contain randomly selected sentences in the language of the corpus and are available in sizes from 10,000 sentences up to 1 million sentences. The sources are either newspaper texts or texts randomly collected from the web. The texts are split into sentences. Non-sentences and foreign language material were removed. Because word co-occurrence information is useful for many applications, these data are precomputed and included as well. For each word, the most significant words appearing as an immediate left or right neighbor or appearing anywhere within the same sentence are given. More information about the format and content of these files can be found [here](https://wortschatz.uni-leipzig.de/en/download).
The corpora are automatically collected from carefully selected public sources without considering in detail the content of the contained text. No responsibility is taken for the content of the data. In particular, the views and opinions expressed in specific parts of the data remain exclusively with the authors.
If you use one of these corpora in your work, please cite [this paper](http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf):
```
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012
```
```
@online{leipzig_corpora_collection_download_page,
url = {https://corpora.uni-leipzig.de/},
urldate = {},
date = {},
author = {{Leipzig Corpora Collection download page}},
}
``` | 5,985 |
clarin-pl/poquad | 2023-07-04T10:50:43.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-4.0",
"region:us"
... | clarin-pl | PoQuaD description | Tuora, R., Zawadzka-Paluektau, N., Klamra, C., Zwierzchowska, A., Kobyliński, Ł. (2022).
Towards a Polish Question Answering Dataset (PoQuAD).
In: Tseng, YH., Katsurai, M., Nguyen, H.N. (eds) From Born-Physical to Born-Virtual: Augmenting Intelligence in Digital Libraries. ICADL 2022.
Lecture Notes in Computer Science, vol 13636. Springer, Cham.
https://doi.org/10.1007/978-3-031-21756-2_16 | 1 | 78 | 2023-02-28T09:46:17 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pl
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: PoQuaD
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
---
PoQuaD dataset | 317 |
Francesco/chess-pieces-mjzgj | 2023-03-30T09:31:59.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | 2 | 78 | 2023-03-30T09:31:44 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': chess-pieces
'1': bishop
'2': black-bishop
'3': black-king
'4': black-knight
'5': black-pawn
'6': black-queen
'7': black-rook
'8': white-bishop
'9': white-king
'10': white-knight
'11': white-pawn
'12': white-queen
'13': white-rook
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: chess-pieces-mjzgj
tags:
- rf100
---
# Dataset Card for chess-pieces-mjzgj
** The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/chess-pieces-mjzgj
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
chess-pieces-mjzgj
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
    'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
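Since `bbox` uses the COCO `[x_min, y_min, width, height]` convention, converting to corner coordinates (the form many plotting and augmentation tools expect) is straightforward. A minimal sketch, applied to the first annotation from the 'train' example above:

```python
def coco_to_corners(bbox):
    # COCO stores [x_min, y_min, width, height];
    # return the corner form [x_min, y_min, x_max, y_max].
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First bbox from the example above:
coco_to_corners([302.0, 109.0, 73.0, 52.0])  # -> [302.0, 109.0, 375.0, 161.0]
```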
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/chess-pieces-mjzgj
### Citation Information
```
@misc{ chess-pieces-mjzgj,
title = { chess pieces mjzgj Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/chess-pieces-mjzgj } },
url = { https://universe.roboflow.com/object-detection/chess-pieces-mjzgj },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 3,743 |
BuffetFS/BUFFET | 2023-08-13T06:26:30.000Z | [
"license:mit",
"region:us"
] | BuffetFS | null | null | 4 | 78 | 2023-05-24T00:53:16 | ---
license: mit
---
# BUFFET: Benchmarking Large Language Models for Cross-lingual Few-shot Transfer
- Project page: [buffetfs.github.io/](https://buffetfs.github.io/) ([Paper](https://buffetfs.github.io/static/files/buffet_paper.pdf))
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
## Dataset Description
- **Homepage:** https://buffetfs.github.io/
- **Repository:** https://github.com/AkariAsai/BUFFET
- **Paper:** https://buffetfs.github.io/static/files/buffet_paper.pdf
- **Point of Contact:** akari@cs.washington.edu
### Dataset Summary
<b>BUFFET</b> unifies 15 diverse NLP datasets in 54 typologically diverse languages. The list of datasets is available below.
We are currently working on the dataset summary and will update the descriptions shortly! | 966 |
clarin-knext/trec-covid-pl | 2023-06-07T08:12:18.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 78 | 2023-06-06T22:34:50 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 201 |
nampdn-ai/tiny-webtext | 2023-08-27T00:23:37.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"source_datasets:tiiuae/falcon-refinedweb",
"language:en",
"license:mit",
"arxiv:2306.01116",
"doi:10.57967/hf/1024",
"region:us"
] | nampdn-ai | null | null | 19 | 78 | 2023-06-27T01:13:25 | ---
license: mit
viewer: true
task_categories:
- text-generation
language:
- en
pretty_name: Tiny WebText
size_categories:
- 1M<n<10M
source_datasets:
- tiiuae/falcon-refinedweb
---
# Tiny WebText
The Tiny WebText dataset is designed to help models learn about perception on web text while neutralizing the bias of the source text using critical thinking methods. By providing a rich and diverse set of texts, I aim to improve the ability of models to understand and analyze information in a more objective and unbiased manner.
This dataset can be used to train and evaluate natural language processing and machine learning models, with the goal of improving their perception and critical thinking skills. It is a valuable resource for researchers and developers, especially those working in the fields of machine learning and data engineering.
The dataset is augmented using a subset of [Falcon-RefinedWeb](https://arxiv.org/abs/2306.01116), with additional text generated by the [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) model to improve the quality and diversity of the texts.
I welcome any feedback or contributions. Thank you for your interest in my work! | 1,202 |
open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v0 | 2023-10-17T19:27:06.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 78 | 2023-08-18T11:50:55 | ---
pretty_name: Evaluation run of togethercomputer/GPT-JT-6B-v0
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [togethercomputer/GPT-JT-6B-v0](https://huggingface.co/togethercomputer/GPT-JT-6B-v0)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v0\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T19:26:54.220051](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v0/blob/main/results_2023-10-17T19-26-54.220051.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n\
\ \"em_stderr\": 0.0003314581465219154,\n \"f1\": 0.043061031879194765,\n\
\ \"f1_stderr\": 0.0011437900819203201,\n \"acc\": 0.330058886781919,\n\
\ \"acc_stderr\": 0.008219084533910332\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.0003314581465219154,\n\
\ \"f1\": 0.043061031879194765,\n \"f1_stderr\": 0.0011437900819203201\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.012130401819560273,\n \
\ \"acc_stderr\": 0.003015294242890946\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6479873717442778,\n \"acc_stderr\": 0.013422874824929718\n\
\ }\n}\n```"
repo_url: https://huggingface.co/togethercomputer/GPT-JT-6B-v0
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T19_26_54.220051
path:
- '**/details_harness|drop|3_2023-10-17T19-26-54.220051.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T19-26-54.220051.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T19_26_54.220051
path:
- '**/details_harness|gsm8k|5_2023-10-17T19-26-54.220051.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T19-26-54.220051.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:42:14.994932.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:42:14.994932.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:42:14.994932.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T19_26_54.220051
path:
- '**/details_harness|winogrande|5_2023-10-17T19-26-54.220051.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T19-26-54.220051.parquet'
- config_name: results
data_files:
- split: 2023_07_19T15_42_14.994932
path:
- results_2023-07-19T15:42:14.994932.parquet
- split: 2023_10_17T19_26_54.220051
path:
- results_2023-10-17T19-26-54.220051.parquet
- split: latest
path:
- results_2023-10-17T19-26-54.220051.parquet
---
# Dataset Card for Evaluation run of togethercomputer/GPT-JT-6B-v0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/togethercomputer/GPT-JT-6B-v0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [togethercomputer/GPT-JT-6B-v0](https://huggingface.co/togethercomputer/GPT-JT-6B-v0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v0",
"harness_winogrande_5",
split="train")
```
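The second argument of `load_dataset` is one of the config names listed in the card metadata above. Those names follow a simple convention from the harness task identifiers used in the result file names (`|`, `:`, and `-` are replaced by `_`); a tiny helper (an illustrative sketch, not part of any official tooling) can build the config name for a given task:

```python
def task_to_config(task: str) -> str:
    # Derive the dataset config name from a harness task id as it
    # appears in the parquet file names, e.g.
    # "harness|hendrycksTest-abstract_algebra|5"
    #   -> "harness_hendrycksTest_abstract_algebra_5"
    return task.replace("|", "_").replace(":", "_").replace("-", "_")

print(task_to_config("harness|truthfulqa:mc|0"))   # harness_truthfulqa_mc_0
print(task_to_config("harness|arc:challenge|25"))  # harness_arc_challenge_25
```

The resulting string can then be passed as the config name, with `split="latest"` (or a specific timestamped split) to pick a particular run.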
## Latest results
These are the [latest results from run 2023-10-17T19:26:54.220051](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v0/blob/main/results_2023-10-17T19-26-54.220051.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0010486577181208054,
"em_stderr": 0.0003314581465219154,
"f1": 0.043061031879194765,
"f1_stderr": 0.0011437900819203201,
"acc": 0.330058886781919,
"acc_stderr": 0.008219084533910332
},
"harness|drop|3": {
"em": 0.0010486577181208054,
"em_stderr": 0.0003314581465219154,
"f1": 0.043061031879194765,
"f1_stderr": 0.0011437900819203201
},
"harness|gsm8k|5": {
"acc": 0.012130401819560273,
"acc_stderr": 0.003015294242890946
},
"harness|winogrande|5": {
"acc": 0.6479873717442778,
"acc_stderr": 0.013422874824929718
}
}
```
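For context, the `acc` values above are per-example means and the `acc_stderr` values are standard errors of that mean. A minimal sketch of how such numbers can be derived from binary per-example scores (this mirrors the usual textbook definition, not necessarily the harness's exact code):

```python
import math
import statistics

def mean_and_stderr(scores):
    """Mean of per-example 0/1 scores and the standard error of that mean."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample standard deviation divided by sqrt(n) gives the standard error.
    stderr = statistics.stdev(scores) / math.sqrt(n)
    return mean, stderr

# Toy example: 2 correct answers out of 4.
acc, acc_stderr = mean_and_stderr([1, 0, 1, 0])
```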
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 38,680 | [
[
-0.033203125,
-0.053680419921875,
0.0215911865234375,
0.0131988525390625,
-0.0135650634765625,
0.0114898681640625,
-0.0293121337890625,
-0.013702392578125,
0.0281829833984375,
0.036224365234375,
-0.049957275390625,
-0.06524658203125,
-0.052734375,
0.00967407... |
akjindal53244/Arithmo-Data | 2023-10-22T18:14:49.000Z | [
"license:apache-2.0",
"region:us"
] | akjindal53244 | null | null | 21 | 78 | 2023-10-13T08:30:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: combined_MathInstruct_MetaMathQA_LilaOOD_train.json
- split: test
path: combined_MathInstruct_MetaMathQA_LilaOOD_test.json
license: apache-2.0
---
Arithmo dataset is prepared as combination of [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), and [lila ood](https://huggingface.co/datasets/allenai/lila/viewer/ood). Refer to [Model Training Data](https://github.com/akjindal53244/Arithmo-Mistral-7B#model-training-data) section in Arithmo-Mistral-7B project GitHub page for more details.
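The combination step described above can be sketched roughly as concatenating the three source datasets and shuffling; the field names and seed below are illustrative assumptions, not taken from the Arithmo training code:

```python
import random

# Illustrative stand-ins for records from the three source datasets.
metamathqa = [{"question": "2+2?", "answer": "4"}]
mathinstruct = [{"question": "3*3?", "answer": "9"}]
lila_ood = [{"question": "10-7?", "answer": "3"}]

# Combine the sources into one training pool, then shuffle reproducibly.
combined = metamathqa + mathinstruct + lila_ood
random.Random(0).shuffle(combined)  # fixed seed is an assumption for the sketch
```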
<h2 id="References">References</h2>
```
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
@article{Yue2023mammoth,
title={MAmmoTH: Building math generalist models through hybrid instruction tuning},
author={Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
@article{mishra2022lila,
title={Lila: A unified benchmark for mathematical reasoning},
author={Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan},
journal={arXiv preprint arXiv:2210.17517},
year={2022}
}
```
| 1,683 | [
[
-0.030059814453125,
-0.041534423828125,
0.012054443359375,
0.0028133392333984375,
-0.006320953369140625,
-0.02349853515625,
-0.0083160400390625,
-0.006591796875,
-0.020294189453125,
0.041961669921875,
-0.05084228515625,
-0.032501220703125,
-0.0262451171875,
... |
sayan1101/reward_model_ranking_dataset_RLHF | 2023-10-25T11:04:51.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 78 | 2023-10-25T10:40:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid1
path: data/valid1-*
- split: valid2
path: data/valid2-*
dataset_info:
features:
- name: prompt
dtype: string
- name: summary_3
dtype: string
- name: summary_4
dtype: string
- name: summary_1
dtype: string
- name: summary_2
dtype: string
splits:
- name: train
num_bytes: 260765
num_examples: 350
- name: test
num_bytes: 260765
num_examples: 350
- name: valid1
num_bytes: 260765
num_examples: 350
- name: valid2
num_bytes: 260765
num_examples: 350
download_size: 0
dataset_size: 1043060
---
# Dataset Card for "reward_model_ranking_dataset_RLHF"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 910 | [
[
-0.0222930908203125,
-0.01800537109375,
0.01042938232421875,
0.002777099609375,
-0.00726318359375,
0.0011310577392578125,
0.022186279296875,
-0.008331298828125,
0.057586669921875,
0.04071044921875,
-0.06512451171875,
-0.051177978515625,
-0.051666259765625,
-... |
jxm/dbpedia | 2023-10-25T17:44:20.000Z | [
"region:us"
] | jxm | null | null | 0 | 78 | 2023-10-25T17:44:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 14782633
num_examples: 49999
- name: test
num_bytes: 20641120
num_examples: 70000
- name: dev
num_bytes: 74007
num_examples: 256
download_size: 21721890
dataset_size: 35497760
---
# Dataset Card for "dbpedia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 670 | [
[
-0.05029296875,
-0.02197265625,
0.015045166015625,
0.01361846923828125,
-0.010406494140625,
-0.00658416748046875,
0.009765625,
-0.0159759521484375,
0.06427001953125,
0.0291595458984375,
-0.068603515625,
-0.052001953125,
-0.0166168212890625,
-0.01309967041015... |
atmallen/qm_alice_easy_2_1.0e_eval | 2023-10-31T19:47:22.000Z | [
"region:us"
] | atmallen | null | null | 0 | 78 | 2023-10-27T05:43:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: summand1
dtype: int64
- name: summand2
dtype: int64
- name: character
dtype: string
- name: sum
dtype: int64
- name: sum_words
dtype: string
- name: summand1_words
dtype: string
- name: summand2_words
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: alice_label
dtype: int64
- name: bob_label
dtype: int64
- name: row_id
dtype: int64
splits:
- name: train
num_bytes: 78533530.34504
num_examples: 467816
- name: validation
num_bytes: 7563753.53255
num_examples: 44164
- name: test
num_bytes: 7479503.3722
num_examples: 43592
download_size: 17050461
dataset_size: 93576787.24979
---
# Dataset Card for "qm_alice_easy_2_1.0e_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,147 | [
[
-0.020660400390625,
-0.0338134765625,
0.02154541015625,
0.0150299072265625,
-0.0124053955078125,
-0.004306793212890625,
0.031463623046875,
0.0019235610961914062,
0.035675048828125,
0.0247955322265625,
-0.048095703125,
-0.050994873046875,
-0.036651611328125,
... |
sagot/lefff_morpho | 2022-07-23T15:52:46.000Z | [
"license:lgpl-lr",
"region:us"
] | sagot | The lefff-morpho dataset gives access to the morphological information, in both its original format and the UniMorph format. | @inproceedings{sagot:inria-00521242,
TITLE = {{The Lefff, a freely available and large-coverage morphological and syntactic lexicon for French}},
AUTHOR = {Sagot, Beno{\^i}t},
URL = {https://hal.inria.fr/inria-00521242},
BOOKTITLE = {{7th international conference on Language Resources and Evaluation (LREC 2010)}},
ADDRESS = {Valletta, Malta},
YEAR = {2010},
MONTH = May,
PDF = {https://hal.inria.fr/inria-00521242/file/lrec10lefff.pdf},
HAL_ID = {inria-00521242},
HAL_VERSION = {v1},
} | 0 | 77 | 2022-06-12T19:19:49 | ---
license: lgpl-lr
---
# Dataset Card for lefff morpho
## Dataset Description
- **Homepage:** [http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html](http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html)
- **Repository:** [https://gitlab.inria.fr/almanach/alexina/lefff](https://gitlab.inria.fr/almanach/alexina/lefff)
- **Paper:** [http://www.lrec-conf.org/proceedings/lrec2010/pdf/701_Paper.pdf](http://www.lrec-conf.org/proceedings/lrec2010/pdf/701_Paper.pdf)
- **Point of Contact:** [Benoît Sagot](benoit.sagot@inria.fr)
### Dataset Summary
The Lefff, currently in its 3.5 version, is one of the main morphological and syntactic lexicons for French. This Hugging Face dataset provides an easy access to the extensional morphological information in the Lefff, i.e. to the 4-uples (form, lemma, category, morphosyntactic features) and to the amalgams (e.g. _aux_ = _à_ + _les_) it contains. Category and morphosyntactic features are provided both in the original Lefff format and following the UniMorph guidelines.
### Languages
French
## Dataset Creation
The main author of the resource is Benoît Sagot (Inria, France).
Please refer to the main paper and other Lefff-related papers for details.
## Additional Information
### Licensing Information
The dataset, as the whole Lefff, is distributed under the LGPL-LR licence.
### Citation Information
The main paper regarding the Lefff can be found [here](https://aclanthology.org/L10-1487/). Here is the BibTeX entry for the paper:
```
@inproceedings{sagot:inria-00521242,
TITLE = {{The Lefff, a freely available and large-coverage morphological and syntactic lexicon for French}},
AUTHOR = {Sagot, Beno{\^i}t},
URL = {https://hal.inria.fr/inria-00521242},
BOOKTITLE = {{7th international conference on Language Resources and Evaluation (LREC 2010)}},
ADDRESS = {Valletta, Malta},
YEAR = {2010},
MONTH = May,
PDF = {https://hal.inria.fr/inria-00521242/file/lrec10lefff.pdf},
HAL_ID = {inria-00521242},
HAL_VERSION = {v1},
}
```
For specific parts of speech or other parts of the lexicon, please cite the corresponding papers whenever relevant.
| 2,169 | [
[
-0.038116455078125,
-0.035919189453125,
0.012664794921875,
0.0172882080078125,
-0.0303955078125,
-0.03271484375,
0.00118255615234375,
-0.030853271484375,
0.044219970703125,
0.0418701171875,
-0.0482177734375,
-0.043975830078125,
-0.037841796875,
0.02618408203... |
LLukas22/NLQuAD | 2022-12-23T13:04:58.000Z | [
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-3.0",
"region:us"
] | LLukas22 | null | null | 1 | 77 | 2022-12-15T15:05:57 | ---
pretty_name: NLQuAD
language:
- en
license:
- cc-by-3.0
size_categories:
- 10K<n<100K
multilinguality:
- monolingual
task_ids:
- extractive-qa
dataset_info:
features:
- name: title
dtype: string
- name: date
dtype: string
- name: paragraphs
list:
- name: context
dtype: string
- name: qas
list:
- name: answers
list:
- name: answer_end
dtype: int64
- name: answer_start
dtype: int64
- name: text
dtype: string
- name: id
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 72036724
num_examples: 10259
- name: test
num_bytes: 9045482
num_examples: 1280
- name: validation
num_bytes: 8876137
num_examples: 1280
download_size: 0
dataset_size: 89958343
---
# Dataset Card for "NLQuAD"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/ASoleimaniB/NLQuAD](https://github.com/ASoleimaniB/NLQuAD)
- **Paper: https://aclanthology.org/2021.eacl-main.106/**
- **Size of the generated dataset:** 89.95 MB
### Dataset Summary
This is a copy of the original NLQuAD dataset distributed via [Github](https://github.com/ASoleimaniB/NLQuAD).
NLQuAD is a non-factoid long question answering dataset from BBC news articles.
NLQuAD’s question types and the long length of its context documents and answers make it a challenging real-world task.
NLQuAD consists of news articles as context documents, interrogative sub-headings in the articles as questions, and body paragraphs corresponding to the sub-headings as contiguous answers to the questions.
NLQuAD contains 31k non-factoid questions and long answers collected from 13k BBC news articles.
See example articles in BBC [1](https://www.bbc.com/news/world-asia-china-51230011), [2](https://www.bbc.com/news/world-55709428).
We automatically extract target answers because annotating for non-factoid long QA is extremely challenging and costly.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"title": "Khashoggi murder: Body 'dissolved in acid'",
"date": "2 November 2018",
"paragraphs":[
{
"context": "A top Turkish official, presidential adviser Yasin Aktay, has said ....",
"qas":[
{
"question":"What was said in the crown prince's alleged phone call?",
"id":"0_0",
"answers":[
{
"text":"During the call with President Donald Trump\'s son-in-law Jared Kushner and national ....",
"answer_start":1352,
"answer_end": 2108,
}
]
},
{
"question":"What has the investigation found so far?",
"id":"0_1",
"answers":[
{
"text":"There is still no consensus on how Khashoggi died. He entered ....",
"answer_start":2109,
"answer_end": 3128,
}
]
},
]
}
]
}
```
### Data Fields
The data fields are the same among all splits.
- `title`: a `string` feature.
- `date`: a `string` feature.
- `paragraphs`: a list feature containing dictionaries:
- `context`: a `string` feature.
- `qas`: a list feature containing dictionaries:
- `question`: a `string` feature.
- `id`: a `string` feature.
- `answers`: a list feature containing dictionaries:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `answer_end`: a `int32` feature
### Data Splits
| name |train|test|validation|
|----------|----:|----:|---------:|
| |10259| 1280| 1280|
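Since `answer_start` and `answer_end` are character offsets into `context`, a quick sanity check on any loaded example is to compare the stored answer `text` against the corresponding slice. A minimal sketch using a toy instance in the documented shape:

```python
def check_answer_span(context, answer):
    """Return True if the stored answer text matches its character span."""
    return context[answer["answer_start"]:answer["answer_end"]] == answer["text"]

# Toy instance in the documented shape (offsets chosen for this string).
context = "A top Turkish official said the body was dissolved in acid."
answer = {"text": "dissolved in acid", "answer_start": 41, "answer_end": 58}
ok = check_answer_span(context, answer)
```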
## Additional Information
### Licensing Information
This dataset is distributed under the [CC BY-NC](https://creativecommons.org/licenses/by-nc/3.0/) licence providing free access for non-commercial and academic usage.
### Citation Information
BibTeX:
```json
@inproceedings{soleimani-etal-2021-nlquad,
title = "{NLQ}u{AD}: A Non-Factoid Long Question Answering Data Set",
author = "Soleimani, Amir and
Monz, Christof and
Worring, Marcel",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-main.106",
doi = "10.18653/v1/2021.eacl-main.106",
pages = "1245--1255",
abstract = "We introduce NLQuAD, the first data set with baseline methods for non-factoid long question answering, a task requiring document-level language understanding. In contrast to existing span detection question answering data sets, NLQuAD has non-factoid questions that are not answerable by a short span of text and demanding multiple-sentence descriptive answers and opinions. We show the limitation of the F1 score for evaluation of long answers and introduce Intersection over Union (IoU), which measures position-sensitive overlap between the predicted and the target answer spans. To establish baseline performances, we compare BERT, RoBERTa, and Longformer models. Experimental results and human evaluations show that Longformer outperforms the other architectures, but results are still far behind a human upper bound, leaving substantial room for improvements. NLQuAD{'}s samples exceed the input limitation of most pre-trained Transformer-based models, encouraging future research on long sequence language models.",
}
``` | 6,234 | [
[
-0.05194091796875,
-0.07275390625,
0.0216827392578125,
0.006160736083984375,
-0.0197601318359375,
0.00408935546875,
-0.013885498046875,
-0.035614013671875,
0.03228759765625,
0.0293121337890625,
-0.039337158203125,
-0.04498291015625,
-0.0195159912109375,
0.03... |
jonathan-roberts1/EuroSAT | 2023-03-31T15:49:24.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:mit",
"region:us"
] | jonathan-roberts1 | null | null | 1 | 77 | 2023-03-24T20:42:32 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': annual crop
'1': forest
'2': herbaceous vegetation
'3': highway
'4': industrial
'5': pasture
'6': permanent crop
'7': residential
'8': river
'9': sea or lake
splits:
- name: train
num_bytes: 88391109
num_examples: 27000
download_size: 88591771
dataset_size: 88391109
license: mit
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "EuroSAT2"
## Dataset Description
- **Paper** [Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification](https://ieeexplore.ieee.org/iel7/4609443/8789745/08736785.pdf)
- **Paper** [Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification](https://ieeexplore.ieee.org/iel7/8496405/8517275/08519248.pdf)
- **GitHub** [EuroSAT](https://github.com/phelber/EuroSAT)
- **Data** [Zenodo](https://zenodo.org/record/7711810#.ZCcA9uzMLJx)
### Licensing Information
MIT.
## Citation Information
[Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification](https://ieeexplore.ieee.org/iel7/4609443/8789745/08736785.pdf)
[Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification](https://ieeexplore.ieee.org/iel7/8496405/8517275/08519248.pdf)
```
@article{helber2019eurosat,
title = {Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification},
author = {Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian},
year = 2019,
journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
publisher = {IEEE}
}
@inproceedings{helber2018introducing,
title = {Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification},
author = {Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian},
year = 2018,
booktitle = {IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium},
pages = {204--207},
organization = {IEEE}
}
``` | 2,372 | [
[
-0.05853271484375,
-0.019195556640625,
0.02044677734375,
-0.01441192626953125,
-0.01203155517578125,
0.007373809814453125,
-0.015106201171875,
-0.050140380859375,
-0.0261688232421875,
0.033721923828125,
-0.019195556640625,
-0.047271728515625,
-0.032989501953125,... |
RyokoAI/ShareGPT52K | 2023-04-02T13:16:51.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:es",
"language:de",
"language:multilingual",
"license:cc0-1.0",
"conversation",
"rlhf",
"chatgpt",
"gpt-3.5",
"region:us"
] | RyokoAI | null | null | 238 | 77 | 2023-04-01T19:59:59 | ---
license: cc0-1.0
language:
- en
- es
- de
- multilingual
pretty_name: ShareGPT 90K Conversations
task_categories:
- text-generation
tags:
- conversation
- rlhf
- chatgpt
- gpt-3.5
size_categories:
- 10K<n<100K
---
# Dataset Card for ShareGPT~~52K~~90K
## Dataset Description
- **Homepage: N/A**
- **Repository: N/A**
- **Paper: N/A**
- **Leaderboard: N/A**
- **Point of Contact:** Ronsor Labs <ronsor@ronsor.com>
### Dataset Summary
This dataset is a collection of approximately ~~52,000~~**90,000** conversations scraped via the ShareGPT API before it was shut down.
These conversations include both user prompts and responses from OpenAI's ChatGPT.
This repository now contains the new 90K conversations version. The previous 52K may
be found in the `old/` directory.
### Supported Tasks and Leaderboards
* text-generation
### Languages
This dataset is expected to primarily consist of messages in English and other Western languages.
## Dataset Structure
### Data Instances
```json
{
"id": "2wtLdZm",
"conversations": [
{
"from": "human",
"value": "Are you familiar with the SAS programming language?"
},
{
"from": "gpt",
"value": "<div class=\"markdown prose w-full break-words dark:prose-invert light\"><p>Yes, I am familiar with the SAS programming language. SAS is a statistical software suite that is widely used in data analysis, data management, and business intelligence applications. It has its own programming language, which is used to manipulate and analyze data, generate reports, and automate data-driven workflows. The language is known for its ease of use and powerful capabilities, and it is commonly used in industries such as healthcare, finance, and government.</p></div>"
},
{
"from": "human",
"value": "Can you write a SAS proc that imports a dataset from a CSV saved on my local machine?"
}
]
}
```
### Data Fields
* **id**: conversation id
* **conversations**: conversation session array
* **from**: ChatGPT ("gpt") or the user ("human")
* **value**: message contents as raw HTML
### Data Splits
N/A
## Dataset Creation
### Curation Rationale
This is a decently large dataset of realistic human-AI conversations which I believe should be released
to the research community.
### Source Data
#### Initial Data Collection and Normalization
This data was collected using the ShareGPT API.
#### Who are the source language producers?
ShareGPT users and OpenAI ChatGPT.
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
This dataset *may* contain personal information, if ShareGPT users were sending such information to
ChatGPT. ChatGPT warns users not to submit personal information to it, however, so without further
evaluation, we believe that this dataset should contain little or no personal information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset may be used to train models that are competitive with OpenAI's ChatGPT. Please filter
this dataset first, as it may contain canned responses, raw HTML, and other undesirable information.
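Because the `value` fields hold raw HTML, one standard-library way to filter it is `html.parser`. This is a minimal sketch; real cleaning may need extra care with code blocks and entities:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text content, dropping all tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_html(raw):
    parser = TextExtractor()
    parser.feed(raw)
    return "".join(parser.parts)

cleaned = strip_html('<div class="markdown"><p>Yes, I am familiar with SAS.</p></div>')
```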
### Discussion of Biases
This dataset exhibits all the biases of OpenAI's ChatGPT models (GPT-3.5 and GPT-4) as well as the
biases of the users who uploaded the conversations.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
None.
### Licensing Information
**CC0: No Rights Reserved.**
The output of machine learning algorithms is uncopyrightable in the United States and other jurisdictions.
**Additionally, the OpenAI terms of service do not apply to this dataset as users of this dataset
are not accessing the OpenAI service.**
### Citation Information
TODO
### Contributions
These conversations were allegedly scraped by an anonymous user on 4chan.
The 90K version was sourced from [this post](https://boards.4channel.org/g/thread/92487155/lmg-local-models-general-snail-edition#p92490887).
Thanks, anon!
| 4,095 | [
[
-0.0282135009765625,
-0.0589599609375,
0.003993988037109375,
0.017333984375,
-0.0164642333984375,
0.004512786865234375,
-0.024261474609375,
-0.0362548828125,
0.0273284912109375,
0.03240966796875,
-0.058746337890625,
-0.048858642578125,
-0.04669189453125,
-0.... |
OllieStanley/oa_dolly_15k | 2023-05-02T14:27:18.000Z | [
"region:us"
] | OllieStanley | null | null | 2 | 77 | 2023-04-12T15:14:10 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
struct:
- name: CATEGORY
dtype: string
- name: CONTEXT
dtype: string
splits:
- name: train
num_bytes: 12686692
num_examples: 15015
download_size: 7872978
dataset_size: 12686692
---
# oa_dolly_15k
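A hedged sketch of producing records in the schema listed above; the lowercase input field names and the SOURCE value are assumptions about the raw Dolly format, not taken from the actual conversion script:

```python
def dolly_to_oa(record):
    """Map a raw Dolly-style record onto the OpenAssistant QA schema above."""
    return {
        "INSTRUCTION": record["instruction"],
        "RESPONSE": record["response"],
        "SOURCE": "databricks-dolly-15k",  # illustrative SOURCE value
        "METADATA": {
            "CATEGORY": record.get("category", ""),
            "CONTEXT": record.get("context", ""),
        },
    }

converted = dolly_to_oa({
    "instruction": "Name a primary color.",
    "response": "Red.",
    "category": "open_qa",
    "context": "",
})
```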
Dolly 15k dataset converted to OpenAssistant QA format. | 453 | [
[
0.0025234222412109375,
-0.02667236328125,
0.0096588134765625,
0.0272674560546875,
-0.033050537109375,
-0.022918701171875,
0.04119873046875,
0.0028839111328125,
0.018951416015625,
0.07177734375,
-0.036895751953125,
-0.049346923828125,
-0.019195556640625,
-0.0... |
minoosh/IEMOCAP_Text_dataset | 2023-05-16T11:37:14.000Z | [
"region:us"
] | minoosh | null | null | 0 | 77 | 2023-05-16T11:36:42 | ---
dataset_info:
features:
- name: TURN_NAME
dtype: string
- name: emotion
dtype:
class_label:
names:
'0': ang
'1': hap
'2': neu
'3': sad
- name: transcription
dtype: string
splits:
- name: Session1
num_bytes: 100099
num_examples: 1085
- name: Session2
num_bytes: 105421
num_examples: 1023
- name: Session3
num_bytes: 104680
num_examples: 1151
- name: Session4
num_bytes: 99515
num_examples: 1031
- name: Session5
num_bytes: 121589
num_examples: 1241
download_size: 261307
dataset_size: 531304
---
# Dataset Card for "IEMOCAP_Text_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 803 | [
[
-0.033660888671875,
-0.0189666748046875,
0.0002796649932861328,
0.019775390625,
-0.004665374755859375,
-0.00260162353515625,
0.01226806640625,
-0.02874755859375,
0.06298828125,
0.0290374755859375,
-0.055877685546875,
-0.049102783203125,
-0.0511474609375,
0.0... |
LEAP/subsampled_low_res | 2023-10-09T16:42:18.000Z | [
"arxiv:2306.08754",
"region:us"
] | LEAP | null | null | 1 | 77 | 2023-08-18T20:31:09 | Inputs and targets in this dataset are pre-normalized and scaled with .nc files found on the GitHub repo:
https://github.com/leap-stc/ClimSim/tree/main/preprocessing/normalizations
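As we understand it, "pre-normalized and scaled" here means the usual per-variable standardization; the sketch below is plain Python under that assumption, not the project's actual .nc-based pipeline:

```python
def standardize(values, mean, std):
    """(x - mean) / std, the typical per-variable normalization."""
    return [(v - mean) / std for v in values]

# Toy temperature-like values with assumed statistics.
normalized = standardize([300.0, 310.0, 290.0], mean=300.0, std=10.0)
```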
Read more: https://arxiv.org/abs/2306.08754. | 227 | [
[
-0.032440185546875,
-0.0221710205078125,
0.0182647705078125,
-0.0023670196533203125,
-0.01409149169921875,
-0.014190673828125,
-0.0194091796875,
0.0000731348991394043,
0.0193023681640625,
0.052947998046875,
-0.0638427734375,
-0.05816650390625,
-0.05108642578125,... |
lhallee/uniref_small | 2023-10-04T03:12:15.000Z | [
"region:us"
] | lhallee | null | null | 0 | 77 | 2023-10-04T03:12:13 | ---
dataset_info:
features:
- name: uniref
dtype: string
splits:
- name: train
num_bytes: 20739509
num_examples: 100000
download_size: 20824692
dataset_size: 20739509
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uniref_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 447 | [
[
-0.0394287109375,
-0.017242431640625,
0.01348876953125,
-0.0010957717895507812,
-0.0306854248046875,
-0.0092315673828125,
-0.0035552978515625,
0.0010528564453125,
0.05743408203125,
0.0391845703125,
-0.0579833984375,
-0.044189453125,
-0.032501220703125,
-0.01... |
sibozhu/paddington_cn | 2023-10-04T16:34:25.000Z | [
"region:us"
] | sibozhu | null | null | 0 | 77 | 2023-10-04T16:20:40 | Entry not found | 15 | [
[
-0.0213623046875,
-0.0149383544921875,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.03790283... |
NbAiLab/nbnn_language_detection | 2023-10-12T13:21:41.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:nb",
"language:no",
"language:nn",
"license:apache-2.0",
"region:us"
] | NbAiLab | null | null | 0 | 77 | 2023-10-10T12:49:36 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- nb
- 'no'
- nn
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path: "train.jsonl"
- split: train_a
path: "trainA.jsonl"
- split: train_b
path: "trainB.jsonl"
- split: train_nordic
path: "train_nordic.jsonl"
- split: train_cleaned
path: "train_cleaned.jsonl"
- split: dev
path: "dev.jsonl"
- split: dev_nordic
path: "dev_nordic.jsonl"
- split: test
path: "test.jsonl"
- split: test_nordiv
path: "test_nordic.jsonl"
---
# Dataset Card for Bokmål-Nynorsk Language Detection (main_train_split)
## Dataset Summary
This dataset is intended for language detection for Bokmål to Nynorsk and vice versa. It contains 800,000 sentence pairs, sourced from Språkbanken and pruned to avoid overlap with the NorBench dataset. The data comes from translations of news text from Norsk telegrambyrå (NTB), performed by Nynorsk pressekontor (NPK). In addition, the dev and test sets have 1,000 entries.
## Data Collection
- **Period**: February 2011 to December 2022
- **Source**: [Omsetjingsminne Nynorsk Pressekontor - Språkbanken](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-80/)
- **Size**: 800,000 sentence pairs
- **Format**: JSON-lines (with `text` , `language` fields)
### Processing Steps
1. Pruned to avoid overlap with NorBench
2. Deduplicated
3. Shuffled with a fixed seed (42)
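Steps 2 and 3 above can be sketched as follows; the original preparation script is not shown here, so treat this as an illustration of deduplication plus a seed-42 shuffle:

```python
import random

records = [
    {"text": "Dette er en setning.", "language": "nb"},
    {"text": "Dette er ei setning.", "language": "nn"},
    {"text": "Dette er en setning.", "language": "nb"},  # duplicate
]

# 2. Deduplicate by text, keeping the first occurrence in order.
seen = set()
deduped = []
for rec in records:
    if rec["text"] not in seen:
        seen.add(rec["text"])
        deduped.append(rec)

# 3. Shuffle with the fixed seed mentioned above.
random.Random(42).shuffle(deduped)
```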
## Usage
Intended for training Bokmål-Nynorsk detection models. For more details, refer to the repository where the dataset preparation script and the actual dataset reside.
| 1,638 | [
[
-0.026611328125,
-0.04534912109375,
0.0011777877807617188,
0.0196380615234375,
-0.046173095703125,
-0.00818634033203125,
-0.040313720703125,
-0.03509521484375,
0.01348876953125,
0.04742431640625,
-0.037139892578125,
-0.052764892578125,
-0.038482666015625,
0.... |
CollectiveCognition/chats-data-2023-10-16 | 2023-10-16T13:07:57.000Z | [
"license:mit",
"region:us"
] | CollectiveCognition | null | null | 12 | 77 | 2023-10-16T13:06:41 | ---
license: mit
---
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai
### Supported Tasks
- **Language Model Finetuning**: Training through behavior cloning and knowledge instilling.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but chats in any language may be present as users share more chats.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform with ShareGPT standard.
This is an example entry in the dataset; note that the number of turns varies, from as few as two to many:
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
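As a minimal sketch of working with records in this shape (the field names below simply mirror the example above; this is not an official loader), the turns of a conversation can be pulled out with the standard-library `json` module:

```python
import json

# A record shaped like the ShareGPT-style example above (abbreviated values)
record_json = """
{
  "title": "Interest Formulas Clarification",
  "model_name": "GPT-4",
  "custom_instruction": false,
  "conversations": [
    {"from": "human", "value": "Is this correct? The formula for simple interest..."},
    {"from": "assistant", "value": "There seems to be some confusion in your question..."}
  ]
}
"""

record = json.loads(record_json)

# Collect (role, text) pairs, one per turn in the conversation
turns = [(t["from"], t["value"]) for t in record["conversations"]]

print(record["title"])           # Interest Formulas Clarification
print(len(turns))                # 2
print(turns[0][0], turns[1][0])  # human assistant
```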
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@collectivecognition.ai
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There may be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, which are likely centered on the tasks users most often use ChatGPT for.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT | 6,119 | [
[
-0.027740478515625,
-0.07232666015625,
0.01480865478515625,
0.0293731689453125,
-0.0026092529296875,
0.00954437255859375,
-0.016021728515625,
-0.019561767578125,
0.0214385986328125,
0.037139892578125,
-0.051605224609375,
-0.05224609375,
-0.053863525390625,
-... |
coastalcph/fm_classifier_mutable-1-n | 2023-10-24T13:24:24.000Z | [
"region:us"
] | coastalcph | null | null | 0 | 77 | 2023-10-23T15:13:51 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1608732.147303521
num_examples: 8977
- name: all_fm
num_bytes: 30017653.417646818
num_examples: 157125
- name: validation
num_bytes: 1016408.1453548166
num_examples: 5916
- name: test
num_bytes: 1125889.2970730583
num_examples: 5724
download_size: 7539663
dataset_size: 33768683.00737821
---
# Dataset Card for "fm_classifier_mutable-1-n"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 893 | [
[
-0.04058837890625,
-0.0183258056640625,
0.01041412353515625,
0.01364898681640625,
-0.0157928466796875,
-0.005191802978515625,
0.01297760009765625,
-0.006168365478515625,
0.046112060546875,
0.019805908203125,
-0.05621337890625,
-0.039764404296875,
-0.050048828125... |
manu/french-bench-grammar-vocab-reading | 2023-10-26T14:19:28.000Z | [
"region:us"
] | manu | null | null | 0 | 77 | 2023-10-25T22:03:38 | ---
configs:
- config_name: default
data_files:
- split: Grammar
path: data/Grammar-*
- split: Vocabulary
path: data/Vocabulary-*
- split: Reading
path: data/Reading-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: difficulty
dtype: string
- name: language
dtype: string
- name: subject
dtype: string
- name: url
dtype: string
- name: answerA
dtype: string
- name: answerB
dtype: string
- name: answerC
dtype: string
- name: answerD
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: Grammar
num_bytes: 29094
num_examples: 119
- name: Vocabulary
num_bytes: 30944
num_examples: 119
- name: Reading
num_bytes: 115507
num_examples: 71
download_size: 0
dataset_size: 175545
---
# Dataset Card for "french-bench-grammar-vocab-reading"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,085 | [
[
-0.03656005859375,
-0.032623291015625,
0.0197296142578125,
0.024627685546875,
-0.006031036376953125,
-0.001422882080078125,
0.006053924560546875,
-0.0082855224609375,
0.038360595703125,
0.04608154296875,
-0.0482177734375,
-0.055938720703125,
-0.046539306640625,
... |
salma-remyx/hf_test | 2023-10-29T23:10:58.000Z | [
"region:us"
] | salma-remyx | null | null | 0 | 77 | 2023-10-28T03:23:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': cat
'1': dog
splits:
- name: train
num_bytes: 7544483.0
num_examples: 16
download_size: 7547089
dataset_size: 7544483.0
---
# Dataset Card for "hf_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 536 | [
[
-0.050018310546875,
-0.039886474609375,
0.005474090576171875,
0.02239990234375,
-0.006397247314453125,
-0.004619598388671875,
0.0190887451171875,
-0.005962371826171875,
0.051055908203125,
0.027099609375,
-0.057525634765625,
-0.04595947265625,
-0.032379150390625,... |
transformersbook/codeparrot-valid | 2022-02-05T16:23:18.000Z | [
"region:us"
] | transformersbook | null | null | 0 | 76 | 2022-03-02T23:29:22 | # CodeParrot Dataset
This is the validation split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
See the [full dataset](https://huggingface.co/datasets/transformersbook/codeparrot) for more information. | 588 | [
[
-0.0474853515625,
-0.0203857421875,
-0.021728515625,
0.0034275054931640625,
-0.01152801513671875,
0.0321044921875,
0.0021038055419921875,
0.00936126708984375,
0.005523681640625,
0.0487060546875,
-0.06634521484375,
-0.0221405029296875,
-0.0244598388671875,
0.... |
USC-MOLA-Lab/MFRC | 2022-08-26T00:36:03.000Z | [
"arxiv:2208.05545",
"region:us"
] | USC-MOLA-Lab | null | null | 4 | 76 | 2022-08-10T15:11:55 | # Dataset Card for MFRC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Reddit posts annotated for moral foundations
### Supported Tasks and Leaderboards
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- text
- subreddit
- bucket
- annotator
- annotation
- confidence
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
cc-by-4.0
### Citation Information
```bibtex
@misc{trager2022moral,
title={The Moral Foundations Reddit Corpus},
author={Jackson Trager and Alireza S. Ziabari and Aida Mostafazadeh Davani and Preni Golazazian and Farzan Karimi-Malekabadi and Ali Omrani and Zhihe Li and Brendan Kennedy and Nils Karl Reimer and Melissa Reyes and Kelsey Cheng and Mellow Wei and Christina Merrifield and Arta Khosravi and Evans Alvarez and Morteza Dehghani},
year={2022},
eprint={2208.05545},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
| 2,564 | [
[
-0.041412353515625,
-0.02880859375,
0.0080718994140625,
0.01305389404296875,
-0.026824951171875,
0.004611968994140625,
-0.008636474609375,
-0.0190887451171875,
0.025238037109375,
0.0288848876953125,
-0.06451416015625,
-0.05975341796875,
-0.052337646484375,
0... |
sjyhne/mapai_training_data | 2022-09-21T19:30:02.000Z | [
"task_categories:image-segmentation",
"task_ids:semantic-segmentation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"size_categories:10K<n<100K",
"license:mit",
"building-segmentation",
"region:us"
] | sjyhne | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | 1 | 76 | 2022-09-11T08:40:17 | ---
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license:
- mit
multilinguality: []
pretty_name: 'MapAI: Precision in Building Segmentation Dataset'
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- building-segmentation
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
# Dataset Card for MapAI: Precision in Building Segmentation Training Dataset
Training data for the MapAI Competition arranged by the Norwegian Mapping Authority, Centre for Artificial Intelligence Research at the University of Agder (CAIR), Norwegian Artificial Intelligence Research Consortium (NORA), AI:Hub, Norkart, and the Danish Agency for Data Supply and Infrastructure.
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nora.ai/competition/mapai-precision-in-building-segmentation/index.html
- **Repository:** https://github.com/Sjyhne/MapAI-Competition
- **Paper:** https://journals.uio.no/NMI/article/view/9849
- **Leaderboard:**
- **Point of Contact:** sander.jyhne@kartverket.no
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@Sjyhne](https://github.com/Sjyhne) for adding this dataset.
[
-0.03704833984375,
-0.032562255859375,
0.00969696044921875,
0.00913238525390625,
-0.0209503173828125,
-0.006305694580078125,
-0.0267333984375,
-0.02984619140625,
0.0222625732421875,
0.0199737548828125,
-0.039947509765625,
-0.07080078125,
-0.056854248046875,
... |
viola77data/recycling-dataset | 2022-09-13T13:17:15.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"recycling",
"image-classification",
"region:us"
] | viola77data | null | null | 5 | 76 | 2022-09-13T12:43:15 | ---
annotations_creators: []
language:
- en
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: recycling-dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- recycling
- image-classification
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for recycling-dataset
### Dataset Summary
This is a recycling dataset that can be used for image classification. It has 11 categories:
- aluminium
- batteries
- cardboard
- disposable plates
- glass
- hard plastic
- paper
- paper towel
- polystyrene
- soft plastics
- takeaway cups
It was scraped from DuckDuckGo using this tool: https://pypi.org/project/jmd-imagescraper/
| 743 | [
[
-0.046051025390625,
0.0092620849609375,
0.005535125732421875,
-0.0252532958984375,
-0.025177001953125,
0.005367279052734375,
0.032928466796875,
0.021331787109375,
0.0116424560546875,
0.041412353515625,
-0.033599853515625,
-0.027191162109375,
-0.020904541015625,
... |
eeshclusive/captionary-dataset | 2023-02-27T19:40:17.000Z | [
"region:us"
] | eeshclusive | null | null | 0 | 76 | 2023-02-27T18:46:46 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 55042409.0
num_examples: 162
- name: test
num_bytes: 16034324.0
num_examples: 51
download_size: 14804192
dataset_size: 71076733.0
---
# Dataset Card for "captionary-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 461 | [
[
-0.04217529296875,
0.00113677978515625,
0.0147857666015625,
0.0166778564453125,
-0.0289306640625,
0.0169525146484375,
0.01428985595703125,
-0.00310516357421875,
0.056671142578125,
0.0506591796875,
-0.058349609375,
-0.0467529296875,
-0.0445556640625,
-0.00406... |
luigisaetta/atco2_atcosim | 2023-03-02T09:09:43.000Z | [
"region:us"
] | luigisaetta | null | null | 0 | 76 | 2023-03-02T08:58:50 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2049253684.428
num_examples: 8142
- name: test
num_bytes: 483912622.003
num_examples: 1957
download_size: 2521597292
dataset_size: 2533166306.4309998
---
# Dataset Card for "atco2_atcosim"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 547 | [
[
-0.023345947265625,
-0.004184722900390625,
0.014251708984375,
0.00823974609375,
-0.0215911865234375,
0.0176239013671875,
0.0252838134765625,
-0.0194549560546875,
0.06787109375,
0.051239013671875,
-0.0374755859375,
-0.057952880859375,
-0.04718017578125,
-0.02... |
rgres/AerialDreams | 2023-08-22T18:34:03.000Z | [
"size_categories:10K<n<100K",
"IGN",
"art",
"StableDiffusion",
"region:us"
] | rgres | null | null | 1 | 76 | 2023-04-27T12:28:28 | ---
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: image
dtype: image
- name: segmentation
dtype: image
- name: metadata
struct:
- name: address
struct:
- name: ISO3166-2-lvl4
dtype: string
- name: ISO3166-2-lvl6
dtype: string
- name: ISO3166-2-lvl7
dtype: string
- name: aerialway
dtype: string
- name: aeroway
dtype: string
- name: allotments
dtype: string
- name: amenity
dtype: string
- name: bridge
dtype: string
- name: building
dtype: string
- name: city
dtype: string
- name: city_district
dtype: string
- name: club
dtype: string
- name: commercial
dtype: string
- name: country
dtype: string
- name: country_code
dtype: string
- name: county
dtype: string
- name: craft
dtype: string
- name: emergency
dtype: string
- name: farm
dtype: string
- name: farmyard
dtype: string
- name: hamlet
dtype: string
- name: healthcare
dtype: string
- name: highway
dtype: string
- name: historic
dtype: string
- name: house_number
dtype: string
- name: industrial
dtype: string
- name: isolated_dwelling
dtype: string
- name: junction
dtype: string
- name: landuse
dtype: string
- name: leisure
dtype: string
- name: locality
dtype: string
- name: man_made
dtype: string
- name: military
dtype: string
- name: municipality
dtype: string
- name: natural
dtype: string
- name: neighbourhood
dtype: string
- name: office
dtype: string
- name: place
dtype: string
- name: postcode
dtype: string
- name: province
dtype: string
- name: quarter
dtype: string
- name: railway
dtype: string
- name: region
dtype: string
- name: residential
dtype: string
- name: retail
dtype: string
- name: road
dtype: string
- name: shop
dtype: string
- name: square
dtype: string
- name: state
dtype: string
- name: state_district
dtype: string
- name: suburb
dtype: string
- name: tourism
dtype: string
- name: town
dtype: string
- name: village
dtype: string
- name: boundingbox
sequence: string
- name: camera
dtype: string
- name: code
dtype: string
- name: date
dtype: string
- name: dimensions
sequence: int64
- name: display_name
dtype: string
- name: domain
dtype: string
- name: lat
dtype: string
- name: latlong
sequence: float64
- name: licence
dtype: string
- name: lon
dtype: string
- name: origin
sequence: float64
- name: osm_id
dtype: int64
- name: osm_type
dtype: string
- name: patch_centroid_x
dtype: float64
- name: patch_centroid_y
dtype: float64
- name: patch_centroid_z
dtype: float64
- name: place_id
dtype: int64
- name: time
dtype: string
- name: unit_system
dtype: string
- name: zone
dtype: string
- name: class_rep
struct:
- name: class_repartition
struct:
- name: agricultural land
dtype: float64
- name: bare soil
dtype: float64
- name: brushwood
dtype: float64
- name: building
dtype: float64
- name: clear cut
dtype: float64
- name: coniferous
dtype: float64
- name: deciduous
dtype: float64
- name: greenhouse
dtype: float64
- name: herbaceous vegetation
dtype: float64
- name: impervious surface
dtype: float64
- name: ligneous
dtype: float64
- name: mixed
dtype: float64
- name: other
dtype: float64
- name: pervious surface
dtype: float64
- name: plowed land
dtype: float64
- name: snow
dtype: float64
- name: swimming pool
dtype: float64
- name: vineyard
dtype: float64
- name: water
dtype: float64
- name: prompt
dtype: string
- name: clip_label
dtype: string
splits:
- name: train
num_bytes: 26842989610.0
num_examples: 61712
download_size: 26786210818
dataset_size: 26842989610.0
tags:
- IGN
- art
- StableDiffusion
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "AerialDreams"
Dataset for the AerialDreams model: https://github.com/RubenGres/AerialDreams
<!---
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
---> | 5,098 | [
[
-0.035369873046875,
-0.006221771240234375,
-0.00508880615234375,
0.0134429931640625,
-0.0072479248046875,
-0.004669189453125,
0.0283355712890625,
-0.01422119140625,
0.06390380859375,
0.051055908203125,
-0.0684814453125,
-0.052490234375,
-0.0438232421875,
-0.... |
nicholasKluge/toxic-aira-dataset | 2023-06-28T16:15:24.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:pt",
"language:en",
"license:apache-2.0",
"toxicity",
"harm",
"region:us"
] | nicholasKluge | null | null | 2 | 76 | 2023-06-07T19:08:36 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- pt
- en
tags:
- toxicity
- harm
pretty_name: Toxic-Aira Dataset
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: non_toxic_response
dtype: string
- name: toxic_response
dtype: string
splits:
- name: portuguese
num_bytes: 5544823
num_examples: 16730
- name: english
num_bytes: 19577715
num_examples: 41843
download_size: 14914871
dataset_size: 25122538
---
# Dataset (`Toxic-Aira Dataset`)
### Overview
This dataset is a collection of texts containing harmful/toxic and harmless/non-toxic conversations and messages. All demonstrations are separated into two classes (`non_toxic_response` and `toxic_response`). This dataset was created from the Anthropic [helpful-harmless-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset, the AllenAI [prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog) dataset, the [real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) dataset (also from AllenAI), and the [Toxic Comment Classification](https://github.com/tianqwang/Toxic-Comment-Classification-Challenge) dataset.
The Portuguese version has translated copies from the above mentioned datasets ([helpful-harmless-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), [prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog), [real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts)), plus the [hatecheck-portuguese](https://huggingface.co/datasets/Paul/hatecheck-portuguese) and [told-br](https://huggingface.co/datasets/told-br) datasets.
### Dataset Details
- **Dataset Name:** Toxic-Aira Dataset
- **Language:** Portuguese and English
- **Total Size:** Over 40,000 demonstrations (Portuguese 16,730, English 41,843)
- **Sources:**
- Anthropic [helpful-harmless-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset.
- AllenAI [prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog) dataset.
- AllenAI [real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) dataset.
  - The [Toxic Comment Classification](https://github.com/tianqwang/Toxic-Comment-Classification-Challenge) dataset.
- The [hatecheck-portuguese](https://huggingface.co/datasets/Paul/hatecheck-portuguese) dataset.
- The [told-br](https://huggingface.co/datasets/told-br) dataset.
### Contents
⚠️ THE EXAMPLES IN THIS DATASET CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️
The dataset consists of data frames with the following columns:
- **non_toxic_response:** Text that was evaluated as non-toxic.
- **toxic_response:** Text evaluated as toxic.
```python
{
"non_toxic_response": "I prefer to see it as the study of change.",
"toxic_response":"So to my secret santa in the words of Jessie Pinkman, THANKS BITCH",
}
```
All demonstrations are shorter than 350 tokens (measured using the `BERT` tokenizer).
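A length cap like this could be enforced with a filter along the following lines. This is only a sketch: it uses whitespace splitting as a stand-in tokenizer, whereas the card measures length with the BERT tokenizer (which generally produces *more* tokens than a whitespace split).

```python
MAX_TOKENS = 350

def within_limit(text: str, max_tokens: int = MAX_TOKENS) -> bool:
    # Stand-in tokenizer: whitespace split. The dataset itself uses the
    # BERT tokenizer, so real counts would typically be higher than this.
    return len(text.split()) <= max_tokens

# Hypothetical rows mirroring the two-column layout described above
rows = [
    {"non_toxic_response": "I prefer to see it as the study of change.",
     "toxic_response": "word " * 400},  # second field exceeds the cap
]

# Keep only rows where both responses fit under the cap
kept = [r for r in rows
        if within_limit(r["non_toxic_response"]) and within_limit(r["toxic_response"])]
print(len(kept))  # 0
```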
### Use Cases
`Toxic-Aira Dataset` can be utilized to train models to detect harmful/toxic text.
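For classifier training, the paired columns can be flattened into labeled `(text, label)` examples. The snippet below is a minimal pure-Python sketch of that step (the label convention 0 = non-toxic, 1 = toxic is an assumption, not something the card specifies):

```python
# Hypothetical rows mirroring the two-column layout of the dataset
rows = [
    {"non_toxic_response": "I prefer to see it as the study of change.",
     "toxic_response": "So to my secret santa in the words of Jessie Pinkman, THANKS BITCH"},
]

# Flatten each paired row into two labeled examples: 0 = non-toxic, 1 = toxic
examples = []
for row in rows:
    examples.append((row["non_toxic_response"], 0))
    examples.append((row["toxic_response"], 1))

print(len(examples))                   # 2
print(examples[0][1], examples[1][1])  # 0 1
```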
## How to use
Available splits are `portuguese` and `english`.
```python
from datasets import load_dataset
dataset = load_dataset("nicholasKluge/toxic-aira-dataset")
```
### Dataset License
The `Toxic-Aira Dataset` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
### Disclaimer
This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original datasets before use. | 3,789 | [
[
-0.007366180419921875,
-0.05419921875,
0.00406646728515625,
0.024169921875,
-0.0185546875,
-0.024566650390625,
-0.0011205673217773438,
-0.037750244140625,
0.028839111328125,
0.02923583984375,
-0.04986572265625,
-0.044891357421875,
-0.042572021484375,
0.02124... |
open-llm-leaderboard/details_mosaicml__mpt-7b-chat | 2023-10-17T09:38:39.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 76 | 2023-08-17T23:59:59 | ---
pretty_name: Evaluation run of mosaicml/mpt-7b-chat
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mosaicml__mpt-7b-chat\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T09:38:22.163645](https://huggingface.co/datasets/open-llm-leaderboard/details_mosaicml__mpt-7b-chat/blob/main/results_2023-10-17T09-38-22.163645.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.06952600671140939,\n\
\ \"em_stderr\": 0.002604746204517829,\n \"f1\": 0.12196937919463072,\n\
\ \"f1_stderr\": 0.002840521979064293,\n \"acc\": 0.3626168565432783,\n\
\ \"acc_stderr\": 0.009260585769647573\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.06952600671140939,\n \"em_stderr\": 0.002604746204517829,\n\
\ \"f1\": 0.12196937919463072,\n \"f1_stderr\": 0.002840521979064293\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04094010614101592,\n \
\ \"acc_stderr\": 0.005458076796294338\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6842936069455406,\n \"acc_stderr\": 0.01306309474300081\n\
\ }\n}\n```"
repo_url: https://huggingface.co/mosaicml/mpt-7b-chat
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|arc:challenge|25_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T09_38_22.163645
path:
- '**/details_harness|drop|3_2023-10-17T09-38-22.163645.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T09-38-22.163645.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T09_38_22.163645
path:
- '**/details_harness|gsm8k|5_2023-10-17T09-38-22.163645.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T09-38-22.163645.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hellaswag|10_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-20T10:00:41.356813.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-20T10:00:41.356813.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-20T10:00:41.356813.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T09_38_22.163645
path:
- '**/details_harness|winogrande|5_2023-10-17T09-38-22.163645.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T09-38-22.163645.parquet'
- config_name: results
data_files:
- split: 2023_07_20T10_00_41.356813
path:
- results_2023-07-20T10:00:41.356813.parquet
- split: 2023_10_17T09_38_22.163645
path:
- results_2023-10-17T09-38-22.163645.parquet
- split: latest
path:
- results_2023-10-17T09-38-22.163645.parquet
---
# Dataset Card for Evaluation run of mosaicml/mpt-7b-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mosaicml/mpt-7b-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
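As the config listing above shows, a run timestamp such as `2023-10-17T09:38:22.163645` becomes the split name `2023_10_17T09_38_22.163645`: dashes and colons are replaced with underscores. A minimal sketch of that naming convention (the helper name is ours, not part of the `datasets` API):

```python
def run_timestamp_to_split(ts: str) -> str:
    """Map a run timestamp to the corresponding split name,
    e.g. "2023-10-17T09:38:22.163645" -> "2023_10_17T09_38_22.163645"."""
    # Split names cannot contain "-" or ":", so both become "_".
    return ts.replace("-", "_").replace(":", "_")

print(run_timestamp_to_split("2023-07-20T10:00:41.356813"))  # -> 2023_07_20T10_00_41.356813
```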
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mosaicml__mpt-7b-chat",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T09:38:22.163645](https://huggingface.co/datasets/open-llm-leaderboard/details_mosaicml__mpt-7b-chat/blob/main/results_2023-10-17T09-38-22.163645.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" config and in the "latest" split of each eval):
```python
{
    "all": {
        "em": 0.06952600671140939,
        "em_stderr": 0.002604746204517829,
        "f1": 0.12196937919463072,
        "f1_stderr": 0.002840521979064293,
        "acc": 0.3626168565432783,
        "acc_stderr": 0.009260585769647573
    },
    "harness|drop|3": {
        "em": 0.06952600671140939,
        "em_stderr": 0.002604746204517829,
        "f1": 0.12196937919463072,
        "f1_stderr": 0.002840521979064293
    },
    "harness|gsm8k|5": {
        "acc": 0.04094010614101592,
        "acc_stderr": 0.005458076796294338
    },
    "harness|winogrande|5": {
        "acc": 0.6842936069455406,
        "acc_stderr": 0.01306309474300081
    }
}
```
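The "all" block above looks like the unweighted mean of the per-task metrics (accuracy over gsm8k and winogrande; em/f1 come from drop alone). A quick sanity check under that assumption:

```python
# Per-task accuracies copied from the results above.
scores = {
    "harness|gsm8k|5": 0.04094010614101592,
    "harness|winogrande|5": 0.6842936069455406,
}

# Unweighted mean over the accuracy-style tasks.
all_acc = sum(scores.values()) / len(scores)
assert abs(all_acc - 0.3626168565432783) < 1e-12  # reported "all" accuracy
```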
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
mychen76/ds_receipts_v2_train | 2023-09-20T21:38:03.000Z | [
"region:us"
] | mychen76 | null | null | 0 | 76 | 2023-09-20T08:56:43 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 102670815.483
num_examples: 1137
download_size: 102731891
dataset_size: 102670815.483
---
# Dataset Card for "ds_receipts_v2_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 409 |
CJ-gyuwonpark/merge-data-v33 | 2023-10-22T02:32:42.000Z | [
"region:us"
] | CJ-gyuwonpark | null | null | 0 | 76 | 2023-09-21T04:04:12 | ## Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
``` | 250 |
mychen76/invoices-and-receipts_ocr_v1 | 2023-09-22T20:07:54.000Z | [
"region:us"
] | mychen76 | null | null | 2 | 76 | 2023-09-22T20:06:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: parsed_data
dtype: string
- name: raw_data
dtype: string
splits:
- name: train
num_bytes: 465061949.289
num_examples: 2043
- name: test
num_bytes: 23808463.0
num_examples: 125
- name: valid
num_bytes: 22325731.0
num_examples: 70
download_size: 281665599
dataset_size: 511196143.289
---
# Dataset Card for "invoices-and-receipts_ocr_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 782 |
Shengcao1006/MMHal-Bench | 2023-11-01T03:48:38.000Z | [
"task_categories:visual-question-answering",
"task_categories:image-to-text",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | Shengcao1006 | MMHal-Bench is a new evaluation benchmark specifically designed for hallucination in Large Multimodal Models (LMM). It contains 96 challenging questions based on images from OpenImages, and their corresponding ground-truth answers and image contents. | @article{2023llavarlhf,
author = {Zhiqing Sun and Sheng Shen and Shengcao Cao and Haotian Liu and Chunyuan Li and Yikang Shen and Chuang Gan and Liang-Yan Gui and Yu-Xiong Wang and Yiming Yang and Kurt Keutzer and Trevor Darrell},
title = {Aligning Large Multimodal Models with Factually Augmented RLHF},
publisher = {arXiv:2309.14525},
year = {2023}
} | 5 | 76 | 2023-09-25T04:27:58 | ---
arxiv: 2309.14525
license: apache-2.0
task_categories:
- visual-question-answering
- image-to-text
language:
- en
pretty_name: MMHal-Bench
size_categories:
- n<1K
---
### Overview
MMHal-Bench is a new evaluation benchmark specifically designed for hallucination in Large Multimodal Models (LMM). It contains 96 challenging questions based on images from OpenImages, and their corresponding ground-truth answers and image contents.
You may check `response_template.json` for more details. In the folder `responses` we have included some example responses from representative LMMs.
### Usage
To evaluate your own model on MMHal-Bench, first generate model responses to the image-question pairs. You may check the template `get_response.py` for how to read and write the response file.
After that, you may let GPT-4 rate your model's responses automatically. You will need the `openai` package installed and an API key. Then, run `eval_gpt4.py`:
```
python eval_gpt4.py \
--response [JSON file with model responses] \
--evaluation [JSON file with GPT-4 evaluation to be saved] \
--api-key [your OpenAI API key, starting with 'sk-'] \
--gpt-model [GPT model to be used, or 'gpt-4-0314' by default]
```
Please note that the GPT-4 API calls are not free. Depending on your model response lengths, evaluating each question may use 1.5k-2k tokens. Also, GPT-4 responses are not deterministic, so you may get different results with the same responses.
At the end of the outputs, you can see the evaluation results like this:
```
Average score: 2.05
Hallucination rate: 0.61
Average score for each question type: 2.33,1.25,2,2.5,1.5,3.33,2.33,1.17
``` | 1,672 |
llama2d/llama2d-mind2web | 2023-10-08T06:44:55.000Z | [
"region:us"
] | llama2d | null | null | 0 | 76 | 2023-09-25T22:51:49 | ---
dataset_info:
features:
- name: input_ids
sequence: float32
- name: coords
sequence:
sequence: float32
- name: labels
sequence: float32
- name: attention_mask
sequence: float32
splits:
- name: train
num_bytes: 106211392
num_examples: 2212
download_size: 12910313
dataset_size: 106211392
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2d-mind2web"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 599 |
carnival13/sur_test | 2023-10-08T06:27:14.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 76 | 2023-10-08T06:26:32 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1297540140
num_examples: 900000
download_size: 298907283
dataset_size: 1297540140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sur_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 497 |
ContextualAI/tiny-lambada | 2023-10-09T19:41:05.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 76 | 2023-10-08T17:27:51 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
dtype: string
splits:
- name: dev
num_bytes: 34989
num_examples: 100
download_size: 26234
dataset_size: 34989
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
# Dataset Card for "tiny-lambada"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 472 |
kietzmannlab/ecoset | 2023-10-14T20:10:10.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-classification",
"task_ids:multi-class-image-classification",
"source_datasets:original",
"license:cc",
"other-image-classification",
"image-classification",
"region:us"
] | kietzmannlab | Tired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images
from 565 basic level categories, chosen to be both (i) frequent in linguistic usage,
and (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’
is not). Here we collect resources associated with ecoset. This includes the dataset,
trained deep neural network models, code to interact with them, and published papers
using it. | @article{mehrer2021ecologically,
title={An ecologically motivated image dataset for deep learning yields better models of human vision},
author={Mehrer, Johannes and Spoerer, Courtney J and Jones, Emer C and Kriegeskorte, Nikolaus and Kietzmann, Tim C},
journal={Proceedings of the National Academy of Sciences},
volume={118},
number={8},
year={2021},
publisher={National Acad Sciences}
} | 7 | 75 | 2022-07-21T07:33:50 | ---
license: cc
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-classification
- multi-class-image-classification
paperswithcode_id: ecoset
pretty_name: Ecoset
tags:
- other-image-classification
- image-classification
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Installation](#installation)
- [Install requirements](#install-requirements)
- [Download settings](#download-settings)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.kietzmannlab.org/ecoset](https://www.kietzmannlab.org/ecoset/)
- **Repository:** [https://codeocean.com/capsule/9570390/tree/v1](https://codeocean.com/capsule/9570390/tree/v1)
- **Paper:** [https://www.pnas.org/doi/full/10.1073/pnas.2011417118](https://doi.org/10.1073/pnas.2011417118)
- **Point of Contact:** [tim.kietzmann@uni-osnabrueck.de](tim.kietzmann@uni-osnabrueck.de)
### Dataset Summary
Tired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images
from 565 basic level categories, chosen to be both (i) frequent in linguistic usage,
and (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’
is not).
Ecoset is a typical image recognition dataset, combining images of objects with appropriate
labels (one label per image). Importantly, ecoset is intended to provide higher ecological
validity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content.
For more information on the dataset, consider reading the [original publication](https://doi.org/10.1073/pnas.2011417118).
Ecoset consists of a train, test, and validation subset which all are openly available to the user.
### Supported Tasks and Leaderboards
Ecoset is a large multi-class single-label object recognition image dataset (similar to ImageNet).
## Installation
### Install Requirements
In order to work with ecoset, please make sure to install huggingface datasets:
```bash
pip install datasets
```
If you want to work with the dataset in `Huggingface.datasets`, you might also want to make sure to install PIL (`pip install Pillow`) in order to work with image input. However, downloading the dataset will work despite not having installed PIL.
### Download Settings
Please set `verification_mode="no_checks"` when downloading this dataset, otherwise the download will result in an error. Additionally, you may need to
install `defusedxml` via pip (it is required by the `_generate_examples` method) to avoid permission errors:
```python
from datasets import load_dataset
dataset = load_dataset("kietzmannlab/ecoset", verification_mode="no_checks")
```
Optionally, a `cache_dir` can be specified, where the zip file will be downloaded and extracted:
```python
from datasets import load_dataset
dataset = load_dataset("kietzmannlab/ecoset", verification_mode="no_checks", cache_dir='/path/to/dir')
```
| NOTE: If you get errors like: `FileNotFoundError: [Errno 2] No such file or directory:'<DATASET_PATH>'` this is likely due do having previously downloaded the dataset and then cancelling the download. If this is the case for you, you can fix this error by manually removing the dataset path and reinstalling the dataset. |
| --- |
## Dataset Structure
We show detailed information for all the configurations of the dataset. Currently, there is only one setting (`Full`) available, containing all data.
### Data Instances
#### Full
- **Size of downloaded dataset files:** 155 GB
- **Total amount of disk used:** 311 GB
## Dataset Creation
A total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to assure an expected misclassification rate per category of <4%.
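The three selection criteria above amount to a conjunctive filter over candidate categories. A toy sketch (the threshold values here are illustrative assumptions, not the paper's actual cutoffs):

```python
# Toy candidates with the three properties used for ecoset's category selection.
candidates = [
    {"name": "table",   "subtlex_freq": 120.5, "concreteness": 4.9, "n_images": 1500},
    {"name": "romance", "subtlex_freq": 98.0,  "concreteness": 1.8, "n_images": 2000},
    {"name": "axolotl", "subtlex_freq": 0.1,   "concreteness": 4.8, "n_images": 150},
]

def keep(c, min_freq=1.0, min_concreteness=4.0, min_images=700):
    # Frequent in subtitles, rated concrete, and enough images available.
    return (c["subtlex_freq"] >= min_freq
            and c["concreteness"] >= min_concreteness
            and c["n_images"] >= min_images)

selected = [c["name"] for c in candidates if keep(c)]
```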
### Curation Rationale
More information on the curation of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118).
### Source Data
The source data is available under: [https://codeocean.com/capsule/9570390/tree/v1](https://codeocean.com/capsule/9570390/tree/v1)
### Annotations
Each ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset.
### Personal and Sensitive Information
The dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all image with an NSFW score above 0.8. For this dataset, only images with secured license information was used, which should prevent the inclusion of images without consent of the image's authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
Large-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content. Ecoset was created with the aim to reduce these biases and consequently improve the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118).
### Discussion of Biases
Despite best efforts to provide an ecologically valid and overall less biased dataset, ecoset is still likely to contain biased data. The category selection of ecoset was based on human concreteness ratings and word frequencies in a corpus consisting of American television and film subtitles. This undoubtedly biases the category selection toward Western cultures. Image inclusion was based on the availability via Bing/Flickr search results as well as the existence of relevant ImageNet categories. Images depicting people, specifically the categories “man,” “woman,” and “child,” were not sampled according to census distributions (age, ethnicity, gender, etc.).
### Other Known Limitations
In addition to points mentioned in [Discussion of Biases](#discussion-of-biases), ecoset image and category distributions do not reflect the naturalistic, egocentric visual input typically encountered in the everyday life of infant and adults.
## Additional Information
### Dataset Curators
The corpus was put together by Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, Nikolaus Kriegeskorte, and Tim C. Kietzmann.
### Licensing Information
Ecoset is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license (cc-by-nc-sa-2.0).
### Citation Information
```
@article{mehrer2021ecologically,
title={An ecologically motivated image dataset for deep learning yields better models of human vision},
author={Mehrer, Johannes and Spoerer, Courtney J and Jones, Emer C and Kriegeskorte, Nikolaus and Kietzmann, Tim C},
journal={Proceedings of the National Academy of Sciences},
volume={118},
number={8},
pages={e2011417118},
year={2021},
publisher={National Acad Sciences}
}
```
### Contributions
The ecoset dataloader and dataset card was created by [@DiGyt](https://github.com/DiGyt) on behalf of [@kietzmannlab](https://huggingface.co/kietzmannlab).
For questions and suggestions feel free to reach out.
| 8,447 |
ajaykarthick/imdb-movie-reviews | 2023-02-08T21:08:35.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:feature-extraction",
"size_categories:10K<n<100K",
"region:us"
] | ajaykarthick | null | null | 3 | 75 | 2023-02-08T18:30:11 | ---
task_categories:
- text-classification
- token-classification
- feature-extraction
pretty_name: Movie-Reviews
size_categories:
- 10K<n<100K
---
# IMDB Movie Reviews

This is a dataset for binary sentiment classification containing a substantial amount of data: a set of 50,000 highly polar movie reviews for training text classification models.
The dataset is downloaded from
https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
This data is processed and split into training and test datasets (a 20% test split). The training dataset contains 40,000 reviews and the test dataset contains 10,000 reviews.
Labels are equally distributed in both the training and test datasets: the training dataset contains 20,000 records for each of the positive and negative classes, and the test dataset contains 5,000 records for each label.
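A stratified split like the one described (equal label counts in both splits, 20% held out) can be sketched in a few lines; toy data is used here, since the card does not ship the actual preprocessing script:

```python
import random

random.seed(0)
# Toy corpus: 100 reviews per label.
reviews = [("review %d" % i, label) for i in range(100) for label in ("pos", "neg")]

def stratified_split(data, test_frac=0.2):
    # Group by label, then hold out the same fraction of each group.
    by_label = {}
    for text, label in data:
        by_label.setdefault(label, []).append((text, label))
    train, test = [], []
    for items in by_label.values():
        random.shuffle(items)
        cut = round(len(items) * test_frac)
        test.extend(items[:cut])
        train.extend(items[cut:])
    return train, test

train, test = stratified_split(reviews)
```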
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
``` | 1,544 |
Multimodal-Fatima/COCO_captions_train | 2023-03-17T21:59:22.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | 2 | 75 | 2023-03-14T16:05:25 | ---
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: sentences_tokens
list:
list: string
- name: sentences_raw
list: string
- name: sentences_sentid
list: int32
- name: cocoid
dtype: int32
- name: id
dtype: int64
splits:
- name: train
num_bytes: 18595506212.0
num_examples: 113287
download_size: 18500220513
dataset_size: 18595506212.0
---
# Dataset Card for "COCO_captions_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 758 |
Francesco/signatures-xc8up | 2023-03-30T09:33:26.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | 1 | 75 | 2023-03-30T09:33:06 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': signatures
'1': signature
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: signatures-xc8up
tags:
- rf100
---
# Dataset Card for signatures-xc8up
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/signatures-xc8up
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
signatures-xc8up
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
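Since `bbox` follows the COCO convention `[x_min, y_min, width, height]`, converting to corner coordinates and cross-checking the stored `area` is straightforward; a small sketch using the first box of the sample instance above:

```python
# COCO-format boxes are [x_min, y_min, width, height].
def coco_to_corners(bbox):
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

bbox = [302.0, 109.0, 73.0, 52.0]  # first box from the sample instance
corners = coco_to_corners(bbox)
area = bbox[2] * bbox[3]  # 73 * 52 = 3796, matching the sample's 'area' field
```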
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/signatures-xc8up
### Citation Information
```
@misc{ signatures-xc8up,
title = { signatures xc8up Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/signatures-xc8up } },
url = { https://universe.roboflow.com/object-detection/signatures-xc8up },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 3,376 |
BelleGroup/train_0.5M_CN | 2023-04-03T08:11:22.000Z | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | BelleGroup | null | null | 73 | 75 | 2023-03-31T10:17:49 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
Contains about 500,000 Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Sample
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Fields:
```
instruction: the instruction
input: the input (empty for every example in this dataset)
output: the output
```
## Usage Restrictions
This dataset, and any derivatives generated from it, may only be used for research purposes; commercial use and any other use that could harm society are prohibited.
This dataset does not represent the position, interests, or views of any party, and is unrelated to any kind of claim by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset.
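Instruction records with an empty `input` field are typically flattened into prompt/completion pairs before fine-tuning; a minimal sketch (the concatenation scheme is an illustrative assumption, not prescribed by this dataset):

```python
# Turn an instruction record into a (prompt, completion) pair.
def to_pair(record):
    prompt = record["instruction"]
    if record["input"]:  # empty for every example in this dataset
        prompt += "\n" + record["input"]
    return prompt, record["output"]

record = {"instruction": "Add 1 to every number in the text.", "input": "", "output": "Done."}
prompt, completion = to_pair(record)
```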
| 506 |
FourthBrainGenAI/MarketMail-AI | 2023-04-26T07:08:28.000Z | [
"region:us"
] | FourthBrainGenAI | null | null | 0 | 75 | 2023-04-26T07:08:24 | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 30474
num_examples: 17
download_size: 31271
dataset_size: 30474
---
# Dataset Card for "cool_new_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 437 |
ArtifactAI/arxiv-cs-ml-instruct-tune-50k | 2023-06-21T13:45:31.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"doi:10.57967/hf/0800",
"region:us"... | ArtifactAI | null | null | 3 | 75 | 2023-06-20T06:31:29 | ---
annotations_creators:
- no-annotation
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: arxiv-cs-ml-instruct-tune-50k
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: arxiv-cs-ml-instruct-tune-50k
---
# Dataset Card for "arxiv-cs-ml-instruct-tune-50k"
### Dataset Summary
The "ArtifactAI/arxiv-cs-ml-instruct-tune-50k" dataset consists of question-answer pairs derived from ArXiv abstracts from the following categories:
"cs.CL", "cs.AI", "cs.CV", "cs.HC", "cs.IR", "cs.RO", "cs.NE", "stat.ML". Questions are generated using the [t5-base model](https://huggingface.co/t5-base), while the answers are generated using the [GPT-3.5-turbo model](https://openai.com/chatgpt).
### Languages
English
## Dataset Structure
### Data Instances
#### train
- **Size of downloaded dataset files:** 38.4 MB
An example of 'train' looks as follows.
```
{
"question": "What is multitask learning?",
"answer": "Multitask learning is a technique in machine learning that allows a single model to simultaneously learn multiple related tasks.
Instead of training separate models for each task, multitask learning leverages shared patterns across tasks to improve overall performance.
This can lead to faster training, better accuracy, and more efficient use of computational resources.
For example, a model might be trained to simultaneously predict the sentiment of customer reviews and categorize them by topic.
Both tasks require analyzing natural language, so the shared knowledge can help improve performance on both tasks."
}
```
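A quick schema sanity check for records shaped like the example above can catch malformed pairs early; note that the `endswith("?")` heuristic is an assumption about the t5-generated questions, not a guarantee of the dataset:

```python
# Validate that a record looks like a well-formed question-answer pair.
def is_valid(record):
    return (
        set(record) == {"question", "answer"}
        and isinstance(record["question"], str)
        and record["question"].strip().endswith("?")
        and isinstance(record["answer"], str)
        and len(record["answer"].strip()) > 0
    )

sample = {"question": "What is multitask learning?", "answer": "A technique ..."}
```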
### Data Fields
The data fields present in the dataset are as follows:
- question: a string feature representing the question.
- answer: a string feature representing the answer.
#### plain_text
- `text`: a `string` feature.
### Data Splits
train: 50,063 question answer pairs
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
Question-answer pairs derived from [ArXiv](https://arxiv.org/) abstracts.
#### Initial Data Collection and Normalization
The "ArtifactAI/arxiv-cs-ml-instruct-tune-50k" dataset consists of question-answer pairs derived from ArXiv abstracts. Questions are generated from ArXiv papers in the following categories:
- cs.CL
- cs.AI
- cs.CV
- cs.HC
- cs.IR
- cs.RO
- cs.NE
- stat.ML
Questions are generated using the [t5-base model](https://huggingface.co/t5-base), while the answers are generated using the [GPT-3.5-turbo model](https://openai.com/chatgpt).
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
None
#### Notice policy
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact us at the following email address: matt at artifactai.com and datasets at huggingface.co
#### Take down policy
The original authors will comply to legitimate requests by removing the affected sources from the next release of the corpus.
Hugging Face will also update this repository accordingly.
### Citation Information
```
@misc{arxiv-cs-ml-instruct-tune-50k,
title={arxiv-cs-ml-instruct-tune-50k},
author={Matthew Kenney},
year={2023}
}
```
| 3,821 |
ImagenHub/Text_Guided_Image_Editing | 2023-10-05T18:34:28.000Z | [
"task_categories:image-to-image",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"arxiv:2310.01596",
"region:us"
] | ImagenHub | null | null | 1 | 75 | 2023-08-06T02:11:53 | ---
language:
- en
license: cc-by-4.0
size_categories:
- n<1K
task_categories:
- image-to-image
dataset_info:
features:
- name: img_id
dtype: string
- name: turn_index
dtype: int32
- name: source_img
dtype: image
- name: mask_img
dtype: image
- name: instruction
dtype: string
- name: source_global_caption
dtype: string
- name: target_global_caption
dtype: string
- name: target_local_caption
dtype: string
- name: target_img
dtype: image
splits:
- name: dev
num_bytes: 1521276668.0
num_examples: 528
- name: filtered
num_bytes: 504007147.0
num_examples: 179
- name: extra
num_bytes: 709468665.0
num_examples: 249
download_size: 2734685875
dataset_size: 2734752480.0
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: filtered
path: data/filtered-*
- split: extra
path: data/extra-*
---
# Dataset Card
Dataset in [ImagenHub](https://arxiv.org/abs/2310.01596).
# Citation
Please kindly cite our paper if you use our code, data, models or results:
```
@article{ku2023imagenhub,
title={ImagenHub: Standardizing the evaluation of conditional image generation models},
author={Max Ku and Tianle Li and Kai Zhang and Yujie Lu and Xingyu Fu and Wenwen Zhuang and Wenhu Chen},
journal={arXiv preprint arXiv:2310.01596},
year={2023}
}
``` | 1,362 | [
[
-0.021728515625,
-0.0195159912109375,
0.01116943359375,
-0.003650665283203125,
-0.0413818359375,
-0.049896240234375,
-0.0000928640365600586,
-0.0216522216796875,
-0.01409912109375,
0.03546142578125,
-0.0158233642578125,
-0.054168701171875,
-0.03192138671875,
... |
ArmelR/oasst1_guanaco_english | 2023-08-26T01:05:26.000Z | [
"region:us"
] | ArmelR | null | null | 0 | 75 | 2023-08-26T01:05:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 2500171.710492605
num_examples: 2181
- name: test
num_bytes: 278561.0846628625
num_examples: 243
download_size: 1690262
dataset_size: 2778732.7951554675
---
# Dataset Card for "oasst1_guanaco_english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 620 | [
[
-0.018829345703125,
-0.0222930908203125,
0.015350341796875,
0.01464080810546875,
-0.016326904296875,
-0.00505828857421875,
0.005916595458984375,
-0.017608642578125,
0.06085205078125,
0.021087646484375,
-0.04345703125,
-0.06951904296875,
-0.052581787109375,
-... |
YaHi/chata_dataset | 2023-09-22T20:12:56.000Z | [
"region:us"
] | YaHi | null | null | 1 | 75 | 2023-09-19T23:42:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: output
dtype: string
- name: text
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 15929527
num_examples: 9930
- name: test
num_bytes: 1227211
num_examples: 717
download_size: 5933059
dataset_size: 17156738
---
# Dataset Card for "chata_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 616 | [
[
-0.038421630859375,
-0.02581787109375,
0.0018243789672851562,
0.02294921875,
-0.01287841796875,
0.00324249267578125,
0.016754150390625,
-0.008026123046875,
0.0682373046875,
0.03582763671875,
-0.0584716796875,
-0.047210693359375,
-0.037017822265625,
-0.029968... |
umarigan/turkish_wikipedia | 2023-10-03T08:39:01.000Z | [
"region:us"
] | umarigan | null | null | 1 | 75 | 2023-10-03T08:38:29 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 1142404262
num_examples: 524601
download_size: 629924151
dataset_size: 1142404262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "turkish_wikipedia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 519 | [
[
-0.06640625,
-0.0249481201171875,
0.00962066650390625,
0.00742340087890625,
-0.0300140380859375,
-0.02569580078125,
-0.0015687942504882812,
-0.01507568359375,
0.052337646484375,
0.0260467529296875,
-0.05511474609375,
-0.0634765625,
-0.040618896484375,
-0.006... |
skaltenp/textworld_cooking_augmented | 2023-10-10T13:06:02.000Z | [
"region:us"
] | skaltenp | null | null | 0 | 75 | 2023-10-10T11:37:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_demo
path: data/train_demo-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: z8_path
dtype: string
- name: id
dtype: string
- name: ulx_path
dtype: string
- name: score
dtype: int64
- name: demonstration
sequence:
sequence: string
- name: moves
dtype: int64
- name: json_path
dtype: string
splits:
- name: train
num_bytes: 111951142
num_examples: 16272
- name: train_demo
num_bytes: 284608
num_examples: 856
- name: valid
num_bytes: 5418846
num_examples: 872
- name: test
num_bytes: 5613370
num_examples: 868
download_size: 21851355
dataset_size: 123267966
---
# Dataset Card for "textworld_cooking_augmented"
This is a synthetically generated dataset based on the [Textworld Cooking Game](https://github.com/microsoft/TextWorld/tree/main/textworld/challenges/tw_cooking).
Since the GitHub repository enables the generation of games, it was used to generate the games and their walkthroughs.
The training walkthroughs are augmented with additional "human behavior" like checking the inventory between steps and moving to other rooms.
There is an additional split called "train_demo", which will be filled with human demonstration data in future work.
The most important column is the "demonstration" column. It contains the engine and player text of the generated games and can be used to train LLMs to play the game.
The game files linked in the "path" columns can be downloaded [here](https://drive.google.com/file/d/1lDeqTyQqrw06e6dAHDnXXrEC9r7a2weJ/view?usp=sharing).
| 1,745 | [
[
-0.01540374755859375,
-0.04095458984375,
0.00666046142578125,
0.004764556884765625,
0.0018024444580078125,
0.023834228515625,
0.0025787353515625,
-0.022369384765625,
0.0214385986328125,
0.0509033203125,
-0.08050537109375,
-0.052581787109375,
-0.0142974853515625,... |
mozilla-foundation/common_voice_6_0 | 2023-07-29T16:00:06.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | 0 | 74 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
as:
- n<1K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fi:
- 1K<n<10K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
hi:
- n<1K
hsb:
- 1K<n<10K
hu:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 10K<n<100K
it:
- 100K<n<1M
ja:
- 1K<n<10K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lg:
- 1K<n<10K
lt:
- 1K<n<10K
lv:
- 1K<n<10K
mn:
- 10K<n<100K
mt:
- 10K<n<100K
nl:
- 10K<n<100K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 10K<n<100K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 1K<n<10K
ru:
- 10K<n<100K
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 10K<n<100K
ta:
- 10K<n<100K
th:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
uk:
- 10K<n<100K
vi:
- 1K<n<10K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- 10K<n<100K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 6.0
language_bcp47:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 6.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 9261 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7327 validated hours in 60 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Assamese, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Georgian, German, Greek, Hakha Chin, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Slovenian, Sorbian, Upper, Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, and train splits all contain data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_6_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
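As a standalone sanity check (no dataset download required), the same cleaning logic can be exercised on a plain dictionary batch:

```python
# Standalone sketch of the preprocessing logic above, applied to a
# plain dictionary instead of a loaded dataset (no download needed).
def prepare_dataset(batch):
    transcription = batch["sentence"]
    if transcription.startswith('"') and transcription.endswith('"'):
        # remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]
    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."
    batch["sentence"] = transcription
    return batch

print(prepare_dataset({"sentence": '"the cat sat on the mat"'})["sentence"])
# the cat sat on the mat.
```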
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| 10,751 | [
[
-0.040191650390625,
-0.05413818359375,
0.00989532470703125,
0.033111572265625,
-0.0191497802734375,
0.00238800048828125,
-0.042633056640625,
-0.0169677734375,
0.0328369140625,
0.04052734375,
-0.057891845703125,
-0.07183837890625,
-0.033172607421875,
0.018203... |
naver-clova-ix/synthdog-ko | 2022-07-22T06:43:08.000Z | [
"region:us"
] | naver-clova-ix | null | null | 5 | 74 | 2022-07-20T00:45:45 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
gigant/oldbookillustrations | 2022-08-03T17:35:37.000Z | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-to-image",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"l... | gigant | null | null | 13 | 74 | 2022-07-28T08:31:19 | ---
annotations_creators:
- expert-generated
language:
- en
- fr
- de
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: Old Book Illustrations
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- lam
- 1800-1900
task_categories:
- text-to-image
- image-to-text
- image-to-image
task_ids:
- image-captioning
---
# Dataset Card for Old Book Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://www.oldbookillustrations.com/)**
### Dataset Summary
The Old Book Illustrations dataset contains 4172 illustrations scanned from old books; this collection was collected & curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/).
The webmaster of Old Book Illustrations kindly allowed us to scrape this information in order to create this dataset for the [BigLAM initiative](https://huggingface.co/biglam).
### Languages
The captions and descriptions are mostly in English but can contain some sentences from other languages such as French or German.
For instance you can find this description that contains a French sentence:
>The caption reads in the original French: Vue de l’aqueduc de Salones qui conduisait l’eau à Spalatro.
## Dataset Structure
Each row contains information gathered from the page of an illustration on the website [Old Book Illustrations](https://www.oldbookillustrations.com/). As of July 2022, there are 4172 illustrations in this dataset.
### Data Fields
* `rawscan`: the image as originally scanned from the book, without further processing
* `1600px`: the cleaned image, resized to a width of 1600 pixels (height can vary)
* `info_url`: URL to the illustration page on oldbookillustrations.com
* `info_src`: URL to an icon-sized version of the image
* `info_alt`: short description of the image
* `artist_name`: artist name
* `artist_date`: birth date of the artist
* `artist_countries`: list of the countries the artist is from
* `book_title`: original title of the book the illustration is extracted from
* `book_authors`: list of the authors of the book
* `book_publishers`: list of the publishers of the book
* `openlibrary-url`: URL to the openlibrary entry for the book
* `tags`: list of keywords for this illustration on oldbookillustrations.com
* `illustration_source_name`: list of the sources for this illustration
* `illustration_source_url`: list of the URL for these sources
* `illustration_subject`: category of the subject represented in the illustration
* `illustration_format`: category of the format of the illustration
* `image_title`: title of the image
* `image_caption`: caption of the image. Seems to be the caption that appears next to the image in the book, translated to English if in another language
* `image_description`: longer description of the image. If there is one, it also quotes the caption in the original language
* `rawscan_url`: URL to the rawscan image on oldbookillustration.com
* `1600px_url`: URL to the cleaned image on oldbookillustration.com
## Dataset Creation
### Curation Rationale
This collection was collected & curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/).
This version contains all the data that was available on the website as of July 2022, but the website is being actively maintained so if you want more old book illustrations, make sure to check [Old Book Illustrations](https://www.oldbookillustrations.com/).
### Source Data
#### Initial Data Collection and Normalization
Initial data is gathered from the website [Old Book Illustrations](https://www.oldbookillustrations.com/). The sources of the illustration scans are specified for each entry in the columns `illustration_source_name` and `illustration_source_url`.
### Personal and Sensitive Information
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Considerations for Using the Data
### Discussion of Biases
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Additional Information
### Dataset Curators
The Old Book Illustrations collection is curated and maintained by the team of the [Old Book Illustrations website](https://www.oldbookillustrations.com/).
### Licensing Information
[Old Book Illustrations](https://www.oldbookillustrations.com/) website reads:
>We don’t limit the use of the illustrations available on our site, but we accept no responsibility regarding any problem, legal or otherwise, which might result from such use. More specifically, we leave it up to users to make sure that their project complies with the copyright laws of their country of residence. Text content (descriptions, translations, etc.) is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The Old Book Illustrations webmaster mentioned that most images are public domain in the US and Europe, but there can be some exceptions. Examples are the illustrations from [*Early poems of William Morris*](https://www.oldbookillustrations.com/titles/early-poems-of-william-morris/), whose illustrator died in 1955, so her work is not public domain in Europe as of 2022, and [*Under the hill*](https://www.oldbookillustrations.com/titles/under-the-hill/), which was published in the US in 1928 and is therefore not public domain there.
### Citation Information
```bibtex
@misc{old book illustrations_2007,
url={https://www.oldbookillustrations.com/},
journal={Old Book Illustrations}, year={2007}}
```
### Contributions
Thanks to [@gigant](https://huggingface.co/gigant) ([@giganttheo](https://github.com/giganttheo)) for adding this dataset. | 7,304 | [
[
-0.04132080078125,
-0.0201873779296875,
0.0119476318359375,
-0.01514434814453125,
-0.037567138671875,
-0.035003662109375,
0.0180816650390625,
-0.057342529296875,
0.020477294921875,
0.06103515625,
-0.033233642578125,
-0.048553466796875,
-0.03466796875,
0.0234... |
priyank-m/SROIE_2019_text_recognition | 2022-08-27T21:38:24.000Z | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"text-recognition",
"recognition",
"region:us"
] | priyank-m | null | null | 2 | 74 | 2022-08-27T20:56:31 | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: SROIE_2019_text_recognition
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- text-recognition
- recognition
task_categories:
- image-to-text
task_ids:
- image-captioning
---
We prepared this dataset using the Scanned Receipts OCR and Information Extraction (SROIE) dataset.
The SROIE dataset contains 973 scanned receipts in English.
Cropping the bounding boxes from each of the receipts to generate this text-recognition dataset resulted in 33626 images for the train set and 18704 images for the test set.
The text annotations for all the images inside a split are stored in a metadata.jsonl file.
usage:
from datasets import load_dataset
data = load_dataset("priyank-m/SROIE_2019_text_recognition")
source of raw SROIE dataset:
https://www.kaggle.com/datasets/urbikn/sroie-datasetv2 | 924 | [
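Since the per-split annotations live in a `metadata.jsonl` file, they can also be inspected directly. A minimal sketch (the field names `file_name` and `text` are assumed from the usual imagefolder convention, not confirmed by this card):

```python
import json
import os
import tempfile

# Minimal sketch of reading a metadata.jsonl annotation file.
# The field names "file_name" and "text" are assumed from the common
# imagefolder convention; check the actual file before relying on them.
def read_metadata(path):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Hypothetical example content, written to a temporary file for illustration.
rows_in = [{"file_name": "img_001.jpg", "text": "TOTAL 12.50"}]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for row in rows_in:
        f.write(json.dumps(row) + "\n")
    path = f.name

rows = read_metadata(path)
print(rows[0]["text"])
# TOTAL 12.50
os.remove(path)
```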
[
-0.015533447265625,
0.004825592041015625,
0.010284423828125,
0.0008630752563476562,
-0.040802001953125,
-0.01953125,
0.0036373138427734375,
-0.0501708984375,
0.02813720703125,
0.061553955078125,
-0.038909912109375,
-0.0227508544921875,
-0.04437255859375,
0.0... |
jensjorisdecorte/Synthetic-ESCO-skill-sentences | 2023-07-25T21:40:31.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc",
"Skill Extraction",
"Synthetic Data",
"arxiv:2307.10778",
"region:us"
] | jensjorisdecorte | null | null | 2 | 74 | 2023-07-20T10:53:22 | ---
license: cc
task_categories:
- text-classification
language:
- en
tags:
- Skill Extraction
- Synthetic Data
pretty_name: Synthetic ESCO skill sentences
size_categories:
- 100K<n<1M
---
# Synthetic job ads for all ESCO skills
## Dataset Description
- **Homepage:** coming soon
- **Repository:** coming soon
- **Paper:** https://arxiv.org/abs/2307.10778
- **Point of Contact:** jensjoris@techwolf.ai
### Dataset Summary
This dataset contains 10 synthetically generated job ad sentences for almost all (99.5%) skills in ESCO v1.1.0.
### Languages
We use the English version of ESCO, and all generated sentences are in English.
## Dataset Structure
The dataset consists of 138,260 `(sentence, skill)` pairs.
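The stated counts are mutually consistent: 138,260 pairs at 10 sentences per skill implies 13,826 covered skills, which matches the ~99.5% coverage figure if ESCO v1.1.0 contains roughly 13,896 skills (the total skill count is an assumption here, not stated on this card):

```python
# Quick arithmetic check of the card's figures.
pairs = 138_260
sentences_per_skill = 10
covered_skills = pairs // sentences_per_skill
print(covered_skills)  # 13826

# Assumed total number of skills in ESCO v1.1.0 (not stated on this card).
assumed_total_skills = 13_896
print(round(100 * covered_skills / assumed_total_skills, 1))  # 99.5
```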
### Citation Information
[More Information Needed] | 768 | [
[
-0.0011548995971679688,
-0.049041748046875,
0.031890869140625,
0.025390625,
0.0011157989501953125,
-0.00688934326171875,
0.001918792724609375,
-0.033447265625,
0.04791259765625,
0.0501708984375,
-0.04461669921875,
-0.06915283203125,
-0.018218994140625,
0.030... |
Maxx0/sexting-nsfw-adultconten | 2023-09-02T15:58:40.000Z | [
"region:us"
] | Maxx0 | null | null | 9 | 74 | 2023-09-02T15:13:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
LeoLM/German_Poems | 2023-09-04T20:04:23.000Z | [
"region:us"
] | LeoLM | null | null | 1 | 74 | 2023-09-04T20:04:20 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: topic
dtype: string
- name: poem
dtype: string
splits:
- name: train
num_bytes: 571127
num_examples: 400
download_size: 327833
dataset_size: 571127
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "german_poems_gpt4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 510 | [
[
-0.037628173828125,
-0.00887298583984375,
0.031768798828125,
0.0299530029296875,
-0.02734375,
-0.0139923095703125,
0.01018524169921875,
-0.01812744140625,
0.040313720703125,
0.02520751953125,
-0.0545654296875,
-0.064208984375,
-0.05645751953125,
-0.007102966... |
yzhuang/autotree_pmlb_100000_phoneme_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T22:51:22.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 74 | 2023-09-07T22:51:02 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 1852400000
num_examples: 100000
- name: validation
num_bytes: 185240000
num_examples: 10000
download_size: 378644599
dataset_size: 2037640000
---
# Dataset Card for "autotree_pmlb_100000_phoneme_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 847 | [
[
-0.02899169921875,
-0.01397705078125,
0.00907135009765625,
0.0308990478515625,
-0.018890380859375,
0.012054443359375,
0.04833984375,
-0.0021457672119140625,
0.05889892578125,
0.026031494140625,
-0.05908203125,
-0.04052734375,
-0.041748046875,
0.0022544860839... |
yzhuang/autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T16:23:06.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 74 | 2023-09-08T16:22:25 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1584979274
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 848 | [
[
-0.0307159423828125,
-0.0137481689453125,
0.0219268798828125,
0.013458251953125,
-0.006832122802734375,
0.01091766357421875,
0.040802001953125,
-0.006702423095703125,
0.05157470703125,
0.0253143310546875,
-0.0491943359375,
-0.046630859375,
-0.04998779296875,
... |
yzhuang/autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T18:21:37.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 74 | 2023-09-08T18:20:52 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1613809341
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 852 | [
[
-0.03668212890625,
-0.0178680419921875,
0.016693115234375,
0.0198974609375,
-0.0153350830078125,
0.007190704345703125,
0.042510986328125,
-0.003246307373046875,
0.05682373046875,
0.025177001953125,
-0.065185546875,
-0.034942626953125,
-0.0467529296875,
0.006... |
yzhuang/autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T18:46:08.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 74 | 2023-09-08T18:45:21 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1611785428
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 849 | [
[
-0.035888671875,
-0.0087432861328125,
0.0207061767578125,
0.018157958984375,
-0.017791748046875,
0.007488250732421875,
0.038482666015625,
-0.00634765625,
0.05865478515625,
0.031463623046875,
-0.0572509765625,
-0.046142578125,
-0.050567626953125,
0.0129241943... |
yzhuang/autotree_automl_100000_heloc_sgosdt_l256_dim10_d3_sd0 | 2023-09-09T04:05:06.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 74 | 2023-09-09T04:04:41 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 447651417
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_heloc_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 847 | [
[
-0.0322265625,
-0.00901031494140625,
0.016571044921875,
0.0225067138671875,
-0.01360321044921875,
0.022796630859375,
0.044586181640625,
-0.004032135009765625,
0.05377197265625,
0.026611328125,
-0.057281494140625,
-0.053375244140625,
-0.054656982421875,
0.004... |
slaqrichi/Cosmic_dataset_V3 | 2023-10-11T16:12:38.000Z | [
"region:us"
] | slaqrichi | null | null | 0 | 74 | 2023-10-11T16:12:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: ID
dtype: int64
- name: requirement
dtype: string
- name: functional process
dtype: string
- name: functional user
dtype: string
- name: sub processes
dtype: string
- name: data groups
dtype: string
splits:
- name: train
num_bytes: 22158.157894736843
num_examples: 17
- name: test
num_bytes: 1303.421052631579
num_examples: 1
- name: valid
num_bytes: 1303.421052631579
num_examples: 1
download_size: 43297
dataset_size: 24765.000000000004
---
# Dataset Card for "Cosmic_dataset_V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 888 | [
[
-0.0380859375,
-0.0135650634765625,
0.031280517578125,
0.02044677734375,
-0.019989013671875,
-0.0093994140625,
0.03363037109375,
-0.0211334228515625,
0.05560302734375,
0.04620361328125,
-0.0667724609375,
-0.050262451171875,
-0.041961669921875,
-0.01429748535... |
chrisgru/commonsense-dialogues | 2023-10-18T18:50:50.000Z | [
"region:us"
] | chrisgru | null | null | 0 | 74 | 2023-10-18T18:49:54 | ---
dataset_info:
features:
- name: system
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 13583083
num_examples: 20176
- name: test
num_bytes: 1415597
num_examples: 2158
- name: validation
num_bytes: 1443693
num_examples: 2157
download_size: 6854218
dataset_size: 16442373
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for "commonsense-dialogues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 745 | [
[
-0.03839111328125,
-0.027099609375,
0.0208740234375,
-0.00014269351959228516,
-0.01177215576171875,
-0.009307861328125,
-0.0073089599609375,
-0.0038394927978515625,
0.046661376953125,
0.0394287109375,
-0.06109619140625,
-0.0504150390625,
-0.029327392578125,
... |
result-kand2-sdxl-wuerst-karlo/4e08d540 | 2023-10-31T12:51:59.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 74 | 2023-10-31T12:51:59 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 207
num_examples: 10
download_size: 1373
dataset_size: 207
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "4e08d540"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.04840087890625,
-0.00366973876953125,
0.0261383056640625,
0.0195159912109375,
-0.0020389556884765625,
-0.00989532470703125,
0.035491943359375,
-0.01751708984375,
0.05902099609375,
0.037994384765625,
-0.05767822265625,
-0.042724609375,
-0.0278167724609375,
... |
result-kand2-sdxl-wuerst-karlo/fc49f34a | 2023-10-31T13:15:14.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 74 | 2023-10-31T13:15:13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 233
num_examples: 10
download_size: 1394
dataset_size: 233
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fc49f34a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.05938720703125,
0.00226593017578125,
0.01454925537109375,
0.0338134765625,
-0.01149749755859375,
0.006320953369140625,
0.0247650146484375,
-0.01119232177734375,
0.051055908203125,
0.0241241455078125,
-0.06951904296875,
-0.03997802734375,
-0.037384033203125,
... |
alexfabbri/answersumm | 2022-12-14T20:18:28.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"query-based-summarization",
"arxiv:2111.06474",
"region:us"
] | alexfabbri | null | null | 3 | 73 | 2022-06-09T14:58:23 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
tags:
- query-based-summarization
---
# Dataset Card for answersumm
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/Alex-Fabbri/AnswerSumm
- **Paper:** [AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization](https://arxiv.org/abs/2111.06474)
- **Point of Contact:** [Alex Fabbri](mailto:afabbri@salesforce.com)
### Dataset Summary
The AnswerSumm dataset is an English-language dataset of questions and answers collected from a [StackExchange data dump](https://archive.org/details/stackexchange). The dataset was created to support the task of query-focused answer summarization with an emphasis on multi-perspective answers.
The dataset consists of over 4200 such question-answer threads annotated by professional linguists and includes over 8700 summaries. We decompose the task into several annotation stages, including sentence selection, sentence clustering, cluster summarization, and overall summarization. For each thread, the annotator writes two summaries: for the first, the annotator is asked to mark sentences for inclusion in the final summary and instructed to stay close to the wording of those sentences rather than abstracting. We have multiple annotators for a subset of the examples in the test set.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data point comprises a question with a `title` field containing the overview of the question and a `question` that elaborates on the title. The answers are sentence tokenized and contain relevance labels, labels for inclusion in the final summary, and cluster labels. We include cluster summaries, overall summaries, and additional metadata.
An example from the AnswerSumm test set looks as follows:
```json
{
  "example_id": "9_24",
  "annotator_id": [1],
  "question": {
    "author": "gaming.stackexchange.com/users/11/Jeffrey",
    "forum": "gaming.stackexchange.com",
    "link": "gaming.stackexchange.com/questions/1",
    "question": "Now that the Engineer update has come, there will be lots of Engineers building up everywhere. How should this best be handled?",
    "question_tags": "<team-fortress-2>",
    "title": "What is a good strategy to deal with lots of engineers turtling on the other team?"
  },
  "answers": [
    {
      "answer_details": {
        "author": "gaming.stackexchange.com/users/44/Corv1nus",
        "score": 49
      },
      "sents": [
        {
          "text": "Lots of medics with lots of ubers on high-damage-dealing classes.",
          "label": [0],
          "label_summ": [0],
          "cluster_id": [[-1]]
        },
        ...
      ]
    },
    ...
  ],
  "summaries": [
    [
      "Demomen usually work best against a sentry farm. Heavies or pyros can also be effective. Medics should be in the frontline to absorb the shock. Build a teleporter to help your team through.",
      "Demomen are best against a sentry farm. Heavies or pyros can also be effective. The medic should lead the uber combo. ..."
    ]
  ],
  "cluster_summaries": [
    "Demomen are best against a sentry farm.",
    "Heavies or pyros can also be effective.",
    ...
  ]
}
```
### Data Fields
- question: contains metadata about the question and forum
- question: the body of the question post
- title: the title of the question post
- question_tags: user-provided question tags
- link: link to the original question
- author: link to the author's user page (as requested by StackExchange's attribution policy)
- answers: list of sentence-tokenized answers
- answer_details: dictionary consisting of link to answer author's user page (author) and community-assigned score (score)
- sents: sentences that compose the answer
- text: the sentence text
- label: a list (to generalize to multi-annotator scenarios) of whether the sentence is labeled as relevant or not for answering the question.
- label_summ: a list of whether the sentence was used to write the first annotator-created summary (that is the first summary in `summaries`)
- cluster_id: a list of lists (potentially multiple annotators and a sentence can be in potentially multiple clusters) of the clusters a sentence belongs to. -1 implies no cluster. This label can be used to aggregate sentences into clusters across answers.
- summaries: list of list of summaries. Each annotator wrote two summaries. The first in the list is the summary in which the instructor was told to mark sentences relevant for inclusion in the summary and then closely use the words of these sentences, while for the second summary the annotator was asked to paraphrase and condense the cluster summaries but was not asked to reduce abstraction.
- annotator_id: a list of the ids of the annotator(s) who completed all tasks related to that thread.
- mismatch_info: a dict of any issues in processing the excel files on which annotations were completed.
- rel_sent_not_in_cluster: list of booleans indicating whether there are sentences that are labeled as relevant but were not included in a cluster.
- cluster_sents_not_matched: list of sentences that were found in a cluster but which our processing script didn't automatically match to sentences in the source answers. If cluster summarization is of interest to the user you may want to process these examples separately using clusters_orig.
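As a concrete illustration, the `cluster_id` labels can be used to aggregate relevant sentences into clusters across answers, as described above. A minimal sketch in plain Python (the toy records below are invented and include only the fields needed for this step):

```python
from collections import defaultdict

def group_by_cluster(answers):
    """Aggregate sentences by cluster id across all answers.

    Each answer is a dict with a "sents" list; each sentence carries its
    text and a list-of-lists "cluster_id" (one inner list per annotator,
    where -1 implies the sentence belongs to no cluster).
    """
    clusters = defaultdict(list)
    for answer in answers:
        for sent in answer["sents"]:
            for annotator_clusters in sent["cluster_id"]:
                for cid in annotator_clusters:
                    if cid != -1:  # skip sentences outside any cluster
                        clusters[cid].append(sent["text"])
    return dict(clusters)

# Invented toy data mimicking the structure of the "answers" field
answers = [
    {"sents": [
        {"text": "Demomen work best against a sentry farm.", "cluster_id": [[0]]},
        {"text": "Lots of medics with lots of ubers.", "cluster_id": [[-1]]},
    ]},
    {"sents": [
        {"text": "Heavies or pyros can also be effective.", "cluster_id": [[1]]},
        {"text": "Stickies take out a whole farm at once.", "cluster_id": [[0]]},
    ]},
]

print(group_by_cluster(answers))
# {0: ['Demomen work best against a sentry farm.', 'Stickies take out a whole farm at once.'],
#  1: ['Heavies or pyros can also be effective.']}
```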
### Data Splits
The data is split into training, validation, and test sets using stratified sampling on the source forums. There are 2783, 500, and 1000 train/validation/test threads, respectively.
## Dataset Creation
### Curation Rationale
AnswerSumm was built to provide a testbed for query-focused summarization of multi-perspective answers. The data collection was designed to tackle multiple subtasks including sentence selection, clustering, cluster summarization, and overall summarization.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering examples based on a whitelist of StackExchange forums which we believed a lay person would be able to summarize. We asked annotators to remove examples which required technical knowledge or additional context beyond what was present in the answers.
#### Who are the source language producers?
The language producers are the users of the StackExchange forums sampled.
### Annotations
#### Annotation process
Please see our [paper](https://arxiv.org/pdf/2111.06474.pdf) for additional annotation details. We began with a pre-pilot of 50 examples, followed by a pilot of 500 and a final annotation of 5000 examples. This release contains the results of the final data collection. We will release the instructions used in data collection.
#### Who are the annotators?
The annotators are professional linguists who were obtained through an internal contractor.
### Personal and Sensitive Information
We did not anonymize the data. We followed the specifications from StackExchange [here](https://archive.org/details/stackexchange) to include author information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop systems that automatically summarize multi-perspective answers. A system that succeeds at this task would be able to summarize many perspectives present in an answer and not limit itself to a single perspective.
### Discussion of Biases
While StackExchange allows for the exchange of information and ideas, hate and harassment may exist on this site. While our annotators did not flag examples in this process, we encourage users of the dataset to reach out with concerns.
We also note that this dataset is limited in its monolingual coverage.
## Additional Information
### Dataset Curators
The dataset was collected by Alex Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, and Mona Diab during work done at Facebook.
### Licensing Information
The data is released under cc-by-sa 4.0 following the original StackExchange [release](https://archive.org/details/stackexchange).
### Citation Information
```bibtex
@misc{fabbri-etal-2022-answersumm,
title={AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization},
author={Alexander R. Fabbri and Xiaojian Wu and Srini Iyer and Haoran Li and Mona Diab },
year={2022},
eprint={2111.06474},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2111.06474}
}
```
| 9,736 | [
[
-0.05908203125,
-0.07476806640625,
0.0249176025390625,
0.01052093505859375,
-0.01337432861328125,
0.0019292831420898438,
-0.01641845703125,
-0.02557373046875,
0.0296173095703125,
0.0455322265625,
-0.043426513671875,
-0.0487060546875,
-0.038543701171875,
0.02... |
Jzuluaga/atcosim_corpus | 2022-12-05T11:14:57.000Z | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"robust-speech-recognition",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2203.16822",
"region:us"
] | Jzuluaga | null | null | 0 | 73 | 2022-11-16T09:04:42 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: segment_start_time
dtype: float32
- name: segment_end_time
dtype: float32
- name: duration
dtype: float32
splits:
- name: test
num_bytes: 471628915.76
num_examples: 1901
- name: train
num_bytes: 1934757106.88
num_examples: 7638
download_size: 0
dataset_size: 2406386022.6400003
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- robust-speech-recognition
- noisy-speech-recognition
- speech-recognition
task_categories:
- automatic-speech-recognition
language:
- en
multilinguality:
- monolingual
---
# Dataset Card for ATCOSIM corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCOSIM homepage](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)
- **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
- **Paper:** [The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech](https://aclanthology.org/L08-1507/)
- **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
### Dataset Summary
The ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English and are pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer ([description here](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)).
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
### Languages and other details
The text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German, or Swiss French as their native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.
## Dataset Structure
### Data Fields
- `id (string)`: a recording identifier for each example.
- `audio (audio)`: audio data for the given ID
- `text (string)`: transcript of the file already normalized. Follow these repositories for more details [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as `segment_end_time - segment_start_time`
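As a small sanity check, the relationship between the timing fields can be reproduced directly (the example values below are made up; real rows come from the corpus itself):

```python
def compute_duration(segment_start_time: float, segment_end_time: float) -> float:
    """Duration of a recording: segment_end_time - segment_start_time."""
    return segment_end_time - segment_start_time

# Made-up example row with the fields described above
row = {"segment_start_time": 0.0, "segment_end_time": 3.2}
duration = compute_duration(row["segment_start_time"], row["segment_end_time"])
print(duration)  # 3.2
```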
## Additional Information
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [ATCOSIM corpus](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html) creators.
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
Authors of the dataset:
```
@inproceedings{hofbauer-etal-2008-atcosim,
title = "The {ATCOSIM} Corpus of Non-Prompted Clean Air Traffic Control Speech",
author = "Hofbauer, Konrad and
Petrik, Stefan and
Hering, Horst",
booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
month = may,
year = "2008",
address = "Marrakech, Morocco",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2008/pdf/545_paper.pdf",
}
```
| 6,026 | [
[
-0.02862548828125,
-0.046051025390625,
-0.0010986328125,
0.0101165771484375,
-0.024505615234375,
0.010040283203125,
-0.0419921875,
-0.039794921875,
0.0146484375,
0.0303802490234375,
-0.0277862548828125,
-0.041107177734375,
-0.049774169921875,
-0.012931823730... |
rahmanfadhil/squad_v2_id | 2023-01-12T11:14:51.000Z | [
"region:us"
] | rahmanfadhil | null | null | 0 | 73 | 2023-01-12T11:01:07 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int32
- name: text
sequence: string
splits:
- name: train
num_bytes: 121632833
num_examples: 130318
- name: validation
num_bytes: 12218827
num_examples: 11858
download_size: 0
dataset_size: 133851660
---
# Dataset Card for "squad_id"
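In the SQuAD format described by the features above, `answer_start` holds character offsets into `context`, so answer strings can be recovered by slicing; an empty `text` sequence marks an unanswerable (SQuAD v2-style) question. A minimal sketch with an invented record:

```python
def extract_answers(context: str, answers: dict) -> list:
    """Recover answer strings from character offsets.

    `answers` follows the schema above: parallel sequences
    `answer_start` (int offsets into `context`) and `text`.
    """
    spans = []
    for start, text in zip(answers["answer_start"], answers["text"]):
        spans.append(context[start:start + len(text)])
    return spans

# Invented example record in the SQuAD format
context = "Jakarta adalah ibu kota Indonesia."
answers = {"answer_start": [24], "text": ["Indonesia"]}
print(extract_answers(context, answers))  # ['Indonesia']
```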
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.03656005859375,
-0.0158233642578125,
0.00774383544921875,
0.0277862548828125,
-0.01172637939453125,
0.0231781005859375,
0.02825927734375,
-0.00878143310546875,
0.052947998046875,
0.0107269287109375,
-0.08917236328125,
-0.052825927734375,
-0.0285491943359375,
... |
AnonymousSub/MedQuAD_47441_Question_Answer_Pairs | 2023-03-09T15:02:29.000Z | [
"region:us"
] | AnonymousSub | null | null | 4 | 73 | 2023-03-09T15:02:27 | ---
dataset_info:
features:
- name: Questions
dtype: string
- name: Answers
dtype: string
splits:
- name: train
num_bytes: 24216623
num_examples: 47441
download_size: 9258859
dataset_size: 24216623
---
# Dataset Card for "MedQuAD_47441_Question_Answer_Pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 421 | [
[
-0.044189453125,
-0.0289306640625,
0.0174560546875,
0.00677490234375,
-0.024444580078125,
-0.00662994384765625,
0.021820068359375,
-0.0029315948486328125,
0.0576171875,
0.0433349609375,
-0.05230712890625,
-0.0413818359375,
-0.0285797119140625,
-0.00691604614... |
zjkarina/matreshka | 2023-05-13T15:38:52.000Z | [
"task_categories:conversational",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ru",
"license:cc-by-4.0",
"region:us"
] | zjkarina | null | null | 10 | 73 | 2023-05-07T20:31:03 | ---
dataset_info:
features:
- name: role
sequence: string
- name: dialog
sequence: string
- name: persona
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 7320311
num_examples: 6655
- name: validation
num_bytes: 1806432
num_examples: 1664
download_size: 4092810
dataset_size: 9126743
language:
- ru
pretty_name: matreshka
size_categories:
- 1K<n<10K
task_categories:
- conversational
- summarization
- text-generation
license: cc-by-4.0
---
# Dataset Card for "matreshka"

(image generated by Kandinsky-2.1 neural network)
Russian dialogues generated by GPT-3.5 starting with the first phrase given in the prompt, together with the persona of the first interlocutor and a summary of each dialogue.
The matreshka dataset is a multi-task dataset: you can use it for summarizing a dialogue or for generating one. It contains everyday dialogues and is also filled with facts about the world. The dataset was designed to give the interlocutor a human manner of communication.
After generation, some data did not match the requested format, so we cleaned it with regular expressions. We then checked each line for the correct data type and converted it to the correct format where necessary.
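A rough sketch of that cleaning step (the field names follow the schema above, but the regular expression and the example lines are hypothetical, not the patterns actually used):

```python
import re

def clean_line(line: str) -> str:
    """Strip a hypothetical 'Speaker:' prefix and surrounding whitespace."""
    return re.sub(r"^\s*\w+\s*:\s*", "", line).strip()

def validate_record(record: dict) -> bool:
    """Check that each field has the expected type (schema as in the card)."""
    expected = {"role": list, "dialog": list, "persona": str, "summary": str}
    return all(isinstance(record.get(k), t) for k, t in expected.items())

record = {
    "role": ["A", "B"],
    "dialog": [clean_line("A: Привет!"), clean_line("B: Привет, как дела?")],
    "persona": "...",
    "summary": "...",
}
print(validate_record(record))  # True
```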
authors' telegram channels: [@nadlskom](https://t.me/nadlskom), [@lovedeathtransformers](https://t.me/lovedeathtransformers) | 1,518 | [
[
-0.0245819091796875,
-0.03411865234375,
0.018707275390625,
0.005374908447265625,
-0.050933837890625,
0.01190948486328125,
-0.0026569366455078125,
-0.00033974647521972656,
0.03045654296875,
0.04400634765625,
-0.07781982421875,
-0.042205810546875,
-0.0383911132812... |
WiktorS/polish-news | 2023-06-05T20:57:34.000Z | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:pl",
"license:apache-2.0",
"region:us"
] | WiktorS | null | null | 5 | 73 | 2023-05-22T20:14:43 | ---
license: apache-2.0
task_categories:
- text-classification
- summarization
- text-generation
language:
- pl
size_categories:
- 100K<n<1M
---
This dataset contains more than 250k articles obtained from the Polish news site `tvp.info.pl`.
The main purpose of collecting the data was to create a transformer-based model for text summarization.
Columns:
* `link` - link to article
* `title` - original title of the article
* `headline` - lead/headline of the article - first paragraph of the article visible directly from the page
* `content` - full textual contents of the article
Link to original repo: https://github.com/WiktorSob/scraper-tvp
Download the data:
```python
from datasets import load_dataset
dataset = load_dataset("WiktorS/polish-news")
``` | 759 | [
[
-0.03680419921875,
-0.020843505859375,
0.013397216796875,
0.0162353515625,
-0.0275115966796875,
-0.01137542724609375,
-0.0106964111328125,
-0.004150390625,
0.0184326171875,
0.045623779296875,
-0.03533935546875,
-0.0289306640625,
-0.037994384765625,
0.0369567... |
RiTA-nlp/ITALIC | 2023-06-29T12:58:56.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:intent-classification",
"annotations_creators:crowdsourced",
"language_creators:Italian",
"license:cc-by-nc-nd-4.0",
"arxiv:2204.08582",
"arxiv:2306.08502",
"region:us"
] | RiTA-nlp | ITALIC is a dataset of Italian audio recordings and contains annotation for utterance transcripts and associated intents.
The ITALIC dataset was created through a custom web platform, utilizing both native and non-native Italian speakers as participants.
The participants were required to record themselves while reading a randomly sampled short text from the MASSIVE dataset. | @article{koudounas2023italic,
title={ITALIC: An Italian Intent Classification Dataset},
author={Koudounas, Alkis and La Quatra, Moreno and Vaiani, Lorenzo and Colomba, Luca and Attanasio, Giuseppe and Pastor, Eliana and Cagliero, Luca and Baralis, Elena},
journal={arXiv preprint arXiv:2306.08502},
year={2023}
} | 2 | 73 | 2023-06-13T21:03:20 | ---
pretty_name: ITALIC
annotations_creators:
- crowdsourced
language_creators:
- Italian
language_bcp47:
- it
size_categories:
it: 10K<n<100K
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- intent-classification
license: cc-by-nc-nd-4.0
---
# Dataset Card for ITALIC: An ITALian Intent Classification Dataset
ITALIC is an intent classification dataset for the Italian language, which is the first of its kind.
It includes spoken and written utterances and is annotated with 60 intents.
The dataset is available on [Zenodo](https://zenodo.org/record/8040649) and connectors are available for the [HuggingFace Hub](https://huggingface.co/datasets/RiTA-nlp/ITALIC).
### Latest Updates
- **June 15th, 2023**: ITALIC dataset has been released on [Zenodo](https://zenodo.org/record/8040649): https://zenodo.org/record/8040649.
## Table of Contents
- [Data collection](#data-collection)
- [Dataset](#dataset)
- [Usage](#usage)
- [Models used in the paper](#models-used-in-the-paper)
- [SLU intent classification](#slu-intent-classification)
- [ASR](#asr)
- [NLU intent classification](#nlu-intent-classification)
- [Citation](#citation)
- [License](#license)
## Data collection
The data collection follows the MASSIVE NLU dataset which contains an annotated textual dataset for 60 intents. The data collection process is described in the paper [Massive Natural Language Understanding](https://arxiv.org/abs/2204.08582).
Following the MASSIVE NLU dataset, a pool of 70+ volunteers was recruited to annotate the dataset. The volunteers were asked to record their voice while reading the utterances (the original text is available in the MASSIVE dataset). Together with the audio, the volunteers were asked to provide a self-annotated description of the recording conditions (e.g., background noise, recording device). The audio recordings have also been validated and, in case of errors, re-recorded by the volunteers.
All the audio recordings included in the dataset have been validated by at least two volunteers, all of them native Italian speakers (self-annotated).
## Dataset
The dataset is available on the [Zenodo](https://zenodo.org/record/8040649). It is composed of 3 different splits:
- `massive`: all the utterances are randomly shuffled and divided into 3 splits (train, validation, test).
- `hard_speaker`: the utterances are divided into 3 splits (train, validation, test) based on the speaker. Each split only contains utterances from a pool of speakers that do not overlap with the other splits.
- `hard_noisy`: the utterances are divided into 3 splits (train, validation, test) based on the recording conditions. The test split only contains utterances with the highest level of noise.
Each split contains the following annotations:
- `utt`: the original text of the utterance.
- `audio`: the audio recording of the utterance.
- `intent`: the intent of the utterance.
- `speaker`: the speaker of the utterance. The speaker is identified by a unique identifier and has been anonymized.
- `age`: the age of the speaker.
- `is_native`: whether the speaker is a native italian speaker or not.
- `gender`: the gender of the speaker (self-annotated).
- `region`: the region of the speaker (self-annotated).
- `nationality`: the nationality of the speaker (self-annotated).
- `lisp`: any kind of lisp of the speaker (self-annotated). It can be empty in case of no lisp.
- `education`: the education level of the speaker (self-annotated).
- `environment`: the environment of the recording (self-annotated).
- `device`: the device used for the recording (self-annotated).
## Usage
The dataset can be loaded using the `datasets` library. You need to install the following dependencies:
```bash
pip install datasets
pip install librosa
pip install soundfile
```
Then, you can load the dataset as follows:
```python
from datasets import load_dataset
# Please be sure to use use_auth_token=True and to set the access token
# using huggingface-cli login
# or follow https://huggingface.co/docs/hub/security-tokens
# configs "hard_speaker" and "hard_noisy" are also available (to substitute "massive")
italic = load_dataset("RiTA-nlp/ITALIC", "massive", use_auth_token=True)
italic_train = italic["train"]
italic_valid = italic["validation"]
italic_test = italic["test"]
```
The dataset has been designed for intent classification tasks. The `intent` column can be used as the label. However, the dataset can be used for other tasks as well.
- **Intent classification**: the `intent` column can be used as the label.
- **Speaker identification**: the `speaker` column can be used as the label.
- **Automatic speech recognition**: the `utt` column can be used as the label.
- **Accent identification**: the `region` column can be used as the label.
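As a small illustration of the intent classification setup, the `intent` column can be mapped to integer labels for a classification head. The rows below are made-up stand-ins for real dataset examples (the intent names are illustrative only):

```python
# Toy stand-in rows; the real dataset provides these fields per example.
rows = [
    {"utt": "accendi la luce", "intent": "iot_hue_lighton"},
    {"utt": "che tempo fa domani", "intent": "weather_query"},
    {"utt": "spegni la luce", "intent": "iot_hue_lightoff"},
]

# Build a stable intent -> id mapping, as a classification head expects.
intents = sorted({r["intent"] for r in rows})
label2id = {name: i for i, name in enumerate(intents)}
id2label = {i: name for name, i in label2id.items()}

labels = [label2id[r["intent"]] for r in rows]
```

The same mapping can be reused at inference time via `id2label` to turn predicted class indices back into intent names.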
For more information about the dataset, please refer to the [paper](https://arxiv.org/abs/2306.08502).
## Models used in the paper
### Hardware settings
All experiments were conducted on a private workstation with Intel Core i9-10980XE CPU, 1 $\times$ NVIDIA RTX A6000 GPU, 64 GB of RAM running Ubuntu 22.04 LTS.
### Parameter settings
The parameters used for the training of the models are set to allow a fair comparison between the different models and to follow the recommendations of the related literature. The parameters are summarized in the following table:
| Model | Task | Parameters | Learning rate | Batch size | Max epochs | Warmup | Weight decay | Avg. training time | Avg. inference time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| facebook/wav2vec2-xls-r-300m | SLU | 300M | 1e-4 | 128 | 30 | 0.1 ratio | 0.01 | 9m 35s per epoch | 13ms per sample |
| facebook/wav2vec2-xls-r-1b | SLU | 1B | 1e-4 | 32 | 30 | 0.1 ratio | 0.01 | 21m 30s per epoch | 29ms per sample |
| jonatasgrosman/wav2vec2-large-xlsr-53-italian | SLU | 300M | 1e-4 | 128 | 30 | 0.1 ratio | 0.01 | 9m 35s per epoch | 13ms per sample |
| jonatasgrosman/wav2vec2-xls-r-1b-italian | SLU | 1B | 1e-4 | 32 | 30 | 0.1 ratio | 0.01 | 21m 30s per epoch | 29ms per sample |
| ALM/whisper-it-small-augmented | ASR | 224M | 1e-5 | 8 | 5 | 500 steps | 0.01 | 26m 30s per epoch | 25ms per sample |
| EdoAbati/whisper-medium-it-2 | ASR | 769M | 1e-5 | 8 | 5 | 500 steps | 0.01 | 49m per epoch | 94ms per sample |
| EdoAbati/whisper-large-v2-it | ASR | 1.5B | 1e-5 | 8 | 5 | 500 steps | 0.01 | 1h 17m per epoch | 238ms per sample |
| bert-base-multilingual-uncased | NLU | 167M | 5e-5 | 8 | 5 | 500 steps | 0.01 | 1m 22s per epoch | 1.5ms per sample |
| facebook/mbart-large-cc25 | NLU | 611M | 5e-5 | 8 | 5 | 500 steps | 0.01 | 7m 53s per epoch | 4.7ms per sample |
| dbmdz/bert-base-italian-xxl-uncased | NLU | 110M | 5e-5 | 8 | 5 | 500 steps | 0.01 | 1m 30s per epoch | 1.4ms per sample |
| morenolq/bart-it | NLU | 141M | 5e-5 | 8 | 5 | 500 steps | 0.01 | 1m 54s per epoch | 1.9 ms per sample |
In all cases, we opted for the AdamW optimizer. All experiments were run on a single NVIDIA A6000 GPU.
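The "0.1 ratio" warmup in the table means the learning rate ramps up linearly over the first 10% of training steps before decaying. A minimal sketch of such a schedule (a simplified linear warmup + linear decay, not necessarily the exact scheduler used in the experiments):

```python
def lr_at_step(step, total_steps, peak_lr, warmup_ratio=0.1):
    """Linear warmup to peak_lr over warmup_ratio of training, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp up linearly from 0 to peak_lr.
        return peak_lr * step / max(1, warmup_steps)
    # Decay linearly over the remaining steps.
    remaining = total_steps - warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / max(1, remaining))
```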
### SLU intent classification
The models used in the paper are available on the [Hugging Face Hub](https://huggingface.co/models).
- 🌍 [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
- 🌍 [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b)
- 🇮🇹 [jonatasgrosman/wav2vec2-xls-r-1b-italian](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-italian)
- 🇮🇹 [jonatasgrosman/wav2vec2-large-xlsr-53-italian](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian)
### ASR
The models used in the paper are available on the [Hugging Face Hub](https://huggingface.co/models).
- 🌍 Whisper large (zero-shot ASR): [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2)
- 🇮🇹 Whisper small: [ALM/whisper-it-small-augmented](https://huggingface.co/ALM/whisper-it-small-augmented)
- 🇮🇹 Whisper medium: [EdoAbati/whisper-medium-it-2](https://huggingface.co/EdoAbati/whisper-medium-it-2)
- 🇮🇹 Whisper large: [EdoAbati/whisper-large-v2-it](https://huggingface.co/EdoAbati/whisper-large-v2-it)
### NLU intent classification
The models used in the paper are available on the [Hugging Face Hub](https://huggingface.co/models).
- 🌍 [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)
- 🌍 [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- 🇮🇹 [dbmdz/bert-base-italian-xxl-uncased](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased)
- 🇮🇹 [morenolq/bart-it](https://huggingface.co/morenolq/bart-it)
## Citation
If you use this dataset in your research, please cite the following paper (**Interspeech 2023** version is coming soon after the proceedings are published):
```bibtex
@article{koudounas2023italic,
title={ITALIC: An Italian Intent Classification Dataset},
author={Koudounas, Alkis and La Quatra, Moreno and Vaiani, Lorenzo and Colomba, Luca and Attanasio, Giuseppe and Pastor, Eliana and Cagliero, Luca and Baralis, Elena},
journal={arXiv preprint arXiv:2306.08502},
year={2023}
}
```
## License
The dataset is licensed under the [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
- [Paper describing the dataset and initial experiments](https://arxiv.org/abs/2306.08502)
- [Dataset on Zenodo](https://zenodo.org/record/8040649)
- [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)
| 9,572 | [
[
-0.033294677734375,
-0.043212890625,
0.0219879150390625,
0.0236663818359375,
-0.01244354248046875,
-0.026611328125,
-0.033294677734375,
-0.044158935546875,
0.032470703125,
0.0228729248046875,
-0.05181884765625,
-0.05999755859375,
-0.034149169921875,
0.007354... |
maloyan/wikipedia-22-12-en-embeddings-all-MiniLM-L6-v2 | 2023-07-19T02:57:15.000Z | [
"region:us"
] | maloyan | null | null | 0 | 73 | 2023-07-19T01:17:00 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: wiki_id
dtype: int32
- name: views
dtype: float32
- name: paragraph_id
dtype: int32
- name: langs
dtype: int32
- name: emb
sequence: float32
splits:
- name: train
num_bytes: 72128274660
num_examples: 35167920
download_size: 85877047254
dataset_size: 72128274660
---
# Dataset Card for "wikipedia-22-12-en-embeddings-all-MiniLM-L6-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 679 | [
[
-0.042938232421875,
-0.0260162353515625,
0.017578125,
-0.0014371871948242188,
-0.027008056640625,
-0.0162353515625,
0.01496124267578125,
-0.00904083251953125,
0.05377197265625,
0.01532745361328125,
-0.063232421875,
-0.0518798828125,
-0.040802001953125,
-0.00... |
katielink/healthsearchqa | 2023-08-24T21:40:08.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"medical",
"arxiv:2212.13138",
"region:us"
] | katielink | null | null | 1 | 73 | 2023-08-04T15:30:30 | ---
license: unknown
task_categories:
- question-answering
language:
- en
tags:
- medical
configs:
- config_name: all_data
data_files: all.csv
- config_name: 140_question_subset
data_files: multimedqa140_subset.csv
size_categories:
- 1K<n<10K
---
# HealthSearchQA
Dataset of consumer health questions released by Google for the Med-PaLM paper ([arXiv preprint](https://arxiv.org/abs/2212.13138)).
From the [paper](https://www.nature.com/articles/s41586-023-06291-2):
We curated our own additional dataset consisting of 3,173 commonly searched consumer questions,
referred to as HealthSearchQA. The dataset was curated using seed medical conditions and their
associated symptoms. We used the seed data to retrieve publicly-available commonly searched questions
generated by a search engine, which were displayed to all users entering the seed terms. We publish the
dataset as an open benchmark for answering medical questions from consumers and hope this will be a useful
resource for the community, as a dataset reflecting real-world consumer concerns.
**Format:** Question only, free text response, open domain.
**Size:** 3,173.
**Example question:** How serious is atrial fibrillation?
**Example question:** What kind of cough comes with Covid?
**Example question:** Is blood in phlegm serious? | 1,310 | [
[
-0.026611328125,
-0.0557861328125,
0.04547119140625,
-0.0089263916015625,
-0.017547607421875,
-0.0206451416015625,
0.01367950439453125,
-0.053680419921875,
0.0484619140625,
0.059356689453125,
-0.044891357421875,
-0.0301971435546875,
-0.041900634765625,
-0.00... |
indiejoseph/yue-zh-translation | 2023-10-08T20:52:38.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:yue",
"language:zh",
"license:cc-by-4.0",
"region:us"
] | indiejoseph | null | null | 1 | 73 | 2023-08-28T10:19:35 | ---
language:
- yue
- zh
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- translation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: translation
struct:
- name: yue
dtype: string
- name: zh
dtype: string
splits:
- name: train
num_bytes: 16446012
num_examples: 169949
- name: test
num_bytes: 4107525
num_examples: 42361
download_size: 15755469
dataset_size: 20553537
---
This dataset is comprised of:
1. Crawled content that is machine translated from Cantonese to Simplified Chinese.
2. Machine-translated articles from zh-yue.wikipedia.org
3. [botisan-ai/cantonese-mandarin-translations](https://huggingface.co/datasets/botisan-ai/cantonese-mandarin-translations)
4. [AlienKevin/LIHKG](https://huggingface.co/datasets/AlienKevin/LIHKG)
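Each row stores a `translation` struct with `yue` and `zh` keys (per the schema above). A small sketch of flattening rows into parallel source/target lists, using made-up toy pairs in place of real dataset rows:

```python
# Toy rows mirroring the dataset's `translation` struct schema.
rows = [
    {"translation": {"yue": "你喺邊度呀?", "zh": "你在哪里?"}},
    {"translation": {"yue": "我哋食飯先。", "zh": "我们先吃饭。"}},
]

# Parallel lists, as typically fed to a seq2seq translation trainer.
sources = [r["translation"]["yue"] for r in rows]
targets = [r["translation"]["zh"] for r in rows]
```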
| 921 | [
[
-0.00933837890625,
-0.0307464599609375,
0.0163116455078125,
0.0219268798828125,
-0.011260986328125,
-0.0122222900390625,
-0.00867462158203125,
-0.024383544921875,
0.0408935546875,
0.056121826171875,
-0.0538330078125,
-0.050140380859375,
-0.032928466796875,
0... |
jphme/ger_micro_benchmark | 2023-09-18T21:15:10.000Z | [
"region:us"
] | jphme | null | null | 0 | 73 | 2023-09-18T20:55:08 | ---
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: subject
dtype: string
splits:
- name: eval
num_bytes: 69430
num_examples: 200
download_size: 39957
dataset_size: 69430
---
# Dataset Card for "ger_micro_benchmark"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 509 | [
[
-0.0631103515625,
-0.0265960693359375,
0.0168304443359375,
-0.005207061767578125,
-0.00865936279296875,
-0.0084075927734375,
0.0177764892578125,
-0.006420135498046875,
0.045684814453125,
0.01032257080078125,
-0.050445556640625,
-0.051025390625,
-0.03010559082031... |
SonMide/Cbuddy | 2023-11-02T10:40:11.000Z | [
"region:us"
] | SonMide | null | null | 0 | 73 | 2023-10-13T09:52:08 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.035064697265625,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.01702880859375,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283203... |
ostapeno/platy_icl5_maxD1000000_maxC1000000_prmt10_1 | 2023-10-13T11:32:12.000Z | [
"region:us"
] | ostapeno | null | null | 0 | 73 | 2023-10-13T11:32:02 | ## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 1000000
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 1
## inverse_template: 0
| 341 | [
[
-0.0382080078125,
-0.025299072265625,
0.02630615234375,
0.034423828125,
-0.0280914306640625,
-0.0214996337890625,
-0.0043182373046875,
0.01434326171875,
-0.00934600830078125,
0.033935546875,
-0.06329345703125,
-0.0382080078125,
-0.0272064208984375,
0.0182342... |
Rewcifer/radio-llama2-resp_tag_90pct | 2023-10-21T01:46:59.000Z | [
"region:us"
] | Rewcifer | null | null | 0 | 73 | 2023-10-21T01:46:42 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1109388970
num_examples: 222141
download_size: 255573571
dataset_size: 1109388970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "radio-llama2-resp_tag_90pct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 465 | [
[
-0.04058837890625,
0.0110321044921875,
0.022705078125,
0.03082275390625,
-0.0284576416015625,
0.00634002685546875,
0.007205963134765625,
-0.01385498046875,
0.060638427734375,
0.0189971923828125,
-0.061553955078125,
-0.043792724609375,
-0.034088134765625,
-0.... |
chiayewken/bamboogle | 2023-10-27T09:22:40.000Z | [
"arxiv:2210.03350",
"region:us"
] | chiayewken | null | null | 0 | 73 | 2023-10-27T08:15:25 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
splits:
- name: test
num_bytes: 10747
num_examples: 125
download_size: 8383
dataset_size: 10747
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Bamboogle
This repo contains the data for ["Measuring and Narrowing the Compositionality Gap in Language Models" paper](https://arxiv.org/abs/2210.03350).
The original data link is here: https://docs.google.com/spreadsheets/d/1jwcsA5kE4TObr9YHn9Gc-wQHYjTbLhDGx6tmIzMhl_U/edit?usp=sharing
This dataset is distributed with the MIT license. | 645 | [
[
-0.01113128662109375,
-0.059844970703125,
0.03448486328125,
0.010833740234375,
-0.0007085800170898438,
0.0096588134765625,
-0.015472412109375,
-0.02166748046875,
0.0032634735107421875,
0.040191650390625,
-0.05389404296875,
-0.05535888671875,
-0.0474853515625,
... |
ProjectXMP/aiwingman-c-programming-v2-mistral | 2023-10-31T16:37:06.000Z | [
"region:us"
] | ProjectXMP | null | null | 0 | 73 | 2023-10-31T16:36:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
metaeval/recast | 2023-06-02T14:40:17.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"nli",
"natural-lan... | metaeval | A diverse collection of tasks recasted as natural language inference tasks. | null | 0 | 72 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'recast_nli'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
tags:
- nli
- natural-language-inference
---
http://decomp.io/ | 370 | [
[
-0.031982421875,
-0.0440673828125,
0.039154052734375,
0.005199432373046875,
-0.040130615234375,
0.01416015625,
0.0180816650390625,
-0.01358795166015625,
0.046783447265625,
0.041534423828125,
-0.060455322265625,
-0.04852294921875,
-0.0064849853515625,
-0.0048... |
strombergnlp/nordic_langid | 2022-10-25T21:42:02.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:da",
"language:nn",
"language:nb",
"language:fo",
"language:is",
"language:sv",
"license:cc-by-sa... | strombergnlp | Automatic language identification is a challenging problem. Discriminating
between closely related languages is especially difficult. This paper presents
a machine learning approach for automatic language identification for the
Nordic languages, which often suffer miscategorisation by existing
state-of-the-art tools. Concretely we will focus on discrimination between six
Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål),
Faroese and Icelandic.
This is the data for the tasks. Two variants are provided: 10K and 50K, with
holding 10,000 and 50,000 examples for each language respectively. | @inproceedings{haas-derczynski-2021-discriminating,
title = "Discriminating Between Similar Nordic Languages",
author = "Haas, Ren{\'e} and
Derczynski, Leon",
booktitle = "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.vardial-1.8",
pages = "67--75",
} | 3 | 72 | 2022-05-10T17:27:03 | ---
annotations_creators:
- found
language_creators:
- found
language:
- da
- nn
- nb
- fo
- is
- sv
license:
- cc-by-sa-3.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: nordic-langid
pretty_name: Nordic Language ID for Distinguishing between Similar Languages
tags:
- language-identification
---
# Dataset Card for nordic_langid
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/NordicDSL](https://github.com/StrombergNLP/NordicDSL)
- **Repository:** [https://github.com/StrombergNLP/NordicDSL](https://github.com/StrombergNLP/NordicDSL)
- **Paper:** [https://aclanthology.org/2021.vardial-1.8/](https://aclanthology.org/2021.vardial-1.8/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [René Haas](mailto:renha@itu.dk)
### Dataset Summary
Automatic language identification is a challenging problem. Discriminating
between closely related languages is especially difficult. This paper presents
a machine learning approach for automatic language identification for the
Nordic languages, which often suffer miscategorisation by existing
state-of-the-art tools. Concretely we will focus on discrimination between six
Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål),
Faroese and Icelandic.
This is the data for the tasks. Two variants are provided: 10K and 50K, with
holding 10,000 and 50,000 examples for each language respectively.
For more info, see the paper: [Discriminating Between Similar Nordic Languages](https://aclanthology.org/2021.vardial-1.8/).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
This dataset is in six similar Nordic languages:
- Danish, `da`
- Faroese, `fo`
- Icelandic, `is`
- Norwegian Bokmål, `nb`
- Norwegian Nynorsk, `nn`
- Swedish, `sv`
## Dataset Structure
The dataset has two parts, one with 10K samples per language and another with 50K per language.
The original splits and data allocation used in the paper is presented here.
### Data Instances
[Needs More Information]
### Data Fields
- `id`: the sentence's unique identifier, `string`
- `sentence`: the text to be classified, a `string`
- `language`: the class, one of `da`, `fo`, `is`, `nb`, `nn`, `sv`.
### Data Splits
Train and Test splits are provided, divided using the code provided with the paper.
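To illustrate the classification task on this kind of data, here is a toy character-trigram profile classifier. It is a deliberately simplified stand-in for the models evaluated in the paper, and the example sentences are made up:

```python
from collections import Counter

def trigrams(text):
    """Character trigram counts, with padding so word boundaries are captured."""
    text = f"  {text.lower()}  "
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

# Tiny made-up training profiles per language label.
train = {
    "da": "jeg hedder Lars og jeg kommer fra Danmark",
    "sv": "jag heter Lars och jag kommer från Sverige",
    "is": "ég heiti Lars og ég kem frá Íslandi",
}
profiles = {lang: trigrams(text) for lang, text in train.items()}

def classify(sentence):
    grams = trigrams(sentence)
    # Score by overlapping trigram counts; the highest overlap wins.
    def score(lang):
        return sum(min(c, profiles[lang][g]) for g, c in grams.items())
    return max(profiles, key=score)
```

Character n-grams are a classic signal for discriminating closely related languages, since they pick up on orthographic cues such as `å` vs. `á` or `jag` vs. `jeg`.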
## Dataset Creation
### Curation Rationale
Data is taken from Wikipedia and Tatoeba from each of these six languages.
### Source Data
#### Initial Data Collection and Normalization
**Data collection** Data was scraped from Wikipedia. We downloaded summaries for randomly chosen Wikipedia
articles in each of the languages, saved as raw text
to six .txt files of about 10MB each.
The 50K section is extended with Tatoeba data, which provides a different register to Wikipedia text, and then topped up with more Wikipedia data.
**Extracting Sentences** The first pass in sentence
tokenisation is splitting by line breaks. We then extract shorter sentences with the sentence tokenizer
(sent_tokenize) function from NLTK (Loper
and Bird, 2002). This does a better job than just
splitting by ’.’ due to the fact that abbreviations,
which can appear in a legitimate sentence, typically
include a period symbol.
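A sketch of that two-pass extraction is below, using a naive regex splitter in place of NLTK's `sent_tokenize`, so this is a simplification of the paper's pipeline (it would mis-split on abbreviations, which is exactly what `sent_tokenize` avoids):

```python
import re

def extract_sentences(raw_text):
    sentences = []
    # First pass: split on line breaks.
    for line in raw_text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Second pass: naive split after ., ! or ? followed by whitespace.
        for sent in re.split(r"(?<=[.!?])\s+", line):
            if sent:
                sentences.append(sent)
    return sentences
```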
**Cleaning characters** The initial data set has
many characters that do not belong to the alphabets of the languages we work with. Often the
Wikipedia pages for people or places contain names
in foreign languages. For example a summary
might contain Chinese or Russian characters which
are not strong signals for the purpose of discriminating between the target languages.
Further, it can be that some characters in the
target languages are mis-encoded. These misencodings are also not likely to be intrinsically
strong or stable signals.
To simplify feature extraction, and to reduce the
size of the vocabulary, the raw data is converted
to lowercase and stripped of all characters which
are not part of the standard alphabet of the six
languages using a character whitelist.
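A minimal sketch of that cleaning step follows. The exact whitelist used in the paper is not specified here, so the alphabet below is an assumption covering common letters of the six languages:

```python
# Hypothetical whitelist: basic Latin letters, common Nordic letters, and space.
WHITELIST = set("abcdefghijklmnopqrstuvwxyzáäåæðéíóöøúýþ ")

def clean(text):
    text = text.lower()
    kept = (ch for ch in text if ch in WHITELIST)
    # Collapse runs of whitespace left behind by removed characters.
    return " ".join("".join(kept).split())
```

Applied to a string with foreign characters and digits, only the whitelisted letters survive, which keeps the vocabulary small and the features stable.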
#### Who are the source language producers?
The source language is from Wikipedia contributors and Tatoeba contributors.
### Annotations
#### Annotation process
The annotations were found.
#### Who are the annotators?
The annotations were found. They are determined by which language section a contributor posts their content to.
### Personal and Sensitive Information
The data hasn't been checked for PII, and is already all public. Tatoeba is based on translations of synthetic conversational turns and is unlikely to bear personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to help correctly identify content in the languages of six minority languages. Existing systems often confuse these, especially Bokmål and Danish or Icelandic and Faroese. However, some dialects are missed (for example Bornholmsk) and the closed nature of the classification task thus excludes speakers of these languages without recognising their existence.
### Discussion of Biases
The text comes from only two genres, so might not transfer well to other domains.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The data here is licensed CC-BY-SA 3.0. If you use this data, you MUST state its origin.
### Citation Information
```
@inproceedings{haas-derczynski-2021-discriminating,
title = "Discriminating Between Similar Nordic Languages",
author = "Haas, Ren{\'e} and
Derczynski, Leon",
booktitle = "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.vardial-1.8",
pages = "67--75",
}
```
| 7,007 | [
[
-0.04376220703125,
-0.045867919921875,
-0.00004112720489501953,
0.01412200927734375,
-0.038604736328125,
0.00927734375,
-0.037139892578125,
-0.04559326171875,
0.033294677734375,
0.0311126708984375,
-0.032012939453125,
-0.064453125,
-0.039520263671875,
0.0428... |
shivi/cheques_sample_data | 2022-11-05T21:31:01.000Z | [
"region:us"
] | shivi | null | null | 0 | 72 | 2022-10-15T22:25:47 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 7518544.0
num_examples: 400
- name: train
num_bytes: 56481039.4
num_examples: 2800
- name: validation
num_bytes: 15034990.0
num_examples: 800
download_size: 58863727
dataset_size: 79034573.4
---
# Dataset Card for "cheques_sample_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 540 | [
[
-0.031982421875,
-0.0125732421875,
0.0128326416015625,
-0.005619049072265625,
-0.0189971923828125,
-0.0016603469848632812,
0.035919189453125,
-0.01194000244140625,
0.056304931640625,
0.0478515625,
-0.06298828125,
-0.049896240234375,
-0.0382080078125,
-0.0149... |