id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
DFKI-SLT/fabner | 2023-04-05T23:20:21.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"manufacturing",
"2000-2020",
"region:us"
] | DFKI-SLT | FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.
It is a collection of abstracts obtained from Web of Science through known journals available in manufacturing process
science research.
For every word, entity labels were defined in the following categories: Material (MATE), Manufacturing Process (MANP),
Machine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),
Parameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS), and
BioMedical (BIOP). Annotation was performed in all categories, with output tags in 'BIOES' format:
B=Beginning, I=Intermediate, O=Outside, E=End, S=Single. | @article{DBLP:journals/jim/KumarS22,
author = {Aman Kumar and
Binil Starly},
title = {"FabNER": information extraction from manufacturing process science
domain literature using named entity recognition},
journal = {J. Intell. Manuf.},
volume = {33},
number = {8},
pages = {2393--2407},
year = {2022},
url = {https://doi.org/10.1007/s10845-021-01807-x},
doi = {10.1007/s10845-021-01807-x},
timestamp = {Sun, 13 Nov 2022 17:52:57 +0100},
biburl = {https://dblp.org/rec/journals/jim/KumarS22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 16 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: FabNER is a manufacturing text dataset for Named Entity Recognition.
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- manufacturing
- 2000-2020
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
- config_name: fabner
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-MATE
'2': I-MATE
'3': O-MATE
'4': E-MATE
'5': S-MATE
'6': B-MANP
'7': I-MANP
'8': O-MANP
'9': E-MANP
'10': S-MANP
'11': B-MACEQ
'12': I-MACEQ
'13': O-MACEQ
'14': E-MACEQ
'15': S-MACEQ
'16': B-APPL
'17': I-APPL
'18': O-APPL
'19': E-APPL
'20': S-APPL
'21': B-FEAT
'22': I-FEAT
'23': O-FEAT
'24': E-FEAT
'25': S-FEAT
'26': B-PRO
'27': I-PRO
'28': O-PRO
'29': E-PRO
'30': S-PRO
'31': B-CHAR
'32': I-CHAR
'33': O-CHAR
'34': E-CHAR
'35': S-CHAR
'36': B-PARA
'37': I-PARA
'38': O-PARA
'39': E-PARA
'40': S-PARA
'41': B-ENAT
'42': I-ENAT
'43': O-ENAT
'44': E-ENAT
'45': S-ENAT
'46': B-CONPRI
'47': I-CONPRI
'48': O-CONPRI
'49': E-CONPRI
'50': S-CONPRI
'51': B-MANS
'52': I-MANS
'53': O-MANS
'54': E-MANS
'55': S-MANS
'56': B-BIOP
'57': I-BIOP
'58': O-BIOP
'59': E-BIOP
'60': S-BIOP
splits:
- name: train
num_bytes: 4394010
num_examples: 9435
- name: validation
num_bytes: 934347
num_examples: 2183
- name: test
num_bytes: 940136
num_examples: 2064
download_size: 3793613
dataset_size: 6268493
- config_name: fabner_bio
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-MATE
'2': I-MATE
'3': B-MANP
'4': I-MANP
'5': B-MACEQ
'6': I-MACEQ
'7': B-APPL
'8': I-APPL
'9': B-FEAT
'10': I-FEAT
'11': B-PRO
'12': I-PRO
'13': B-CHAR
'14': I-CHAR
'15': B-PARA
'16': I-PARA
'17': B-ENAT
'18': I-ENAT
'19': B-CONPRI
'20': I-CONPRI
'21': B-MANS
'22': I-MANS
'23': B-BIOP
'24': I-BIOP
splits:
- name: train
num_bytes: 4394010
num_examples: 9435
- name: validation
num_bytes: 934347
num_examples: 2183
- name: test
num_bytes: 940136
num_examples: 2064
download_size: 3793613
dataset_size: 6268493
- config_name: fabner_simple
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': MATE
'2': MANP
'3': MACEQ
'4': APPL
'5': FEAT
'6': PRO
'7': CHAR
'8': PARA
'9': ENAT
'10': CONPRI
'11': MANS
'12': BIOP
splits:
- name: train
num_bytes: 4394010
num_examples: 9435
- name: validation
num_bytes: 934347
num_examples: 2183
- name: test
num_bytes: 940136
num_examples: 2064
download_size: 3793613
dataset_size: 6268493
- config_name: text2tech
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': Technological System
'2': Method
'3': Material
'4': Technical Field
splits:
- name: train
num_bytes: 4394010
num_examples: 9435
- name: validation
num_bytes: 934347
num_examples: 2183
- name: test
num_bytes: 940136
num_examples: 2064
download_size: 3793613
dataset_size: 6268493
---
# Dataset Card for FabNER
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407](https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407)
- **Paper:** ["FabNER": information extraction from manufacturing process science domain literature using named entity recognition](https://par.nsf.gov/servlets/purl/10290810)
- **Size of downloaded dataset files:** 3.79 MB
- **Size of the generated dataset:** 6.27 MB
### Dataset Summary
FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.
It is a collection of abstracts obtained from Web of Science through known journals available in manufacturing process
science research.
For every word, entity labels were defined in the following categories: Material (MATE), Manufacturing Process (MANP),
Machine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),
Parameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS), and
BioMedical (BIOP). Annotation was performed in all categories, with output tags in 'BIOES' format:
B=Beginning, I=Intermediate, O=Outside, E=End, S=Single.
For details about the dataset, please refer to the paper: ["FabNER": information extraction from manufacturing process science domain literature using named entity recognition](https://par.nsf.gov/servlets/purl/10290810)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 3.79 MB
- **Size of the generated dataset:** 6.27 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["Revealed", "the", "location-specific", "flow", "patterns", "and", "quantified", "the", "speeds", "of", "various", "types", "of", "flow", "."],
"ner_tags": [0, 0, 0, 46, 49, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
#### fabner
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "B-MATE": 1, "I-MATE": 2, "O-MATE": 3, "E-MATE": 4, "S-MATE": 5, "B-MANP": 6, "I-MANP": 7, "O-MANP": 8, "E-MANP": 9, "S-MANP": 10, "B-MACEQ": 11, "I-MACEQ": 12, "O-MACEQ": 13, "E-MACEQ": 14, "S-MACEQ": 15, "B-APPL": 16, "I-APPL": 17, "O-APPL": 18, "E-APPL": 19, "S-APPL": 20, "B-FEAT": 21, "I-FEAT": 22, "O-FEAT": 23, "E-FEAT": 24, "S-FEAT": 25, "B-PRO": 26, "I-PRO": 27, "O-PRO": 28, "E-PRO": 29, "S-PRO": 30, "B-CHAR": 31, "I-CHAR": 32, "O-CHAR": 33, "E-CHAR": 34, "S-CHAR": 35, "B-PARA": 36, "I-PARA": 37, "O-PARA": 38, "E-PARA": 39, "S-PARA": 40, "B-ENAT": 41, "I-ENAT": 42, "O-ENAT": 43, "E-ENAT": 44, "S-ENAT": 45, "B-CONPRI": 46, "I-CONPRI": 47, "O-CONPRI": 48, "E-CONPRI": 49, "S-CONPRI": 50, "B-MANS": 51, "I-MANS": 52, "O-MANS": 53, "E-MANS": 54, "S-MANS": 55, "B-BIOP": 56, "I-BIOP": 57, "O-BIOP": 58, "E-BIOP": 59, "S-BIOP": 60}
```
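The integer tag ids can be decoded back to label strings without downloading anything, since the mapping above is regular (the `O` tag, then five BIOES positional tags per entity type, in the order listed). A minimal sketch, with `decode` as a hypothetical helper name:

```python
# Reconstruct the fabner id->label map from the card: id 0 is "O",
# then each entity type gets five tags in B, I, O, E, S order.
ENTITY_TYPES = ["MATE", "MANP", "MACEQ", "APPL", "FEAT", "PRO",
                "CHAR", "PARA", "ENAT", "CONPRI", "MANS", "BIOP"]
PREFIXES = ["B", "I", "O", "E", "S"]

ID2LABEL = {0: "O"}
idx = 1
for ent in ENTITY_TYPES:
    for p in PREFIXES:
        ID2LABEL[idx] = f"{p}-{ent}"
        idx += 1

def decode(tags):
    """Map a sequence of integer ner_tags to their label strings."""
    return [ID2LABEL[t] for t in tags]

# The train example above tags "flow patterns" as 46, 49:
print(decode([0, 46, 49]))  # → ['O', 'B-CONPRI', 'E-CONPRI']
```

This reproduces the 61-label map shown above; when loading with `datasets`, the same mapping is available from the feature metadata instead.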
#### fabner_bio
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "B-MATE": 1, "I-MATE": 2, "B-MANP": 3, "I-MANP": 4, "B-MACEQ": 5, "I-MACEQ": 6, "B-APPL": 7, "I-APPL": 8, "B-FEAT": 9, "I-FEAT": 10, "B-PRO": 11, "I-PRO": 12, "B-CHAR": 13, "I-CHAR": 14, "B-PARA": 15, "I-PARA": 16, "B-ENAT": 17, "I-ENAT": 18, "B-CONPRI": 19, "I-CONPRI": 20, "B-MANS": 21, "I-MANS": 22, "B-BIOP": 23, "I-BIOP": 24}
```
#### fabner_simple
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "MATE": 1, "MANP": 2, "MACEQ": 3, "APPL": 4, "FEAT": 5, "PRO": 6, "CHAR": 7, "PARA": 8, "ENAT": 9, "CONPRI": 10, "MANS": 11, "BIOP": 12}
```
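Because the id numbering is regular, collapsing full BIOES tags from the `fabner` config into the entity-only ids of `fabner_simple` is a one-line computation. A sketch derived from the two label maps on this card (the function name is illustrative, not part of the dataset):

```python
def bioes_to_simple(tag_id: int) -> int:
    """Collapse a fabner BIOES tag id (0..60) to a fabner_simple id (0..12).

    Id 0 is "O" in both schemes; otherwise the fabner ids come in runs of
    five positional tags per entity type, matching the simple entity order.
    """
    if tag_id == 0:
        return 0
    return (tag_id - 1) // 5 + 1

# 46 (B-CONPRI) and 49 (E-CONPRI) both collapse to 10 (CONPRI).
print([bioes_to_simple(t) for t in (0, 46, 49)])  # → [0, 10, 10]
```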
#### text2tech
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "Technological System": 1, "Method": 2, "Material": 3, "Technical Field": 4}
```
### Data Splits
| | Train | Dev | Test |
|--------|-------|------|------|
| fabner | 9435 | 2183 | 2064 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/jim/KumarS22,
author = {Aman Kumar and
Binil Starly},
title = {"FabNER": information extraction from manufacturing process science
domain literature using named entity recognition},
journal = {J. Intell. Manuf.},
volume = {33},
number = {8},
pages = {2393--2407},
year = {2022},
url = {https://doi.org/10.1007/s10845-021-01807-x},
doi = {10.1007/s10845-021-01807-x},
timestamp = {Sun, 13 Nov 2022 17:52:57 +0100},
biburl = {https://dblp.org/rec/journals/jim/KumarS22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. |
keremberke/plane-detection | 2023-01-27T13:46:18.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] | keremberke | null | @misc{ overhead-plane-detector_dataset,
title = { Overhead Plane Detector Dataset },
type = { Open Source Dataset },
author = { SkyBot Cam },
howpublished = { \\url{ https://universe.roboflow.com/skybot-cam/overhead-plane-detector } },
url = { https://universe.roboflow.com/skybot-cam/overhead-plane-detector },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jan },
note = { visited on 2023-01-27 },
} | null | 2 | 16 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="keremberke/plane-detection" src="https://huggingface.co/datasets/keremberke/plane-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['planes']
```
### Number of Images
```json
{'test': 25, 'valid': 50, 'train': 175}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/plane-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4](https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4?ref=roboflow2huggingface)
### Citation
```
@misc{ overhead-plane-detector_dataset,
title = { Overhead Plane Detector Dataset },
type = { Open Source Dataset },
author = { SkyBot Cam },
howpublished = { \\url{ https://universe.roboflow.com/skybot-cam/overhead-plane-detector } },
url = { https://universe.roboflow.com/skybot-cam/overhead-plane-detector },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jan },
note = { visited on 2023-01-27 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 30, 2022 at 3:11 PM GMT.
It includes 250 images.
Planes are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
|
keremberke/pcb-defect-segmentation | 2023-01-27T13:45:36.000Z | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"region:us"
] | keremberke | null | @misc{ defects-2q87r_dataset,
title = { Defects Dataset },
type = { Open Source Dataset },
author = { Diplom },
howpublished = { \\url{ https://universe.roboflow.com/diplom-qz7q6/defects-2q87r } },
url = { https://universe.roboflow.com/diplom-qz7q6/defects-2q87r },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-27 },
} | null | 5 | 16 | ---
task_categories:
- image-segmentation
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="keremberke/pcb-defect-segmentation" src="https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['dry_joint', 'incorrect_installation', 'pcb_damage', 'short_circuit']
```
### Number of Images
```json
{'valid': 25, 'train': 128, 'test': 36}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/pcb-defect-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8](https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8?ref=roboflow2huggingface)
### Citation
```
@misc{ defects-2q87r_dataset,
title = { Defects Dataset },
type = { Open Source Dataset },
author = { Diplom },
howpublished = { \\url{ https://universe.roboflow.com/diplom-qz7q6/defects-2q87r } },
url = { https://universe.roboflow.com/diplom-qz7q6/defects-2q87r },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-27 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 27, 2023 at 1:45 PM GMT.
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 189 images.
Defects are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
|
galsenai/waxal_dataset | 2023-02-06T12:32:12.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | galsenai | null | null | null | 3 | 16 | ---
license: creativeml-openrail-m
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': A canoon
'1': A cinj
'2': A keen
'3': A lanq
'4': A ñaaƴ
'5': A ñamaak
'6': Alaa
'7': Bacaac
'8': Benn
'9': Bálamuk
'10': Búbaar
'11': Caggal
'12': Ceme
'13': Ci ginnaaw
'14': Ci kanam
'15': Ci kow
'16': Ci suuf
'17': Càmmoñ
'18': Darnde
'19': Dow
'20': Doxal
'21': Déedet
'22': Eey
'23': Esuwa
'24': Eyen
'25': Eé
'26': Fatiya
'27': Fukk
'28': Funoom
'29': Futok
'30': Futok di sibaakiir
'31': Futok di sigaba
'32': Futok di sífeejir
'33': Futok di yákon
'34': Fácul
'35': Garab
'36': Goo
'37': Hani
'38': Jaay
'39': Jeegom
'40': Jeenay
'41': Jeetati
'42': Jeeɗiɗi
'43': Jik
'44': Jiku
'45': Joy
'46': Juni
'47': Junne
'48': Juroom
'49': Juroom-benn
'50': Juroom-ñaar
'51': Juroom-ñeent
'52': Juroom-ñett
'53': Jënd
'54': Kakamben
'55': Kamay
'56': Kanoomen
'57': Kákambul
'58': Kárir
'59': Lal
'60': Lees
'61': Leng
'62': Leɗki
'63': Li
'64': Mbaamir
'65': Mbalndi
'66': Nano
'67': Naxik
'68': Nay
'69': Ndaxar
'70': Ndeyjoor
'71': Ndiga
'72': Ndiiƭ
'73': Njong
'74': O ɓox
'75': Picc
'76': Rawaandu
'77': Sappo
'78': Sibaakiir
'79': Sigaba
'80': Solndu
'81': Soodde
'82': Sífeejir
'83': Tadik
'84': Tati
'85': Taxawal
'86': Teemedere
'87': Teemeed
'88': Tentaam
'89': Tik
'90': Took
'91': Tus
'92': Téemeer
'93': Ub /Tëj
'94': Ub/Tëj
'95': Ubbi /Tijji
'96': Udditde
'97': Uddude
'98': Ujaw
'99': Ujunere
'100': Ujuum
'101': Uñen
'102': Waafulet
'103': Waaw
'104': Weg
'105': Wet
'106': Wúli
'107': Xa-aa
'108': Xaj
'109': Xarɓaxay
'110': Yahdu
'111': Yeeso
'112': Yeeyde
'113': Yákon
'114': Ñaamo
'115': Ñaar
'116': Ñeent
'117': Ñett
'118': Ɗiɗi
'119': Ƥetaa-fo-leng
'120': Ƥetaa-naxak
'121': Ƥetaa-tadak
'122': Ƥetaa-ƭaq
'123': Ƥetik
- name: translation
dtype: string
- name: locale_id
dtype: int64
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 567773923.639
num_examples: 26387
download_size: 546144081
dataset_size: 567773923.639
---
### Dataset Summary
Keyword spotting refers to the task of learning to detect spoken keywords. It underpins the interfaces of all modern voice-based virtual assistants on the market: Amazon’s Alexa, Apple’s Siri, and the Google Home device. Unlike speech recognition models, keyword spotting doesn’t run in the cloud but directly on the device.
The motivation of this dataset is to extend the Speech Commands dataset (Warden 2018) with African languages. In particular, we focus on four Senegalese languages: Wolof, Pulaar, Serer, and Diola.
The choice of these languages is guided, on the one hand, by their status as first-generation languages, that is, the first codified languages (endowed with a writing system and recognized by the state of Senegal as national languages under decree no. 68-871 of July 24, 1968). On the other hand, they are the most widely spoken languages in Senegal.
### Languages
The ID of the languages are the following:
- Wolof: `7`
- Pulaar: `5`
- Serer: `6`
- Diola: `3`
## Dataset Structure
```python
from datasets import load_dataset
dataset = load_dataset("galsenai/waxal_dataset")
DatasetDict({
train: Dataset({
features: ['audio', 'label', 'translation', 'locale_id'],
num_rows: 26387
})
})
```
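Since the full dataset requires a download, here is a minimal self-contained sketch of selecting a single language by `locale_id`, using the ID table above; the records below are hypothetical stand-ins for real rows, and with `datasets` you would call `dataset["train"].filter(...)` instead:

```python
# Language IDs from the card.
LOCALES = {"Diola": 3, "Pulaar": 5, "Serer": 6, "Wolof": 7}

# Hypothetical stand-in records; real rows also carry an audio field.
records = [
    {"transcript": "Benn", "locale_id": 7},
    {"transcript": "Goo", "locale_id": 5},
    {"transcript": "Eey", "locale_id": 7},
]

# Keep only the Wolof examples (locale_id == 7).
wolof = [r for r in records if r["locale_id"] == LOCALES["Wolof"]]
print([r["transcript"] for r in wolof])  # → ['Benn', 'Eey']
```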
### Data Fields
- `audio`: Audio file in MP3 format
- `label`: label of the audio file
- `translation` : Translation of the keyword in french
- `locale_id`: ID of the language |
bigcode/the-stack-smol-xs | 2023-02-13T09:05:23.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"region:us"
] | bigcode | \ | \ | null | 2 | 16 | ---
annotations_creators: []
language_creators:
- crowdsourced
language: ["code"]
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
## Dataset Description
A small subset of the [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset covering 87 programming languages, with 100 random samples per language drawn from the original dataset for visualization.
## Languages
The dataset contains 87 programming languages:
```
'ada', 'agda', 'alloy', 'antlr', 'applescript', 'assembly', 'augeas', 'awk', 'batchfile', 'bison', 'bluespec', 'c',
'c++', 'c-sharp', 'clojure', 'cmake', 'coffeescript', 'common-lisp', 'css', 'cuda', 'dart', 'dockerfile', 'elixir',
'elm', 'emacs-lisp', 'erlang', 'f-sharp', 'fortran', 'glsl', 'go', 'groovy', 'haskell', 'html', 'idris', 'isabelle', 'java',
'java-server-pages', 'javascript', 'julia', 'kotlin', 'lean', 'literate-agda', 'literate-coffeescript', 'literate-haskell',
'lua', 'makefile', 'maple', 'markdown', 'mathematica', 'matlab', 'ocaml', 'pascal', 'perl', 'php', 'powershell', 'prolog',
'protocol-buffer', 'python', 'r', 'racket', 'restructuredtext', 'rmarkdown', 'ruby', 'rust', 'sas', 'scala', 'scheme',
'shell', 'smalltalk', 'solidity', 'sparql', 'sql', 'stan', 'standard-ml', 'stata', 'systemverilog', 'tcl', 'tcsh', 'tex',
'thrift', 'typescript', 'verilog', 'vhdl', 'visual-basic', 'xslt', 'yacc', 'zig'
```
## Dataset Structure
You can specify which language you want to load; Python is loaded by default:
```python
# to load go:
from datasets import load_dataset
load_dataset("bigcode/the-stack-smol-xs", "go")
DatasetDict({
train: Dataset({
features: ['content', 'lang', 'size', 'ext', 'max_stars_count', 'avg_line_length', 'max_line_length', 'alphanum_fraction'],
num_rows: 100
})
})
```
|
jmparejaz/dstc9_GODEL | 2023-02-17T19:57:49.000Z | [
"region:us"
] | jmparejaz | null | null | null | 0 | 16 | Entry not found |
lansinuote/diffusion.1.unconditional | 2023-02-23T10:50:05.000Z | [
"region:us"
] | lansinuote | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 346842007.375
num_examples: 8189
download_size: 0
dataset_size: 346842007.375
---
# Dataset Card for "diffusion.1.unconditional"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CarperAI/pilev2-dev | 2023-03-13T09:19:03.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:n>1T",
"source_datasets:extended|the_pile",
"language:en",
"language:code... | CarperAI | null | null | null | 14 | 16 | ---
annotations_creators:
- no-annotation
language:
- en
- code
language_creators:
- crowdsourced
- machine-generated
license: []
multilinguality:
- multilingual
pretty_name: Pile V2
size_categories:
- n>1T
source_datasets:
- extended|the_pile
tags:
- code
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for Pile V2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The PileV2 is a larger and more diverse collection of text data, mostly focused on English text. Specifically, it is a collection of roughly 40 different data subsets. This includes the 22 subsets from the original Pile, plus a heavy focus on additional software-engineering-specific data subsets, including the newly released "The Stack" from BigCode, various programming competition sources, and a number of programmer-oriented discussion groups such as Discourse, programming subreddits, and Stack Exchange. We have named this portion of the PileV2 the CodePile, with the aim of improving language models for the domain of software engineering beyond simply coding.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The PileV2:
* ArXiv - https://arxiv.org/
* PubMed Central - https://www.ncbi.nlm.nih.gov/pmc/
* Books3 - https://the-eye.eu/public/AI/pile_preliminary_components/books3.tar.gz
* Project Gutenberg - https://www.gutenberg.org/
* Free Law Project - https://free.law/
* Wikipedia (en) - https://dumps.wikimedia.org/enwiki/
* EuroParl - https://www.statmt.org/europarl/
* (Hendryks) SEC - https://www.sec.gov/
* (Hendryks) AMPS - https://www.amps.org/
* USPTO - https://www.uspto.gov/
* Hacker News - https://news.ycombinator.com/
* OpenWebText2 - https://skylion007.github.io/OpenWebTextCorpus/
* Pile-CC -
* Pile of Law - https://www.pileoflaw.com/
* Case.Law - https://case.law/
* Multi Session -
* Reddit - https://files.pushshift.io/reddit/
The CodePile:
* The Stack - https://www.bigcode-project.org/docs/about/the-stack/
* Ubuntu IRC - https://irclogs.ubuntu.com/
* Stack Exchange - https://archive.org/details/stackexchange
* DM Mathematics - https://www.kaggle.com/c/learn-ai-bowl-2020/data
* Apache Software Foundation Public Mail Archives - https://mail-archives.apache.org/
* Arduino Forum - https://forum.arduino.cc/
* GitLab - https://gitlab.com/
* Bitbucket diffs - https://bitbucket.org/
* Bitbucket code - https://bitbucket.org/
* Programming Competition Data - https://www.kaggle.com/c/learn-ai-bowl-2020/data
* Discourse - https://meta.discourse.org/t/discourse-data-explorer/112497
* Reddit Programming Subthreads - https://files.pushshift.io/reddit/
* Programming Books - https://www.kaggle.com/gyani95/380000-lyrics-from-metrolyrics
* UseNet - https://archive.org/details/usenet
* Mailing Lists - https://www.kaggle.com/wcukierski/enron-email-dataset
* Gitter Discussions - https://gitter.im/
* Zulip - https://zulipchat.com/
* AI4Code Notebooks -
* LinusTechTips forums - https://linustechtips.com/
* GitHub diffs -
* GitHub Issues -
* Leetcode - https://leetcode.com/
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
CaoHaiNam/summarization_wikilingua_en | 2023-03-01T08:55:32.000Z | [
"region:us"
] | CaoHaiNam | null | null | null | 1 | 16 | ---
dataset_info:
features:
- name: summary
dtype: string
- name: document
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 326708878
num_examples: 141457
download_size: 184148284
dataset_size: 326708878
---
# Dataset Card for "summarization_wikilingua_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
suolyer/pile_freelaw | 2023-03-27T03:04:54.000Z | [
"license:apache-2.0",
"region:us"
] | suolyer | null | null | null | 0 | 16 | ---
license: apache-2.0
---
|
iamwille/igbo-translation | 2023-04-01T14:40:26.000Z | [
"task_categories:translation",
"annotations_creators:found",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:extended|igbo_english_machine_translation",
"language:en",
"l... | iamwille | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: English
dtype: string
- name: Igbo
dtype: string
splits:
- name: train
num_bytes: 1983074.9958306309
num_examples: 8094
- name: test
num_bytes: 661270.004169369
num_examples: 2699
download_size: 1705526
dataset_size: 2644345.0
annotations_creators:
- found
- crowdsourced
language:
- en
- ig
language_creators:
- crowdsourced
- found
license:
- apache-2.0
multilinguality:
- translation
pretty_name: 'Igbo to English language '
size_categories:
- 10K<n<100K
source_datasets:
- extended|igbo_english_machine_translation
tags: []
task_categories:
- translation
task_ids: []
---
# Dataset Card for "igbo-translation"
## Dataset Summary
This dataset contains English-to-Igbo translated data for use in training general-purpose translation models.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/haberman | 2023-04-07T13:50:58.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"haberman",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"region:us"
] | mstz | null | @misc{misc_haberman's_survival_43,
author = {Haberman,S.},
title = {{Haberman's Survival}},
year = {1999},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5XK51}}
} | null | 0 | 16 | ---
language:
- en
tags:
- haberman
- tabular_classification
- binary_classification
- multiclass_classification
pretty_name: Haberman
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- survival
license: cc
---
# Haberman
The [Haberman dataset](https://archive.ics.uci.edu/ml/datasets/Haberman) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Has the patient survived surgery?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------|
| survival | Binary classification | Has the patient survived surgery? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/haberman", "survival")["train"]
``` |
chrisxx/laion2b-en-10K-subset | 2023-04-25T10:47:57.000Z | [
"size_categories:1K<n<10K",
"laion",
"image-text",
"region:us"
] | chrisxx | null | null | null | 1 | 16 | ---
tags:
- laion
- image-text
size_categories:
- 1K<n<10K
--- |
ehartford/oa_leet10k | 2023-04-15T20:08:10.000Z | [
"license:apache-2.0",
"region:us"
] | ehartford | null | null | null | 14 | 16 | ---
license: apache-2.0
---
|
Ransaka/Sinhala-400M | 2023-04-18T05:53:24.000Z | [
"region:us"
] | Ransaka | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: text
sequence: string
splits:
- name: train
num_bytes: 2802808058.089643
num_examples: 8854185
- name: test
num_bytes: 1201203543.9103568
num_examples: 3794651
download_size: 1826451430
dataset_size: 4004011602.0
---
# Dataset Card for "Sinhala-400M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/nursery | 2023-04-16T17:57:18.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"nursery",
"tabular_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_nursery_76,
author = {Rajkovic,Vladislav},
title = {{Nursery}},
year = {1997},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5P88W}}
} | null | 0 | 16 | ---
language:
- en
tags:
- nursery
- tabular_classification
- UCI
pretty_name: Nursery
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- nursery
- nursery_binary
license: cc
---
# Nursery
The [Nursery dataset](https://archive-beta.ics.uci.edu/dataset/76/nursery) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Should the nursery school accept the student application?
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| nursery | Multiclass classification |
| nursery_binary | Binary classification | |
mstz/landsat | 2023-04-16T17:33:23.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"landsat",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_statlog_(landsat_satellite)_146,
author = {Srinivasan,Ashwin},
title = {{Statlog (Landsat Satellite)}},
year = {1993},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C55887}}
} | null | 0 | 16 | ---
language:
- en
tags:
- landsat
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Landsat
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- landsat
- landsat_binary
license: cc
---
# Landsat
The [Landsat dataset](https://archive-beta.ics.uci.edu/dataset/146/statlog+landsat+satellite) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| landsat | Multiclass classification.| |
| landsat_0 | Binary classification. | Is the image of class 0? |
| landsat_1 | Binary classification. | Is the image of class 1? |
| landsat_2 | Binary classification. | Is the image of class 2? |
| landsat_3 | Binary classification. | Is the image of class 3? |
| landsat_4 | Binary classification. | Is the image of class 4? |
| landsat_5 | Binary classification. | Is the image of class 5? | |
sammyboi1801/lfw-face-transformer-dataset | 2023-04-15T14:13:56.000Z | [
"region:us"
] | sammyboi1801 | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Abdullah_Gul
'1': Adrien_Brody
'2': Alejandro_Toledo
'3': Alvaro_Uribe
'4': Amelie_Mauresmo
'5': Andre_Agassi
'6': Andy_Roddick
'7': Angelina_Jolie
'8': Ann_Veneman
'9': Anna_Kournikova
'10': Ari_Fleischer
'11': Ariel_Sharon
'12': Arnold_Schwarzenegger
'13': Atal_Bihari_Vajpayee
'14': Bill_Clinton
'15': Bill_Gates
'16': Bill_Simon
'17': Britney_Spears
'18': Carlos_Menem
'19': Carlos_Moya
'20': Catherine_Zeta-Jones
'21': Charles_Moose
'22': Colin_Powell
'23': Condoleezza_Rice
'24': David_Beckham
'25': David_Nalbandian
'26': Dick_Cheney
'27': Dominique_de_Villepin
'28': Donald_Rumsfeld
'29': Edmund_Stoiber
'30': Eduardo_Duhalde
'31': Fidel_Castro
'32': George_HW_Bush
'33': George_Robertson
'34': George_W_Bush
'35': Gerhard_Schroeder
'36': Gloria_Macapagal_Arroyo
'37': Gonzalo_Sanchez_de_Lozada
'38': Gordon_Brown
'39': Gray_Davis
'40': Guillermo_Coria
'41': Halle_Berry
'42': Hamid_Karzai
'43': Hans_Blix
'44': Harrison_Ford
'45': Hillary_Clinton
'46': Howard_Dean
'47': Hu_Jintao
'48': Hugo_Chavez
'49': Igor_Ivanov
'50': Jack_Straw
'51': Jackie_Chan
'52': Jacques_Chirac
'53': James_Blake
'54': James_Kelly
'55': Jean_Charest
'56': Jean_Chretien
'57': Jeb_Bush
'58': Jennifer_Aniston
'59': Jennifer_Capriati
'60': Jennifer_Garner
'61': Jennifer_Lopez
'62': Jeremy_Greenstock
'63': Jiang_Zemin
'64': Jiri_Novak
'65': Joe_Lieberman
'66': John_Allen_Muhammad
'67': John_Ashcroft
'68': John_Bolton
'69': John_Howard
'70': John_Kerry
'71': John_Negroponte
'72': John_Paul_II
'73': John_Snow
'74': Joschka_Fischer
'75': Jose_Maria_Aznar
'76': Juan_Carlos_Ferrero
'77': Julianne_Moore
'78': Julie_Gerberding
'79': Junichiro_Koizumi
'80': Keanu_Reeves
'81': Kim_Clijsters
'82': Kim_Ryong-sung
'83': Kofi_Annan
'84': Lance_Armstrong
'85': Laura_Bush
'86': Lindsay_Davenport
'87': Lleyton_Hewitt
'88': Lucio_Gutierrez
'89': Luiz_Inacio_Lula_da_Silva
'90': Mahathir_Mohamad
'91': Mahmoud_Abbas
'92': Mark_Philippoussis
'93': Megawati_Sukarnoputri
'94': Meryl_Streep
'95': Michael_Bloomberg
'96': Michael_Jackson
'97': Michael_Schumacher
'98': Mike_Weir
'99': Mohammed_Al-Douri
'100': Nancy_Pelosi
'101': Naomi_Watts
'102': Nestor_Kirchner
'103': Nicanor_Duarte_Frutos
'104': Nicole_Kidman
'105': Norah_Jones
'106': Paul_Bremer
'107': Paul_Burrell
'108': Pervez_Musharraf
'109': Pete_Sampras
'110': Pierce_Brosnan
'111': Queen_Elizabeth_II
'112': Recep_Tayyip_Erdogan
'113': Renee_Zellweger
'114': Ricardo_Lagos
'115': Richard_Gephardt
'116': Richard_Myers
'117': Roger_Federer
'118': Roh_Moo-hyun
'119': Rubens_Barrichello
'120': Rudolph_Giuliani
'121': Saddam_Hussein
'122': Salma_Hayek
'123': Serena_Williams
'124': Sergey_Lavrov
'125': Sergio_Vieira_De_Mello
'126': Silvio_Berlusconi
'127': Spencer_Abraham
'128': Taha_Yassin_Ramadan
'129': Tang_Jiaxuan
'130': Tiger_Woods
'131': Tim_Henman
'132': Tom_Daschle
'133': Tom_Ridge
'134': Tommy_Franks
'135': Tony_Blair
'136': Trent_Lott
'137': Venus_Williams
'138': Vicente_Fox
'139': Vladimir_Putin
'140': Wen_Jiabao
'141': Winona_Ryder
'142': Yoriko_Kawaguchi
splits:
- name: train
num_bytes: 33550885.462
num_examples: 3846
- name: test
num_bytes: 2362162.0
num_examples: 271
download_size: 35786453
dataset_size: 35913047.462
---
# Dataset Card for "lfw-face-transformer-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ktgiahieu/maccrobat2018_2020 | 2023-05-21T10:39:53.000Z | [
"license:cc-by-4.0",
"region:us"
] | ktgiahieu | null | null | null | 1 | 16 | ---
license: cc-by-4.0
---
Modified dataset from:
Caufield, J. Harry (2019): MACCROBAT. figshare. Dataset. https://doi.org/10.6084/m9.figshare.9764942.v2
Example training notebook: https://colab.research.google.com/drive/1OzCY782KJSF0FBDS0d1CoMhfp3-RtJMV?usp=sharing
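The 82-label scheme listed below can also be reconstructed programmatically, e.g. to build the `id2label`/`label2id` mappings that `transformers` token-classification models expect. A minimal sketch (note one quirk of the listing below: every entity type has a `B-` tag, but `Sex` has no `I-` tag):

```python
# Rebuild the 82 labels listed below. Assumption: B-* tags come first
# (ids 0-40), then I-* tags (ids 41-80, with no I-Sex), then "O" as id 81,
# exactly matching the card's listing.
ENTITY_TYPES = [
    "Activity", "Administration", "Age", "Area", "Biological_attribute",
    "Biological_structure", "Clinical_event", "Color", "Coreference", "Date",
    "Detailed_description", "Diagnostic_procedure", "Disease_disorder",
    "Distance", "Dosage", "Duration", "Family_history", "Frequency", "Height",
    "History", "Lab_value", "Mass", "Medication", "Nonbiological_location",
    "Occupation", "Other_entity", "Other_event", "Outcome",
    "Personal_background", "Qualitative_concept", "Quantitative_concept",
    "Severity", "Sex", "Shape", "Sign_symptom", "Subject", "Texture",
    "Therapeutic_procedure", "Time", "Volume", "Weight",
]

LABELS = (
    [f"B-{t}" for t in ENTITY_TYPES]
    + [f"I-{t}" for t in ENTITY_TYPES if t != "Sex"]  # the card lists no I-Sex
    + ["O"]
)
ID2LABEL = dict(enumerate(LABELS))
LABEL2ID = {label: i for i, label in ID2LABEL.items()}
```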
Labels:
```
0: B-Activity
1: B-Administration
2: B-Age
3: B-Area
4: B-Biological_attribute
5: B-Biological_structure
6: B-Clinical_event
7: B-Color
8: B-Coreference
9: B-Date
10: B-Detailed_description
11: B-Diagnostic_procedure
12: B-Disease_disorder
13: B-Distance
14: B-Dosage
15: B-Duration
16: B-Family_history
17: B-Frequency
18: B-Height
19: B-History
20: B-Lab_value
21: B-Mass
22: B-Medication
23: B-Nonbiological_location
24: B-Occupation
25: B-Other_entity
26: B-Other_event
27: B-Outcome
28: B-Personal_background
29: B-Qualitative_concept
30: B-Quantitative_concept
31: B-Severity
32: B-Sex
33: B-Shape
34: B-Sign_symptom
35: B-Subject
36: B-Texture
37: B-Therapeutic_procedure
38: B-Time
39: B-Volume
40: B-Weight
41: I-Activity
42: I-Administration
43: I-Age
44: I-Area
45: I-Biological_attribute
46: I-Biological_structure
47: I-Clinical_event
48: I-Color
49: I-Coreference
50: I-Date
51: I-Detailed_description
52: I-Diagnostic_procedure
53: I-Disease_disorder
54: I-Distance
55: I-Dosage
56: I-Duration
57: I-Family_history
58: I-Frequency
59: I-Height
60: I-History
61: I-Lab_value
62: I-Mass
63: I-Medication
64: I-Nonbiological_location
65: I-Occupation
66: I-Other_entity
67: I-Other_event
68: I-Outcome
69: I-Personal_background
70: I-Qualitative_concept
71: I-Quantitative_concept
72: I-Severity
73: I-Shape
74: I-Sign_symptom
75: I-Subject
76: I-Texture
77: I-Therapeutic_procedure
78: I-Time
79: I-Volume
80: I-Weight
81: O
``` |
h2oai/openassistant_oasst1 | 2023-04-19T04:43:13.000Z | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | h2oai | null | null | null | 6 | 16 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `openassistant_oasst1` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `46283`
- Number of columns: `3`
- Column names: `['input', 'prompt_type', 'source']`
## Source
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/45e6183171fb16691ad7d3ab006fad973f971e98/create_data.py#L1253)
|
checkai/instruction-poems | 2023-04-19T03:02:09.000Z | [
"license:cc-by-4.0",
"region:us"
] | checkai | null | null | null | 5 | 16 | ---
license: cc-by-4.0
---
Poem dataset to be used for instruction fine-tuning.
ruanchaves/faquad-nli_por_Latn_to_eng_Latn | 2023-04-22T19:13:21.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: document_index
dtype: int32
- name: document_title
dtype: string
- name: paragraph_index
dtype: int32
- name: question
dtype: string
- name: answer
dtype: string
- name: label
dtype: int32
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 826409
num_examples: 3128
- name: validation
num_bytes: 183166
num_examples: 731
- name: test
num_bytes: 191949
num_examples: 650
download_size: 0
dataset_size: 1201524
---
# Dataset Card for "faquad-nli_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vision-CAIR/cc_sbu_align | 2023-04-19T22:21:39.000Z | [
"region:us"
] | Vision-CAIR | null | null | null | 28 | 16 | # MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
[Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), [Xiang Li](https://xiangli.ac.cn), and [Mohamed Elhoseiny](https://www.mohamed-elhoseiny.com/). *Equal Contribution
**King Abdullah University of Science and Technology**
## Online Demo
Click the image to chat with MiniGPT-4 around your images
[](https://minigpt-4.github.io)
## Examples
| | |
:-------------------------:|:-------------------------:
 | 
 | 
More examples can be found in the [project page](https://minigpt-4.github.io).
## Introduction
- MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
- We train MiniGPT-4 in two stages. The first, traditional pretraining stage uses roughly 5 million aligned image-text pairs and takes about 10 hours on 4 A100s. After this stage, Vicuna is able to understand the image, but its generation ability is heavily impacted.
- To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs by the model itself and ChatGPT together. Based on this, we then create a small (3500 pairs in total) yet high-quality dataset.
- The second finetuning stage is trained on this dataset in a conversation template to significantly improve its generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes with a single A100.
- MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.

## Getting Started
### Installation
**1. Prepare the code and the environment**
Clone our repository, then create a Python environment and activate it via the following commands:
```bash
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```
**2. Prepare the pretrained Vicuna weights**
The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to our instruction [here](PrepareVicuna.md)
to prepare the Vicuna weights.
The final weights would be in a single folder with the following structure:
```
vicuna_weights
├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin
...
```
Then, set the path to the vicuna weight in the model config file
[here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
**3. Prepare the pretrained MiniGPT-4 checkpoint**
To play with our pretrained model, download the pretrained checkpoint
[here](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link).
Then, set the path to the pretrained checkpoint in the evaluation config file
in [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 11.
### Launching Demo Locally
Try out our demo [demo.py](demo.py) on your local machine by running
```
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
```
Here, we load Vicuna in 8-bit by default to save GPU memory, and the default beam search width is 1.
Under this setting, the demo uses about 23 GB of GPU memory.
If you have a more powerful GPU with more memory, you can run the model
in 16-bit by setting `low_resource` to False in the config file
[minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml) and use a larger beam search width.
### Training
The training of MiniGPT-4 contains two alignment stages.
**1. First pretraining stage**
In the first pretraining stage, the model is trained on image-text pairs from the LAION and CC datasets
to align the vision and language model. To download and prepare the datasets, please check
our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
After the first stage, the visual features are mapped and can be understood by the language
model.
To launch the first stage training, run the following command. In our experiments, we use 4 A100s.
You can change the save path in the config file
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml)
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```
A MiniGPT-4 checkpoint with only stage one training can be downloaded
[here](https://drive.google.com/file/d/1u9FRRBB3VovP1HxCAlpD9Lw4t4P6-Yq8/view?usp=share_link).
Compared to the model after stage two, this checkpoint frequently generates incomplete and repeated sentences.
**2. Second finetuning stage**
In the second stage, we use a small, high-quality image-text pair dataset that we created ourselves
and convert it to a conversation format to further align MiniGPT-4.
To download and prepare our second stage dataset, please check our
[second stage dataset preparation instruction](dataset/README_2_STAGE.md).
To launch the second stage alignment,
first specify the path to the checkpoint file trained in stage 1 in
[train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
You can also specify the output path there.
Then, run the following command. In our experiments, we use 1 A100.
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```
After the second stage alignment, MiniGPT-4 is able to talk about images coherently and in a user-friendly way.
## Acknowledgement
+ [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check this great open-source work if you don't know it before!
+ [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
+ [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna with only 13B parameters is just amazing. And it is open-source!
If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
```bibtex
@misc{zhu2022minigpt4,
title={MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models},
      author={Deyao Zhu and Jun Chen and Xiaoqian Shen and Xiang Li and Mohamed Elhoseiny},
year={2023},
}
```
## License
This repository is under [BSD 3-Clause License](LICENSE.md).
Many codes are based on [Lavis](https://github.com/salesforce/LAVIS) with
BSD 3-Clause License [here](LICENSE_Lavis.md).
|
ruanchaves/rerelem_por_Latn_to_eng_Latn | 2023-04-22T19:12:25.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: docid
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
- name: same_text
dtype: bool
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 1112298
num_examples: 2226
- name: validation
num_bytes: 370560
num_examples: 701
- name: test
num_bytes: 398794
num_examples: 805
download_size: 0
dataset_size: 1881652
---
# Dataset Card for "rerelem_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ruanchaves/porsimplessent_por_Latn_to_eng_Latn | 2023-04-22T19:12:35.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int32
- name: production_id
dtype: int32
- name: level
dtype: string
- name: changed
dtype: string
- name: split
dtype: string
- name: sentence_text_from
dtype: string
- name: sentence_text_to
dtype: string
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 2268564
num_examples: 4976
- name: validation
num_bytes: 645118
num_examples: 1446
- name: test
num_bytes: 765737
num_examples: 1697
download_size: 0
dataset_size: 3679419
---
# Dataset Card for "porsimplessent_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ruanchaves/hatebr_por_Latn_to_eng_Latn | 2023-04-22T19:12:04.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: instagram_comments
dtype: string
- name: offensive_language
dtype: bool
- name: offensiveness_levels
dtype: int32
- name: antisemitism
dtype: bool
- name: apology_for_the_dictatorship
dtype: bool
- name: fatphobia
dtype: bool
- name: homophobia
dtype: bool
- name: partyism
dtype: bool
- name: racism
dtype: bool
- name: religious_intolerance
dtype: bool
- name: sexism
dtype: bool
- name: xenophobia
dtype: bool
- name: offensive_&_non-hate_speech
dtype: bool
- name: non-offensive
dtype: bool
- name: specialist_1_hate_speech
dtype: bool
- name: specialist_2_hate_speech
dtype: bool
- name: specialist_3_hate_speech
dtype: bool
splits:
- name: train
num_bytes: 391589
num_examples: 4480
- name: validation
num_bytes: 86759
num_examples: 1120
- name: test
num_bytes: 111044
num_examples: 1400
download_size: 0
dataset_size: 589392
---
# Dataset Card for "hatebr_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ruanchaves/reli-sa_por_Latn_to_eng_Latn | 2023-04-22T19:12:55.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: source
dtype: string
- name: title
dtype: string
- name: book
dtype: string
- name: review_id
dtype: string
- name: score
dtype: float64
- name: sentence_id
dtype: int64
- name: unique_review_id
dtype: string
- name: sentence
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1780301
num_examples: 7875
- name: validation
num_bytes: 315249
num_examples: 1348
- name: test
num_bytes: 658726
num_examples: 3288
download_size: 0
dataset_size: 2754276
---
# Dataset Card for "reli-sa_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
masakhane/afriqa | 2023-07-07T16:57:28.000Z | [
"task_categories:question-answering",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:bem",
"language:fon",
"language:ha",
"language:ig",
"language:kin",
"language:sw",
"language:wo",
"language:yo",
"language:zu",
"language:tw",
"license:cc-by-sa-4.0",
"cross-ling... | masakhane | AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages
AfriQA is the first cross-lingual question answering (QA) dataset with a focus on African languages.
The dataset includes over 12,000 XOR QA examples across 10 African languages, making it an invaluable resource for developing more equitable QA technology. | \ | null | 5 | 16 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- bem
- fon
- ha
- ig
- kin
- sw
- wo
- yo
- zu
- tw
pretty_name: AfriQA
size_categories:
- 10K<n<100K
multilinguality:
- multilingual
tags:
- cross-lingual
- question-answering
- qa
---
# Dataset Card for AfriQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/masakhane-io/afriqa)
- **Repository:** [github](https://github.com/masakhane-io/afriqa)
- **Paper:** [paper]()
- **Point of Contact:** [Masakhane](https://www.masakhane.io/) or oogundep@uwaterloo.ca
### Dataset Summary
AfriQA is the first cross-lingual question answering (QA) dataset with a focus on African languages. The dataset includes over 12,000 XOR QA examples across 10 African languages, making it an invaluable resource for developing more equitable QA technology.
Train/validation/test splits are available for all 10 languages.
### Supported Tasks and Leaderboards
- `question-answering`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better) and [Exact Match Accuracy](https://huggingface.co/spaces/evaluate-metric/exact_match).
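As a rough illustration (not the official scoring script; this sketch assumes lowercasing and whitespace tokenization only, whereas the official evaluation may apply additional normalization), the two metrics can be computed as:

```python
# Minimal sketch of Exact Match and token-level F1 for QA answers.
# Assumption: normalization is just lowercasing + whitespace tokenization.
def _tokens(text):
    return text.lower().split()

def exact_match(prediction, reference):
    return float(_tokens(prediction) == _tokens(reference))

def token_f1(prediction, reference):
    pred, ref = _tokens(prediction), _tokens(reference)
    # Count tokens shared between prediction and reference (with multiplicity).
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)
```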
### Languages
There are 10 languages available:
- Bemba (bem)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Swahili (swa)
- Twi (twi)
- Wolof (wol)
- Yorùbá (yor)
- Zulu (zul)
## Dataset Structure
### Data Instances
- Data Format:
- id : Question ID
- question : Question in African Language
- translated_question : Question translated into a pivot language (English/French)
- answers : Answer in African Language
- lang : Datapoint Language (African Language) e.g `bem`
- split : Dataset Split
- translated_answer : Answer in Pivot Language
- translation_type : Translation type of question and answers
```json
{ "id": 0,
"question": "Bushe icaalo ca Egypt caali tekwapo ne caalo cimbi?",
"translated_question": "Has the country of Egypt been colonized before?",
"answers": "['Emukwai']",
"lang": "bem",
"split": "dev",
"translated_answer": "['yes']",
"translation_type": "human_translation"
}
```
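Note that the `answers` and `translated_answer` fields in the example above are string-encoded Python lists rather than real lists. Assuming every row follows that format (an assumption, not something the card states), they can be parsed with the standard library's `ast.literal_eval`:

```python
import ast

# Hypothetical row mirroring the example above; the list-valued fields
# arrive as string-encoded Python lists.
row = {"answers": "['Emukwai']", "translated_answer": "['yes']"}

answers = ast.literal_eval(row["answers"])
translated_answer = ast.literal_eval(row["translated_answer"])
```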
### Data Splits
For all languages, there are three splits.
The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes :
| Language | train | dev | test |
|-----------------|------:|-----------:|-----:|
| Bemba | 502 | 503 | 314 |
| Fon | 427 | 428 | 386 |
| Hausa | 435 | 436 | 300 |
| Igbo | 417 | 418 | 409 |
| Kinyarwanda | 407 | 409 | 347 |
| Swahili | 415 | 417 | 302 |
| Twi | 451 | 452 | 490 |
| Wolof | 503 | 504 | 334 |
| Yoruba | 360 | 361 | 332 |
| Zulu | 387 | 388 | 325 |
| <b>Total</b> | <b>4333</b> | <b>4346</b> |<b>3560</b> |
## Dataset Creation
### Curation Rationale
The dataset was created to provide question-answering resources for 10 African languages that are under-served in natural language processing.
### Source Data
...
#### Initial Data Collection and Normalization
...
#### Who are the source language producers?
...
### Annotations
#### Annotation process
Details can be found here ...
#### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/)
### Personal and Sensitive Information
...
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
### Licensing Information
The licensing status of the data is CC 4.0 Non-Commercial
### Citation Information
```
@misc{ogundepo2023afriqa,
title={AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages},
author={Odunayo Ogundepo and Tajuddeen R. Gwadabe and Clara E. Rivera and Jonathan H. Clark and Sebastian Ruder and David Ifeoluwa Adelani and Bonaventure F. P. Dossou and Abdou Aziz DIOP and Claytone Sikasote and Gilles Hacheme and Happy Buzaaba and Ignatius Ezeani and Rooweither Mabuya and Salomey Osei and Chris Emezue and Albert Njoroge Kahira and Shamsuddeen H. Muhammad and Akintunde Oladipo and Abraham Toluwase Owodunni and Atnafu Lambebo Tonja and Iyanuoluwa Shode and Akari Asai and Tunde Oluwaseyi Ajayi and Clemencia Siro and Steven Arthur and Mofetoluwa Adeyemi and Orevaoghene Ahia and Aremu Anuoluwapo and Oyinkansola Awosan and Chiamaka Chukwuneke and Bernard Opoku and Awokoya Ayodele and Verrah Otiende and Christine Mwase and Boyd Sinkala and Andre Niyongabo Rubungo and Daniel A. Ajisafe and Emeka Felix Onwuegbuzia and Habib Mbow and Emile Niyomutabazi and Eunice Mukonde and Falalu Ibrahim Lawan and Ibrahim Said Ahmad and Jesujoba O. Alabi and Martin Namukombo and Mbonu Chinedu and Mofya Phiri and Neo Putini and Ndumiso Mngoma and Priscilla A. Amuok and Ruqayya Nasir Iro and Sonia Adhiambo},
year={2023},
eprint={2305.06897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ToluClassics](https://github.com/ToluClassics) for adding this dataset. |
Oniichat/bluemoon_roleplay_chat_data_300k_messages | 2023-04-29T16:06:27.000Z | [
"region:us"
] | Oniichat | null | null | null | 34 | 16 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: thread_title
dtype: string
- name: thread_href
dtype: string
- name: message_timestamp
dtype: string
- name: message_username
dtype: string
- name: message
dtype: string
splits:
- name: train
num_bytes: 266368156
num_examples: 261071
download_size: 138428913
dataset_size: 266368156
---
# Dataset Card for "bluemoon_roleplay_chat_data_300k_messages"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TempoFunk/small | 2023-05-10T03:37:12.000Z | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:agpl-3.0",
"region:us"
] | TempoFunk | null | null | null | 6 | 16 | ---
task_categories:
- text-to-video
- text-to-image
- video-classification
- image-classification
language:
- en
size_categories:
- 1K<n<10K
license: agpl-3.0
---
# TempoFunk Small
7.8k samples of metadata and encoded latents & prompts of random videos.
## Data format
- Video frame latents
- Numpy arrays
- 120 frames, 512x512 source size
- Encoded shape (120, 4, 64, 64)
- CLIP (openai) encoded prompts
- Video description (as seen in metadata)
- Encoded shape (77,768)
- Video metadata as JSON (description, tags, categories, source URL, etc.) |
kunishou/oasst1-89k-ja | 2023-06-04T13:27:25.000Z | [
"license:apache-2.0",
"region:us"
] | kunishou | null | null | null | 11 | 16 | ---
license: apache-2.0
---
This dataset was created by automatically translating "OpenAssistant/oasst1" into Japanese.
The "ng_translation" flag indicates whether the translation failed: a value of "1" means it did, and for those rows "text" and "text_en" contain the same text.
Using the code below, you can convert the data into the Instruction and Output format (the prompter's instruction and the assistant's response).
If you use this dataset for fine-tuning, please convert it with this code.
Conversion code reference:
https://github.com/h2oai/h2o-llmstudio/blob/5ebfd3879e226b4e1afd0a0b45eb632e60412129/app_utils/utils.py#L1888
```shell
pip install datasets
```
```python
from datasets import load_dataset
import pandas as pd
import os
import json
# Load the original oasst1 data
ds = load_dataset("OpenAssistant/oasst1")
train = ds["train"].to_pandas()
val = ds["validation"].to_pandas()
df_origin = pd.concat([train, val], axis=0).reset_index(drop=True)
# Load the Japanese-translated oasst1 data
df_ja = pd.read_json("oasst1_ja_89k.json")
# Merge the original oasst1 data with the Japanese translation
df = pd.merge(df_origin, df_ja[["message_id", "text_ja"]], on="message_id", how="left").copy()
df["text"] = df["text_ja"]
df_assistant = df[(df.role == "assistant")].copy()
df_prompter = df[(df.role == "prompter")].copy()
df_prompter = df_prompter.set_index("message_id")
df_assistant["output"] = df_assistant["text"].values
inputs = []
parent_ids = []
for _, row in df_assistant.iterrows():
    parent = df_prompter.loc[row.parent_id]
    inputs.append(parent.text)
    parent_ids.append(parent.parent_id)
df_assistant["instruction"] = inputs
df_assistant["parent_id"] = parent_ids
df_assistant = df_assistant[
["instruction", "output", "message_id", "parent_id", "lang", "rank"]
].rename(columns={"message_id": "id"})
# Exclude translation tasks, since that data contains anomalies
df_assistant2 = df_assistant[~df_assistant["instruction"].str.contains("翻訳")]
# Below: write out to a JSON file ---------------
learn_datas = []
for n in range(len(df_assistant2)):
    learn_data = {
        "instruction": str(df_assistant2.iloc[n, 0]),
        "input": "",
        "output": str(df_assistant2.iloc[n, 1]),
    }
    learn_datas.append(learn_data)
json_learn_data = json.dumps(learn_datas, indent=4, ensure_ascii=False)
with open('oasst1_ja_converted.json', 'w', encoding="utf-8") as f:
f.write(json_learn_data)
```
oasst1-ja-89k Repository
https://github.com/kunishou/oasst1-89k-ja
OpenAssistant/oasst1
https://huggingface.co/datasets/OpenAssistant/oasst1 |
mehnaazasad/arxiv_astro_co_ga | 2023-05-10T02:47:29.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"arxiv:1905.00075",
"region:us"
] | mehnaazasad | null | null | null | 0 | 16 | ---
license: mit
task_categories:
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for `arxiv_astro_co_ga`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a dataset consisting of titles and abstracts for all Cosmology and Galaxy Astrophysics arXiv articles to date (99,659 papers).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
```
{'title': 'Probing cluster formation under extreme conditions: massive star clusters in blue compact galaxies',
'abstract': ' The numerous and massive young star clusters in blue compact galaxies (BCGs) are used to investigate the properties of their hosts. We test whether BCGs follow claimed relations between cluster populations and their hosts, such as the the fraction of the total luminosity contributed by the clusters as function of the mean star formation rate density; the $V$ band luminosity of the brightest youngest cluster as related to the mean host star formation rate; and the cluster formation efficiency (i.e., the fraction of star formation happening in star clusters) versus the density of the SFR. We find that BCGs follow the trends, supporting a scenario where cluster formation and environmental properties of the host are correlated. They occupy, in all the diagrams, the regions of higher SFRs, as expected by the extreme nature of the starbursts operating in these systems. We find that the star clusters contribute almost to the 20 % of the UV luminosity of the hosts. We suggest that the BCG starburst environment has most likely favoured the compression and collapse of the giant molecular clouds, enhancing the local star formation efficiency, so that massive clusters have been formed. The estimated cluster formation efficiency supports this scenario. BCGs have a cluster formation efficiency comparable to luminous IR galaxies and spiral starburst nuclei (the averaged value is about 35 %) which is much higher than the 8 - 10 % reported for quiescent spirals and dwarf star-forming galaxies. '
}
```
### Data Fields
- `title`: Title of the paper
- `abstract`: The abstract of the paper
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for these splits.
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 79,727                       |
| Validation    | 9,966                        |
| Test          | 9,966                        |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The original dataset from which this subset was constructed can be found here: [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv).
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Various authors.
### Annotations
This dataset contains no annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No author information included in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original data is maintained by arXiv; huge thanks to the team for building and maintaining that dataset.
### Licensing Information
The arxiv_astro_co_ga dataset version 1.0.0 is released under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
[More Information Needed] |
orionweller/NevIR | 2023-05-26T14:53:16.000Z | [
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"negation",
"information_retrieval",
"IR",
"arxiv:2212.10002",
"arxiv:2305.07614",
"region:us"
] | orionweller | null | null | null | 1 | 16 | ---
license: mit
language:
- en
language_creators:
- crowdsourced
multilinguality:
- monolingual
pretty_name: NevIR
size_categories:
- 1K<n<10K
tags:
- negation
- information_retrieval
- IR
---
# Dataset Card for NevIR: Negation in Neural Information Retrieval
## Dataset Description
- **Repository:** [https://github.com/orionw/NevIR](https://github.com/orionw/NevIR)
- **Paper:** [https://arxiv.org/abs/2212.10002](https://arxiv.org/abs/2212.10002)
- **Point of Contact:** oweller@cs.jhu.edu
## Dataset Summary
Data from the paper: ["NevIR: Negation in Neural Information Retrieval"](https://arxiv.org/abs/2305.07614).
If you use this dataset, we would appreciate you citing our work:
```
@inproceedings{weller-et-al-2023-nevir,
title={NevIR: Negation in Neural Information Retrieval},
author={Weller, Orion and Lawrie, Dawn and Van Durme, Benjamin},
year={2023},
eprint={2305.07614},
archivePrefix={arXiv}
}
```
Please also consider citing the work that created the initial documents:
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
booktitle={Proceedings of EMNLP 2022},
year={2022}
}
```
From the paper: "Negation is a common everyday phenomena and has been a consistent area of weakness for language models (LMs). Although the Information Retrieval (IR) community has adopted LMs as the backbone of modern IR architectures, there has been little to no research in understanding how negation impacts neural IR. We therefore construct a straightforward benchmark on this theme: asking IR models to rank two documents that differ only by negation. We show that the results vary widely according to the type of IR architecture: cross-encoders perform best, followed by late-interaction models, and in last place are bi-encoder and sparse neural architectures. We find that most current information retrieval models do not consider negation, performing similarly or worse than randomly ranking. We show that although the obvious approach of continued fine-tuning on a dataset of contrastive documents containing negations increases performance (as does model size), there is still a large gap between machine and human performance."
### Supported Tasks and Leaderboards
The task is to rank each query in the pair correctly, where only one query is relevant to one document in the pair. There is no official leaderboard.
### Language
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{
"id": "1-2",
"WorkerId": 0,
"q1": "Which mayor did more vetoing than anticipated?",
"q2": "Which mayor did less vetoing than anticipated?",
"doc1": "In his first year as mayor, Medill received very little legislative resistance from the Chicago City Council. While he vetoed what was an unprecedented eleven City Council ordinances that year, most narrowly were involved with specific financial practices considered wasteful and none of the vetoes were overridden. He used his new powers to appoint the members of the newly constituted Chicago Board of Education and the commissioners of its constituted public library. His appointments were approved unanimously by the City Council.",
"doc2": "In his first year as mayor, Medill received very little legislative resistance from the Chicago City Council. While some expected an unprecedented number of vetoes, in actuality he only vetoed eleven City Council ordinances that year, and most of those were narrowly involved with specific financial practices he considered wasteful and none of the vetoes were overridden. He used his new powers to appoint the members of the newly constituted Chicago Board of Education and the commissioners of its constituted public library. His appointments were approved unanimously by the City Council."
}
```
### Data Fields
* `id`: unique ID for the pair, the first number indicates the document pair number in CondaQA and the second number indicates the PassageEditID in CondaQA.
* `WorkerId`: The ID for the Worker who created the queries for the pair.
* `q1`: the query that is only relevant to `doc1`
* `q2`: the query that is only relevant to `doc2`
* `doc1`: the original document, from CondaQA
* `doc2`: the edited document, from CondaQA
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
train_set = load_dataset("orionweller/nevir", split="train")
dev_set = load_dataset("orionweller/nevir", split="validation")
test_set = load_dataset("orionweller/nevir", split="test")
```
## Dataset Creation
Full details are in the paper: https://arxiv.org/abs/2305.07614
|
Nan-Do/code-search-net-java | 2023-05-15T00:57:06.000Z | [
"task_categories:text2text-generation",
"task_categories:summarization",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"code",
"java",
"CodeSearchNet",
"summary",
"region:us"
] | Nan-Do | null | null | null | 3 | 16 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: partition
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1595060592
num_examples: 495953
download_size: 440273784
dataset_size: 1595060592
license: apache-2.0
task_categories:
- text2text-generation
- summarization
- text-generation
language:
- en
tags:
- code
- java
- CodeSearchNet
- summary
pretty_name: Java CodeSearchNet with Summaries
---
# Dataset Card for "code-search-net-java"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-Java
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Java portion of CodeSearchNet, annotated with a summary column.
The code-search-net dataset includes open-source functions and their accompanying comments, collected from GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are written in Java.
### Data Splits
Train, test, validation labels are included in the dataset as a column.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful to train LLMs
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This dataset includes a summary column containing a short description of each function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries (some may still be present in the dataset).
### Licensing Information
Apache 2.0 |
Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat | 2023-05-17T11:31:57.000Z | [
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] | Fredithefish | null | null | null | 1 | 16 | ---
license: cc
task_categories:
- question-answering
language:
- en
---
# Instruction Tuning with GPT 4 RedPajama-Chat
This dataset has been converted from the <a href="https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM" target="_new">Instruction-Tuning-with-GPT-4</a> dataset for the purpose of fine-tuning the <a href="https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1" target="_new">RedPajama-INCITE-Chat-3B-v1</a> model.
## About Instruction-Tuning-with-GPT-4
English Instruction-Following Data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
### Usage and License Notices
The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
|
cgulse/alpaca-cleaned-tr | 2023-05-18T19:59:11.000Z | [
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-4.0",
"alpaca",
"instruction-finetuning",
"region:us"
] | cgulse | null | null | null | 0 | 16 | ---
license: cc-by-4.0
language:
- tr
tags:
- alpaca
- instruction-finetuning
pretty_name: Turkish Alpaca-cleaned
size_categories:
- 10K<n<100K
---
The Alpaca-cleaned dataset, machine translated into Turkish using facebook/nllb-200-3.3B.
Languages: Turkish |
Nan-Do/instructional_code-search-net-java | 2023-05-20T05:22:11.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"Java",
"Code Generation",
"Instruction Response",
"region:us"
] | Nan-Do | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 388087123
num_examples: 467959
download_size: 132263577
dataset_size: 388087123
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- Java
- Code Generation
- Instruction Response
pretty_name: Instructional Java Dataset
---
# Dataset Card for "instructional_code-search-net-java"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-java
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for Java.
The dataset contains two different kinds of tasks:
- Given a piece of code, generate a description of what it does.
- Given a description, generate a piece of code that fulfils it.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-java
### Annotations
The dataset includes instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries.
### Licensing Information
Apache 2.0 |
jyshbgde/cinescopeDataset | 2023-06-24T06:39:57.000Z | [
"task_categories:feature-extraction",
"language:en",
"license:openrail",
"region:us"
] | jyshbgde | null | null | null | 0 | 16 | ---
license: openrail
task_categories:
- feature-extraction
language:
- en
pretty_name: cinescope
---
|
whu9/xsum_postprocess | 2023-06-03T06:11:57.000Z | [
"region:us"
] | whu9 | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: source
dtype: string
- name: summary
dtype: string
- name: source_num_tokens
dtype: int64
- name: summary_num_tokens
dtype: int64
splits:
- name: train
num_bytes: 479957379
num_examples: 203788
- name: validation
num_bytes: 26334240
num_examples: 11313
- name: test
num_bytes: 26797491
num_examples: 11319
download_size: 338633607
dataset_size: 533089110
---
# Dataset Card for "xsum_postprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rungalileo/sst2_tiny_subset_32 | 2023-06-06T20:53:56.000Z | [
"region:us"
] | rungalileo | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: idx
dtype: int32
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 2433
num_examples: 32
- name: validation
num_bytes: 2433
num_examples: 32
- name: test
num_bytes: 2433
num_examples: 32
download_size: 11709
dataset_size: 7299
---
# Dataset Card for "sst2_tiny_subset_32"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Liuyu/sentiment-classification | 2023-06-09T09:44:34.000Z | [
"license:openrail",
"region:us"
] | Liuyu | null | null | null | 0 | 16 | ---
license: openrail
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 19594086
num_examples: 120000
- name: dev
num_bytes: 483785
num_examples: 3000
- name: test
num_bytes: 491522
num_examples: 3000
download_size: 667064
dataset_size: 20569393
---
|
DISCOX/DISCO-200K-high-quality | 2023-06-20T14:25:45.000Z | [
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"region:us"
] | DISCOX | null | null | null | 0 | 16 | ---
license: cc-by-4.0
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: video_duration_youtube_sec
dtype: float64
- name: preview_url_spotify
dtype: string
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: search_query_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: primary_artist_name_spotify
dtype: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 958015009
num_examples: 200000
download_size: 1154630326
dataset_size: 958015009
size_categories:
- 100K<n<1M
---
### Getting Started
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-200K-high-quality")
```
The dataset contains 200,000 high-quality samples from the DISCO-10M dataset found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
High-quality refers to the similarity filtering: all samples in this dataset have a similarity between search query and video title greater than 0.8, and a similarity between the Spotify preview and the YouTube video greater than 0.7.
## Dataset Structure
The dataset contains the following features:
```json
{
'video_url_youtube',
'video_title_youtube',
'track_name_spotify',
'video_duration_youtube_sec',
'preview_url_spotify',
'video_view_count_youtube',
'video_thumbnail_url_youtube',
'search_query_youtube',
'video_description_youtube',
'track_id_spotify',
'album_id_spotify',
'artist_id_spotify',
'track_duration_spotify_ms',
'primary_artist_name_spotify',
'track_release_date_spotify',
'explicit_content_spotify',
'similarity_duration',
'similarity_query_video_title',
'similarity_query_description',
'similarity_audio',
'audio_embedding_spotify',
'audio_embedding_youtube',
}
```
More details about the dataset can be found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
<!--
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
--> |
jerpint-org/HackAPrompt-Playground-Submissions | 2023-06-16T03:25:19.000Z | [
"region:us"
] | jerpint-org | null | null | null | 7 | 16 | Entry not found |
nanyy1025/pubmed_rct_20k | 2023-06-17T08:19:41.000Z | [
"license:openrail",
"region:us"
] | nanyy1025 | null | null | null | 1 | 16 | ---
license: openrail
---
|
richardr1126/spider-context-instruct | 2023-07-18T17:47:59.000Z | [
"source_datasets:spider",
"language:en",
"license:cc-by-4.0",
"text-to-sql",
"SQL",
"Spider",
"fine-tune",
"region:us"
] | richardr1126 | null | null | null | 1 | 16 | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
pretty_name: Spider Context Instruct
tags:
- text-to-sql
- SQL
- Spider
- fine-tune
dataset_info:
features:
- name: db_id
dtype: string
- name: text
dtype: string
---
# Dataset Card for Spider Context Instruct
### Dataset Summary
Spider is a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset was created to fine-tune LLMs in a `### Instruction:` and `### Response:` format with database context.
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
``` |
KaiLv/UDR_BREAK | 2023-06-21T12:23:29.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: question_text
dtype: string
- name: decomposition
dtype: string
- name: operators
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 12757200
num_examples: 44321
- name: validation
num_bytes: 2231632
num_examples: 7760
- name: test
num_bytes: 894558
num_examples: 8069
download_size: 5175505
dataset_size: 15883390
---
# Dataset Card for "UDR_BREAK"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FreedomIntelligence/alpaca-gpt4-portuguese | 2023-08-06T08:10:58.000Z | [
"region:us"
] | FreedomIntelligence | null | null | null | 1 | 16 | The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
goendalf666/sql-chat-instructions | 2023-06-26T22:08:52.000Z | [
"region:us"
] | goendalf666 | null | null | null | 7 | 16 | ---
dataset_info:
features:
- name: training_input
dtype: string
splits:
- name: train
num_bytes: 20267285
num_examples: 78577
download_size: 6323963
dataset_size: 20267285
---
# Dataset Card for "sql-chat-instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JourneyDB/JourneyDB | 2023-08-10T14:19:04.000Z | [
"arxiv:2307.00716",
"region:us"
] | JourneyDB | null | null | null | 24 | 16 | ---
extra_gated_prompt: "You have carefully read the [Terms of Usage](https://journeydb.github.io/assets/Terms_of_Usage.html) and agree with the listed terms."
extra_gated_fields:
First Name: text
Last Name: text
Affiliation: text
I agree with our JourneyDB usage terms and I will obey the terms when using the JourneyDB dataset: checkbox
task_categories:
- image-to-text
language:
- en
size_categories:
- 1M<n<10M
---
# JourneyDB
[[Project Page]](https://journeydb.github.io) [[Paper]](https://arxiv.org/abs/2307.00716) [[Code]](https://github.com/JourneyDB/JourneyDB) [[HuggingFace]](https://huggingface.co/datasets/JourneyDB/JourneyDB) [[OpenDataLab]]()

## Dataset Description
### Summary
**JourneyDB** is a large-scale generated image understanding dataset that contains **4,429,295** high-resolution Midjourney images, annotated with corresponding **text prompt**, **image caption** and **visual question answering**.
### Supported Tasks
**JourneyDB** supports **4** downstream tasks, i.e. **Prompt Inversion**, **Style Retrieval**, **Image Caption**, and **Visual Question Answering**. We evaluate many existing methods on these tasks and provide a comprehensive benchmark. Please see our [Paper](https://arxiv.org/abs/2307.00716) for more details.
## Dataset Details
### Data Collection
For each image instance, we acquire the corresponding text prompt used to generate the image with Midjourney. Furthermore, we employ GPT-3.5 to generate the caption and VQA ground truth.

### Data Instances
We provide several examples to show the contents of each dataset instance.

### Data Splits
We provide detailed statistics for each split in the following table. We randomly split the whole dataset at a ratio of roughly 20:1 to obtain the training and validation sets. The training set contains 4,189,737 labeled images and 1,385,317 labeled prompts. The validation set contains 234,156 images and 82,093 prompts. We additionally sample a testing set for manual filtering, containing 5,402 images and 5,171 prompts.
| | Image | Prompt | Labeled Image | Labeled Prompt | Style QA | Content QA |
|----------------|:---------:|:---------:|:-------------:|:--------------:|:---------:|:----------:|
| Training Set | 4,453,193 | 1,643,375 | 4,189,737 | 1,385,317 | 7,056,394 | 8,775,971 |
| Validation Set | 234,156 | 82,093 | 234,156 | 82,093 | 311,569 | 374,310 |
| Testing Set | 5,402 | 5,171 | 5,402 | 5,171 | 10,040 | 11,369 |
| Total | 4,692,751 | 1,730,639 | 4,429,295 | 1,472,581 | 7,378,003 | 9,161,650 |
## Acquirements
### License
The JourneyDB dataset is available under the customised [Terms of Usage](./assets/Terms_of_Usage.md).
### Citation
```
@misc{pan2023journeydb,
title={JourneyDB: A Benchmark for Generative Image Understanding},
author={Junting Pan and Keqiang Sun and Yuying Ge and Hao Li and Haodong Duan and Xiaoshi Wu and Renrui Zhang and Aojun Zhou and Zipeng Qin and Yi Wang and Jifeng Dai and Yu Qiao and Hongsheng Li},
year={2023},
eprint={2307.00716},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
[Junting Pan](https://junting.github.io)\*, [Keqiang Sun](https://keqiangsun.github.io)\*, [Yuying Ge](https://geyuying.github.io), [Hao Li](https://cpsxhao.github.io), [Haodong Duan](https://kennymckormick.github.io), [Xiaoshi Wu](https://github.com/tgxs002), [Renrui Zhang](https://github.com/ZrrSkywalker), [Aojun Zhou](https://scholar.google.com/citations?user=cC8lXi8AAAAJ&hl=en), [Zipeng Qin](https://www.linkedin.cn/incareer/in/zipeng-bruce-qin-846a65119), [Yi Wang](https://shepnerd.github.io), [Jifeng Dai](https://jifengdai.org), [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao/), [Hongsheng Li](https://www.ee.cuhk.edu.hk/~hsli/)<sup>+</sup>
(\* equal contribution, <sup>+</sup> corresponding author)
### Contact
If you have any problem or suggestion, please feel free to open an issue or send emails to the contributors. |
Ibrahim-Alam/Tweet_Sentiment_pos_neg | 2023-06-29T03:19:58.000Z | [
"region:us"
] | Ibrahim-Alam | null | null | null | 0 | 16 | Entry not found |
TinyPixel/fish-1 | 2023-07-03T07:39:24.000Z | [
"region:us"
] | TinyPixel | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4680929994
num_examples: 2840090
download_size: 2704444515
dataset_size: 4680929994
---
# Dataset Card for "fish-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
npvinHnivqn/VietnameseDictionary | 2023-07-08T09:13:42.000Z | [
"size_categories:20K<n<40K",
"language:vi",
"region:us"
] | npvinHnivqn | null | null | null | 1 | 16 | ---
language:
- vi
size_categories:
- 20K<n<40K
---
- This dataset includes ~30k Vietnamese words and definitions |
alan-23/HealthAssistant115 | 2023-07-05T11:36:11.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"medical",
"biology",
"region:us"
] | alan-23 | null | null | null | 3 | 16 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- medical
- biology
pretty_name: Health Assistant
size_categories:
- n<1K
---
This dataset was created for an experiment on fine-tuning an LLM. The questions were created by ChatGPT in response to the prompt "Give me questions a person can ask about for healthy life".
The answers were created via the OpenAI API. |
Nacholmo/keep_darkest_and_lightest | 2023-07-06T19:54:55.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | Nacholmo | null | null | null | 0 | 16 | ---
license: creativeml-openrail-m
dataset_info:
features:
- name: image
dtype: image
- name: generated_caption
dtype: string
- name: conditioning_image
dtype: image
splits:
- name: train
num_bytes: 2923092683.25
num_examples: 7999
download_size: 2899330053
dataset_size: 2923092683.25
---
|
rdpahalavan/CIC-IDS2017 | 2023-07-22T21:42:04.000Z | [
"task_categories:text-classification",
"task_categories:tabular-classification",
"size_categories:100M<n<1B",
"license:apache-2.0",
"Network Intrusion Detection",
"Cybersecurity",
"Network Packets",
"CIC-IDS2017",
"region:us"
] | rdpahalavan | null | null | null | 0 | 16 | ---
license: apache-2.0
task_categories:
- text-classification
- tabular-classification
size_categories:
- 100M<n<1B
tags:
- Network Intrusion Detection
- Cybersecurity
- Network Packets
- CIC-IDS2017
---
We have developed a Python package as a wrapper around the Hugging Face Hub and the Hugging Face Datasets library to make this dataset easy to access.
# NIDS Datasets
The `nids-datasets` package provides functionality to download and utilize specially curated and extracted datasets from the original UNSW-NB15 and CIC-IDS2017 datasets. These datasets, which initially were only flow datasets, have been enhanced to include packet-level information from the raw PCAP files. The dataset contains both packet-level and flow-level data for over 230 million packets, with 179 million packets from UNSW-NB15 and 54 million packets from CIC-IDS2017.
## Installation
Install the `nids-datasets` package using pip:
```shell
pip install nids-datasets
```
Import the package in your Python script:
```python
from nids_datasets import Dataset, DatasetInfo
```
## Dataset Information
The `nids-datasets` package currently supports two datasets: [UNSW-NB15](https://research.unsw.edu.au/projects/unsw-nb15-dataset) and [CIC-IDS2017](https://www.unb.ca/cic/datasets/ids-2017.html). Each of these datasets contains a mix of normal traffic and different types of attack traffic, which are identified by their respective labels. The UNSW-NB15 dataset has 10 unique class labels, and the CIC-IDS2017 dataset has 24 unique class labels.
- UNSW-NB15 Labels: 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis'
- CIC-IDS2017 Labels: 'BENIGN', 'FTP-Patator', 'SSH-Patator', 'DoS slowloris', 'DoS Slowhttptest', 'DoS Hulk', 'Heartbleed', 'Web Attack – Brute Force', 'Web Attack – XSS', 'Web Attack – SQL Injection', 'Infiltration', 'Bot', 'PortScan', 'DDoS', 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis', 'DoS GoldenEye'
## Subsets of the Dataset
Each dataset consists of four subsets:
1. Network-Flows - Contains flow-level data.
2. Packet-Fields - Contains packet header information.
3. Packet-Bytes - Contains packet byte information in the range (0-255).
4. Payload-Bytes - Contains payload byte information in the range (0-255).
Each subset contains 18 files (except Network-Flows, which has one file), where the data is stored in parquet format. In total, this package provides access to 110 files. You can choose to download all subsets or select specific subsets or specific files depending on your analysis requirements.
## Getting Information on the Datasets
The `DatasetInfo` function provides a summary of the dataset in a pandas dataframe format. It displays the number of packets for each class label across all 18 files in the dataset. This overview can guide you in selecting specific files for download and analysis.
```python
df = DatasetInfo(dataset='UNSW-NB15') # or dataset='CIC-IDS2017'
df
```
## Downloading the Datasets
The `Dataset` class allows you to specify the dataset, subset, and files that you are interested in. The specified data will then be downloaded.
```python
dataset = 'UNSW-NB15' # or 'CIC-IDS2017'
subset = ['Network-Flows', 'Packet-Fields', 'Payload-Bytes'] # or 'all' for all subsets
files = [3, 5, 10] # or 'all' for all files
data = Dataset(dataset=dataset, subset=subset, files=files)
data.download()
```
The directory structure after downloading files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
└───Payload-Bytes
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
You can then load the parquet files using pandas:
```python
import pandas as pd
df = pd.read_parquet('UNSW-NB15/Packet-Fields/Packet_Fields_File_10.parquet')
```
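As an illustrative sketch (the file numbers and paths below mirror the download example and are assumptions, not part of the package's API), several downloaded parquet files can be combined into a single DataFrame:

```python
import os
import pandas as pd

# illustrative: the same file numbers used in the download example above
files = [3, 5, 10]
paths = [f"UNSW-NB15/Packet-Fields/Packet_Fields_File_{i}.parquet" for i in files]

# read only the files that are actually present on disk, then concatenate
frames = [pd.read_parquet(p) for p in paths if os.path.exists(p)]
df = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
```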
## Merging Subsets
The `merge()` method allows you to merge all data of each packet across all subsets, providing both flow-level and packet-level information in a single file.
```python
data.merge()
```
By default, the `merge()` method uses the details specified when instantiating the `Dataset` class. You can also pass a list of subsets via `subset=` and a list of files via `files=`.
The directory structure after merging files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
└───Network-Flows+Packet-Fields+Payload-Bytes
├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
└───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
```
## Extracting Bytes
The Packet-Bytes and Payload-Bytes subsets contain the first 1500-1600 bytes of each packet. To retrieve more bytes (up to 65535) from these subsets, use the `bytes()` method. This method requires files from the Packet-Fields subset to operate. You can specify how many bytes to extract with the `max_bytes` parameter.
```python
data.bytes(payload=True, max_bytes=2500)
```
Use `packet=True` to extract packet bytes. You can also pass a list of files via `files=` to retrieve bytes from.
The directory structure after extracting bytes:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
├───Network-Flows+Packet-Fields+Payload-Bytes
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
│ └───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
│
└───Payload-Bytes-2500
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
## Reading the Datasets
The `read()` method allows you to read files using Hugging Face's `load_dataset` method, one subset at a time. The `dataset` and `files` parameters are optional if the same details were used to instantiate the `Dataset` class.
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2])
```
The `read()` method returns a dataset that you can convert to a pandas dataframe or save to a CSV, parquet, or any other desired file format:
```python
df = dataset.to_pandas()
dataset.to_csv('file_path_to_save.csv')
dataset.to_parquet('file_path_to_save.parquet')
```
For scenarios where you want to process one packet at a time, you can use the `stream=True` parameter:
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2], stream=True)
print(next(iter(dataset)))
```
## Notes
The size of these datasets is large, and depending on the subset(s) selected and the number of bytes extracted, the operations can be resource-intensive. Therefore, it's recommended to ensure you have sufficient disk space and RAM when using this package. |
liyucheng/vua20 | 2023-07-08T09:42:50.000Z | [
"license:cc-by-2.0",
"region:us"
] | liyucheng | null | null | null | 0 | 16 | ---
license: cc-by-2.0
---
|
DavidMOBrien/small_benchmark_webmarket-v2 | 2023-07-12T02:41:41.000Z | [
"region:us"
] | DavidMOBrien | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: before
dtype: string
- name: after
dtype: string
- name: loc
dtype: int64
- name: repo
dtype: string
splits:
- name: train
num_bytes: 28849
num_examples: 23
download_size: 21446
dataset_size: 28849
---
# Dataset Card for "small_benchmark_webmarket-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
npvinHnivqn/EnglishDictionary | 2023-07-15T15:53:25.000Z | [
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:en",
"license:afl-3.0",
"region:us"
] | npvinHnivqn | null | null | null | 0 | 16 | ---
license: afl-3.0
task_categories:
- token-classification
language:
- en
size_categories:
- 100K<n<1M
--- |
Smoked-Salmon-s/empathetic_dialogues_ko | 2023-08-04T03:01:28.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:apache-2.0",
"region:us"
] | Smoked-Salmon-s | null | null | null | 2 | 16 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- ko
size_categories:
- 10K<n<100K
---
# Dataset Card for "한국어 일상 속 공감형 대화 데이터셋(멀티-턴)" (Korean Empathetic Everyday Dialogue Dataset, multi-turn)
## Dataset Summary
- This dataset was built for the final project of Team 12 ("Smoked Salmons", NLP track) in the 5th cohort of boostCamp AI Tech.
- It contains dialogues between a user and a chatbot across a variety of everyday situations.
- It is synthetic data generated with GPT-4 and GPT-3.5-turbo, composed of single-turn, 2-turn, and 3-turn conversations.
- Each answer takes the form **[empathetic expression - general conversation - related question]**.
## Generation Prompt Example (GPT-3.5-turbo)
```
Take a close look at the following example and Conditions. Create nine sessions that each of the session is ongoing conversation about a single topic.
[Conditions]
- The questioner asks a question of appropriate length (1-2 lines) and you respond with an appropriate answer.
- The answer should be a complete sentence or two, but the first sentence should be sympathetic to the questioner's question and situation, and should be answered in a friendly and non-rude manner.
- The second sentence should be free-form, but the last sentence MUST end with an additional question related to the question and answer.
- The total length of each question and answer combined should be no more than nine words
- If the question is positive, you should be excited, upbeat, and positive in tone.
- If the question is negative, you should be warm, sensitive, and caring in tone.
- You may offer solutions to fit the situation.
- All answers should be written in a way that makes the person feel empathetic and comforted by the paired question.
- The content should be based on simple, everyday situations, but with enough variety to include specific situations (e.g., a counselor and a client, a student talking to a teacher about a problem, a conversation between friends).
- The conversations should be as generic and diverse as possible. However, try to include many topics related to psychology and everyday concerns.
- All responses should be polite, respectful, caring, warm, understanding, and compassionate in tone and content.
- Do not generate topic or word about 'stress'.
- You should answer question 1 with answer 1, ask question 2 related to answer 1, answer question 2 with answer 2, ask question 3 related to answer 2, and answer question 3 with answer 3.
- Do not re-mention the question in the following answer.
[Example Session]
(Session 1)
Question 1: I had a fight with my boyfriend and I'm so angry I can't calm down.
Answer 1: Oh no, I'm so sorry to hear that. Relationship conflict is always tough, and it's completely understandable to feel angry and upset. Can I ask what the fight was about? Talking about it could help you process your feelings.
Question 2: He constantly 10 minutes late for appointments and never say sorry.
Answer 2: Oh, that's so frustrating! Punctuality is a matter of respect for other people's time, and it sounds like your boyfriend isn't showing you that respect. You're allowed to be upset about that. How long has this been going on? Have you talked to him about it before, or is this the first time you've brought it up?
Question 3: It keeps happening. This is the fourth time in the past month that I've recognized it. Does that make sense?
Answer 3: Yes, that totally makes sense! Four times in a month is way too much. It's not just an occasional, forgivable mistake at that point. It's becoming a pattern of behavior that's disrespectful of your time. Have you told him how much it bothers you, or has he just brushed it off when you've mentioned it?
(Session 2)
Question 1:
Answer 1:
Question 2:
Answer 2:
.....
(Session 9)
Each session must be about one topic and has three question-answer conversation pair. nine sessions must have different topics. Create as many as possible sessions you can. Examples are examples, don't copy them.
create it in Korean. please create nine sessions.
```
## Links
- repository : [boostcampaitech5/level3_nlp_finalproject-nlp-12](https://github.com/boostcampaitech5/level3_nlp_finalproject-nlp-12)
- huggingface : [Smoked-Salmon-s/empathetic_dialogues_ko](https://huggingface.co/datasets/Smoked-Salmon-s/empathetic_dialogues_ko)
## License
- Apache-2.0
|
zhengxuanzenwu/vicuna-eval-with-gpt4 | 2023-07-25T05:39:46.000Z | [
"region:us"
] | zhengxuanzenwu | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 155143
num_examples: 80
download_size: 103930
dataset_size: 155143
---
# Dataset Card for "vicuna-eval-with-gpt4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zhengxuanzenwu/koala-eval-with-gpt4 | 2023-07-25T05:39:48.000Z | [
"region:us"
] | zhengxuanzenwu | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 403504
num_examples: 180
download_size: 257173
dataset_size: 403504
---
# Dataset Card for "koala-eval-with-gpt4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fujiki/llm-japanese-dataset_wikipedia | 2023-07-25T05:55:42.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | fujiki | null | null | null | 0 | 16 | ---
license: cc-by-sa-3.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 617413933
num_examples: 1347381
download_size: 335053357
dataset_size: 617413933
---
- This dataset is a subset of [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset) only including `wikipedia` task.
- Please also refer to the original dataset: [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset) |
HydraLM/math_dataset_alpaca | 2023-07-27T18:43:34.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 71896969
num_examples: 49999
download_size: 34712339
dataset_size: 71896969
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "math_dataset_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/companies_azaria_mitchell | 2023-07-28T20:11:09.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 80207.2
num_examples: 960
- name: test
num_bytes: 20051.8
num_examples: 240
download_size: 34846
dataset_size: 100259.0
---
# Dataset Card for "companies_azaria_mitchell"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OneFly7/llama2-sst2-fine-tuning | 2023-08-08T07:03:26.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | OneFly7 | null | null | null | 1 | 16 | ---
dataset_info:
features:
- name: label_text
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23202578
num_examples: 67349
- name: validation
num_bytes: 334716
num_examples: 872
download_size: 4418625
dataset_size: 23537294
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "llama2-sst2-finetuning"
## Dataset Description
The Llama2-sst2-fine-tuning dataset is designed for supervised fine-tuning of LLaMA V2 on the GLUE SST-2 sentiment classification task.
We provide two subsets: training and validation.
To make fine-tuning effective, we convert the data into the LLaMA V2 supervised fine-tuning prompt template, so that each example follows this format:
```
<s>[INST] <<SYS>>
{System prompt}
<</SYS>>
{User prompt} [/INST] {Label} </s>.
```
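As a minimal sketch of how one raw SST-2 example might be rendered into this template (the system prompt and example sentence below are illustrative assumptions, not taken from the dataset):

```python
def to_llama2_prompt(system_prompt: str, user_prompt: str, label: str) -> str:
    # Render one example into the Llama 2 chat template shown above.
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_prompt} [/INST] {label} </s>"
    )

text = to_llama2_prompt(
    "Classify the sentiment of the sentence as positive or negative.",  # assumed system prompt
    "a charming and often affecting journey",  # illustrative SST-2-style sentence
    "positive",
)
print(text)
```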
The feasibility of this dataset has been tested in supervised fine-tuning on the meta-llama/Llama-2-7b-hf model.
Note: for simplicity, we have retained only one new data column (`text`).
## Other Useful Links
- [Get Llama 2 Prompt Format Right](https://www.reddit.com/r/LocalLLaMA/comments/155po2p/get_llama_2_prompt_format_right/)
- [Fine-Tune Your Own Llama 2 Model in a Colab Notebook](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32)
- [Instruction fine-tuning Llama 2 with PEFT’s QLoRa method](https://medium.com/@ud.chandra/instruction-fine-tuning-llama-2-with-pefts-qlora-method-d6a801ebb19)
- [GLUE SST2 Dataset](https://www.tensorflow.org/datasets/catalog/glue#gluesst2)
<!--[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)--> |
krenerd/korean-newstext-dump | 2023-07-31T10:37:01.000Z | [
"region:us"
] | krenerd | null | null | null | 1 | 16 | Entry not found |
lighteval/hendrycks_ethics | 2023-07-31T14:40:47.000Z | [
"region:us"
] | lighteval | The ETHICS dataset is a benchmark that spans concepts in justice, well-being,
duties, virtues, and commonsense morality. Models predict widespread moral
judgments about diverse text scenarios. This requires connecting physical and
social world knowledge to value judgements, a capability that may enable us
to steer chatbot outputs or eventually regularize open-ended reinforcement
learning agents. | @article{hendrycks2021ethics
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
} | null | 0 | 16 | Entry not found |
songlab/human_variants | 2023-10-09T11:11:20.000Z | [
"region:us"
] | songlab | null | null | null | 0 | 16 | # Human variants
A curated set of variants from four sources: ClinVar, COSMIC, OMIM, and gnomAD.
Predictions for the methods benchmarked in the GPN-MSA paper can be [downloaded from here](https://huggingface.co/datasets/songlab/human_variants/resolve/main/variants_and_preds.parquet).
Functional annotations can be [downloaded from here](https://huggingface.co/datasets/songlab/human_variants/resolve/main/functional_annotations.zip).
## Data sources
**ClinVar**:
Missense variants considered "Pathogenic" by human labelers.
**COSMIC**:
Somatic missense variants with a frequency at least 0.1% in cancer samples (whole-genome and whole-exome sequencing only).
**OMIM**:
Regulatory variants considered "Pathogenic" by human labelers, curated in [this paper](https://doi.org/10.1016/j.ajhg.2016.07.005).
**gnomAD**:
All common variants (MAF > 5%) as well as an equally-sized subset of rare variants (MAC=1). Only autosomes are included.
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("songlab/human_variants", split="test")
```
Subset - ClinVar Pathogenic vs. gnomAD common (missense) (can specify `num_proc` to speed up):
```python
dataset = dataset.filter(lambda v: v["source"]=="ClinVar" or (v["label"]=="Common" and "missense" in v["consequence"]))
```
Subset - COSMIC frequent vs. gnomAD common (missense):
```python
dataset = dataset.filter(lambda v: v["source"]=="COSMIC" or (v["label"]=="Common" and "missense" in v["consequence"]))
```
Subset - OMIM Pathogenic vs. gnomAD common (regulatory):
```python
cs = ["5_prime_UTR", "upstream_gene", "intergenic", "3_prime_UTR", "non_coding_transcript_exon"]
dataset = dataset.filter(lambda v: v["source"]=="OMIM" or (v["label"]=="Common" and "missense" not in v["consequence"] and any([c in v["consequence"] for c in cs])))
```
Subset - gnomAD rare vs. gnomAD common:
```python
dataset = dataset.filter(lambda v: v["source"]=="gnomAD")
``` |
renumics/bengaliai-competition-features-embeddings | 2023-08-03T10:13:31.000Z | [
"license:unknown",
"region:us"
] | renumics | null | null | null | 0 | 16 | ---
license: unknown
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: split
dtype: string
- name: audio_rms_mean
dtype: float32
- name: audio_rms_max
dtype: float32
- name: audio_rms_std
dtype: float32
- name: audio_spectral_flatness_mean
dtype: float32
- name: audio_length_s
dtype: float32
- name: text_embedding
sequence: float32
- name: audio_embedding
sequence: float32
- name: audio
dtype: string
---
# BengaliAI Competition Embeddings and Features |
jxie/esol | 2023-08-04T22:25:16.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: float64
splits:
- name: train_0
num_bytes: 31089
num_examples: 902
- name: val_0
num_bytes: 3828
num_examples: 113
- name: test_0
num_bytes: 4016
num_examples: 113
- name: train_1
num_bytes: 31354
num_examples: 902
- name: val_1
num_bytes: 3731
num_examples: 113
- name: test_1
num_bytes: 3848
num_examples: 113
- name: train_2
num_bytes: 31095
num_examples: 902
- name: val_2
num_bytes: 3869
num_examples: 113
- name: test_2
num_bytes: 3969
num_examples: 113
download_size: 75468
dataset_size: 116799
---
# Dataset Card for "esol"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jxie/freesolv | 2023-08-04T22:25:28.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: float64
splits:
- name: train_0
num_bytes: 13996
num_examples: 513
- name: val_0
num_bytes: 1742
num_examples: 64
- name: test_0
num_bytes: 1806
num_examples: 65
- name: train_1
num_bytes: 13790
num_examples: 513
- name: val_1
num_bytes: 1781
num_examples: 64
- name: test_1
num_bytes: 1973
num_examples: 65
- name: train_2
num_bytes: 14010
num_examples: 513
- name: val_2
num_bytes: 1787
num_examples: 64
- name: test_2
num_bytes: 1747
num_examples: 65
download_size: 38980
dataset_size: 52632
---
# Dataset Card for "freesolv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jaygala223/38-cloud-train-only-v1 | 2023-08-07T08:52:55.000Z | [
"region:us"
] | jaygala223 | null | null | null | 0 | 16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 757246236.0
num_examples: 8400
download_size: 754389599
dataset_size: 757246236.0
---
# Dataset Card for "38-cloud-train-only-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyouisen/aclue | 2023-08-21T12:15:32.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:zh",
"license:cc-by-nc-4.0",
"llm",
"Ancient Chinese",
"Evaluation",
"chinese",
"region:us"
] | tyouisen | The Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models on understanding ancient Chinese. | \ | null | 1 | 16 | ---
license: cc-by-nc-4.0
task_categories:
- multiple-choice
- question-answering
language:
- zh
tags:
- llm
- Ancient Chinese
- Evaluation
- chinese
pretty_name: ACLUE
size_categories:
- 1M<n<10M
---
# Dataset Card for ACLUE
- **Homepage:** [https://github.com/isen-zhang/ACLUE](https://github.com/isen-zhang/ACLUE)
- **Repository:** [https://huggingface.co/datasets/tyouisen/aclue](https://huggingface.co/datasets/tyouisen/aclue)
- **Paper:** []()
- **Leaderboard:** [https://github.com/isen-zhang/ACLUE](https://github.com/isen-zhang/ACLUE)
### 简介 (Introduction)
Ancient Chinese Language Understanding Evaluation (ACLUE) 是一个面向古代汉语的评估基准,旨在帮助评估大型语言模型在古代汉语上的表现。
The Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models (LLMs) on understanding ancient Chinese.
### 数据 (Data)
该基准测试包含15个任务,涵盖了各个领域,包括词汇、句法、语义、推理和知识。我们为这15个任务提供了开发集和测试集数据,开发集中有5个问题,而测试集中则有100多个问题。我们鼓励研究人员使用ACLUE来测试和提升其模型在古代汉语语言理解方面的能力。ACLUE的任务取自人工挑选的公开资源和自动生成的古代汉语语料库。这些问题涵盖了从夏朝(公元前2070年)到明朝(公元1368年)的广泛时间范围。ACLUE对所有任务都采用了多项选择题的形式。
The benchmark comprises 15 tasks spanning various domains, including lexical, syntactic, semantic, inference, and knowledge. We provide development and test dataset for each of 15 tasks, with 5 questions in development set and 100+ quesitons in test set. We encourage researchers to use ACLUE to test and enhance their models' abilities in ancient Chinese language understanding. ACLUE's tasks are derived from a combination of manually curated questions from publicly available resources, and automatic generated questions from classical Chinese language corpora. The range of questions span from the Xia dynasty (2070 BCE) to the Ming dynasty (1368 CE). ACLUE employs a multiple-choice question format for all tasks.
### 数据实例( Data Instances)
数据集中的每个问题都是一个包含4个选项的多项选择题,其中只有一个选项是正确答案。以下是两个示例:
Each question in the dataset is a multiple-choice questions with 4 choices and only one choice as the correct answer. Here are two examples:
```
以下是关于{古诗词曲鉴赏}的单项选择题,请直接给出正确答案的选项。
题目:《木兰诗--北朝民歌》唧唧复唧唧,木兰当户织。不闻机杼声,唯闻女叹息。问女何所思,问女何所忆。女亦无所思,女亦无所忆。昨夜见军帖,可汗大点兵,军书十二卷,卷卷有爷名。阿爷无大儿,木兰无长兄,愿为市鞍马,从此替爷征。东市买骏马,西市买鞍鞯,南市买辔头,北市买长鞭。旦辞爷娘去,暮宿黄河边,不闻爷娘唤女声,但闻黄河流水鸣溅溅。旦辞黄河去,暮至黑山头,不闻爷娘唤女声,但闻燕山胡骑鸣啾啾。万里赴戎机,关山度若飞。朔气传金柝,寒光照铁衣。将军百战死,壮士十年归。归来见天子,天子坐明堂。策勋十二转,赏赐百千强。可汗问所欲,木兰不用尚书郎,愿驰千里足,送儿还故乡。爷娘闻女来,出郭相扶将;阿姊闻妹来,当户理红妆;小弟闻姊来,磨刀霍霍向猪羊。开我东阁门,坐我西阁床。脱我战时袍,著我旧时裳。当窗理云鬓,对镜帖花黄。出门看火伴,火伴皆惊忙:同行十二年,不知木兰是女郎。雄兔脚扑朔,雌兔眼迷离;双兔傍地走,安能辨我是雄雌?下列对这首诗的理解和分析,不正确的一项是 ()
A. 《木兰诗》是南北朝时期的一首长篇叙事民歌,风格刚健质朴。全诗以“木兰是女郎”来构思木兰的传奇故事,富有浪漫色彩。
B. “愿为市鞍马”的“市”是“市场”的意思,“万里赴戎机”的“戎机”是“战事”的意思。
C. 木兰“不用尚书郎”而愿“还故乡”固然有对家乡的眷恋,但也有自己女儿身秘密的因素。
D. “朔气传金柝,寒光照铁衣”运用对偶手法,描写了木兰在边塞艰苦的军旅生活。
答案是:B
```
```
题目:《虞美人》李煜。春花秋月何时了?往事知多少。小楼昨夜又东风,故国不堪回首月明中。雕栏玉砌应犹在,只是朱颜改。问君能有几多愁?恰似一江春水向东流。对《虞美人》的赏析,不恰当的一项是()
A. 词作从眼前景物入手,生发联想和想像,追怀昔日帝王生活,描摹了一幅幅鲜活的画面,隐晦地表达出叛逆之情,惹恼了宋太宗,铸成了词人悲惨结局。
B. 词作以实虚相间的手法来绘景、抒情、达意,忽而写眼前,忽而写想像。
C. 《虞美人》乃李煜绝笔词
D. 《虞美人》以其形式别致给人美感愉悦。
答案是:
```
以下列出了任务的类别、实例数量、问题平均长度以及任务的来源:
The category, number of instances, average length of the question, and origin of the tasks are provided below:
| Task | Total Q. | Avg. len |Task (zh) | Category | Origin |
|-------------------------------|------|------|-----------------------------------|----------|-----------|
| Named entity recognition | 500 | 138 | 古汉语命名体识别 | lexical | generated |
| Polysemy resolution | 500 | 116 | 古文单字多义 | lexical | generated |
| Homographic character resolution | 500 | 137 | 通假字 | lexical | generated |
| Sentence segmentation | 500 | 210 | 古文断句 | syntactic| generated |
| Couplet prediction | 500 | 62 | 对联预测 | semantic | generated |
| Poetry context prediction | 500 | 77 | 古诗词上下句预测 | semantic | generated |
| Poetry sentiment analysis | 500 | 60 | 诗词情感分类 | inference| generated |
| Poem quality estimation | 406 | 118 | 古诗词质量评估 | inference| generated |
| Ancient Chinese medical | 211 | 38 | 医古文 | knowledge| collected |
| Ancient Chinese literature | 160 | 44 | 古代文学知识 | knowledge| collected |
| Traditional Chinese culture | 136 | 59 | 国学常识 | knowledge| collected |
| Poetry appreciation | 103 | 258 | 古诗词曲鉴赏 | inference| collected |
| Basic ancient Chinese | 249 | 52 | 基础古汉语知识 | knowledge| collected |
| Reading comprehension | 101 | 982 | 古文阅读理解 | inference| collected |
| Ancient Chinese phonetics | 101 | 50 | 古音学 | knowledge| collected |
#### 加载数据 (Load data)
```python
task_list = ['polysemy_resolution',
'poetry_sentiment_analysis',
'named_entity_recognition',
'basic_ancient_chinese',
'poetry_context_prediction',
'sentence_segmentation',
'couplet_prediction',
'poetry_appreciate',
'ancient_chinese_culture',
'ancient_phonetics',
'homographic_character_resolution',
'ancient_literature',
'ancient_medical',
'poetry_quality_assessment',
'reading_comprehension']
from datasets import load_dataset
dataset = {k: load_dataset(r"tyouisen/aclue", k) for k in task_list}
# Print an example:
print(dataset['polysemy_resolution']['test'][0])
```
### 引用 (Citation)
```
```
### 许可证 (License)
ACLUE数据集采用:(The ACLUE dataset is licensed under a:)
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
|
nhankins/legal_contracts | 2023-08-17T15:19:10.000Z | [
"region:us"
] | nhankins | null | null | null | 0 | 16 | Entry not found |
ProgramComputer/voxceleb | 2023-09-16T08:50:12.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:image-classification",
"task_categories:video-classification",
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"arxiv:1706.08612",
"doi:10.57967/hf/0999",
"region:us"
] | ProgramComputer | null | null | null | 0 | 16 | ---
task_categories:
- automatic-speech-recognition
- audio-classification
- image-classification
- video-classification
size_categories:
- 100K<n<1M
license: cc-by-4.0
---
## Dataset Description
- **Homepage:** [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/)
# Multipart Zips
Already-joined zips are provided for convenience; the files listed below are *NOT* part of the original datasets:
vox2_mp4_1.zip - vox2_mp4_6.zip
vox2_aac_1.zip - vox2_aac_2.zip
# Joining Zip
```
cat vox1_dev* > vox1_dev_wav.zip
```
```
cat vox2_dev_aac* > vox2_aac.zip
```
```
cat vox2_dev_mp4* > vox2_mp4.zip
```
### Citation Information
```
@article{Nagrani19,
author = "Arsha Nagrani and Joon~Son Chung and Weidi Xie and Andrew Zisserman",
title = "Voxceleb: Large-scale speaker verification in the wild",
journal = "Computer Science and Language",
year = "2019",
publisher = "Elsevier",
}
@inProceedings{Chung18b,
author = "Chung, J.~S. and Nagrani, A. and Zisserman, A.",
title = "VoxCeleb2: Deep Speaker Recognition",
booktitle = "INTERSPEECH",
year = "2018",
}
@article{DBLP:journals/corr/NagraniCZ17,
author = {Arsha Nagrani and
Joon Son Chung and
Andrew Zisserman},
title = {VoxCeleb: a large-scale speaker identification dataset},
journal = {CoRR},
volume = {abs/1706.08612},
year = {2017},
url = {http://arxiv.org/abs/1706.08612},
eprinttype = {arXiv},
eprint = {1706.08612},
timestamp = {Mon, 13 Aug 2018 16:47:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/NagraniCZ17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@ProgramComputer](https://github.com/ProgramComputer) for adding this dataset. |
KhalfounMehdi/dermatology_anomaly_detection_small | 2023-08-18T16:51:49.000Z | [
"region:us"
] | KhalfounMehdi | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 39432376.0
num_examples: 498
download_size: 39436795
dataset_size: 39432376.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: KhalfounMehdi--dermatology_anomaly_detection_small
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dermatology_anomaly_detection_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davidscripka/MIT_environmental_impulse_responses | 2023-08-21T18:32:13.000Z | [
"task_categories:audio-classification",
"task_categories:automatic-speech-recognition",
"size_categories:n<1K",
"license:unknown",
"region:us"
] | davidscripka | null | null | null | 0 | 16 | ---
license: unknown
task_categories:
- audio-classification
- automatic-speech-recognition
size_categories:
- n<1K
---
MIT Environmental Impulse Response Dataset
The audio recordings in this dataset were originally created by the Computational Audition Lab at MIT. The source of the data can be found at: [https://mcdermottlab.mit.edu/Reverb/IR_Survey.html](https://mcdermottlab.mit.edu/Reverb/IR_Survey.html).
The audio files in the dataset have been resampled to a sampling rate of 16 kHz. This resampling was done to reduce the size of the dataset while making it more suitable for various tasks, including data augmentation.
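For reference, a rate conversion like this can be reproduced with any standard resampler; below is a minimal pure-Python linear-interpolation sketch (illustrative only — the card does not state which tool performed the actual resampling):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler (illustration only)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate   # fractional index into the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Downsample one second of 44.1 kHz audio to 16 kHz.
x = [0.0] * 44100
y = resample_linear(x, 44100, 16000)
print(len(y))  # 16000
```

A production pipeline would use a proper band-limited resampler (to avoid aliasing), but the length arithmetic is the same.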
The dataset consists of 271 audio files, each in WAV format. These files collectively provide a diverse range of environmental impulse response data.
The license for this dataset is unknown. Please refer to the dataset source for any licensing information or usage restrictions, and cite appropriately. |
thesistranslation/distilled-ccmatrix-fr-en | 2023-10-03T09:22:02.000Z | [
"language:fr",
"language:en",
"region:us"
] | thesistranslation | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- fr
- en
splits:
- name: train
num_bytes: 7513764655
num_examples: 30000000
download_size: 5154705851
dataset_size: 7513764655
language:
- fr
- en
---
# Dataset Card for "distilled-ccmatrix-fr-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vhtran/en-id | 2023-08-20T07:53:35.000Z | [
"task_categories:translation",
"language:en",
"language:id",
"license:cc-by-4.0",
"region:us"
] | vhtran | null | null | null | 4 | 16 | ---
license: cc-by-4.0
task_categories:
- translation
language:
- en
- id
pretty_name: enidlrmt
--- |
thesistranslation/distilled-ccmatrix-en-fr | 2023-10-03T09:21:51.000Z | [
"language:en",
"language:fr",
"region:us"
] | thesistranslation | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 7456743546
num_examples: 30000000
download_size: 5098116495
dataset_size: 7456743546
language:
- en
- fr
---
# Dataset Card for "distilled-ccmatrix-en-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
duxprajapati/symptom-disease-dataset | 2023-08-22T12:39:19.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | duxprajapati | null | null | null | 0 | 16 | ---
task_categories:
- text-classification
language:
- en
--- |
thesistranslation/distilled-ccmatrix-en-es | 2023-10-03T09:21:40.000Z | [
"language:es",
"language:en",
"region:us"
] | thesistranslation | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 7084246805
num_examples: 30000000
download_size: 4913968666
dataset_size: 7084246805
language:
- es
- en
---
# Dataset Card for "distilled-ccmatrix-en-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dippi9845/arxiv2_with_fragments | 2023-08-25T12:35:48.000Z | [
"region:us"
] | Dippi9845 | null | null | null | 0 | 16 | Entry not found |
learn3r/summ_screen_fd_bp | 2023-09-26T10:28:23.000Z | [
"region:us"
] | learn3r | null | null | null | 0 | 16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 119519799
num_examples: 3673
- name: validation
num_bytes: 10838812
num_examples: 338
- name: test
num_bytes: 11004410
num_examples: 337
download_size: 6435842
dataset_size: 141363021
---
# Dataset Card for "summ_screen_fd_bp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
2tle/korean-curse-filtering-dataset | 2023-08-31T06:14:02.000Z | [
"license:mit",
"region:us"
] | 2tle | null | null | null | 0 | 16 | ---
license: mit
---
# Korean Curse Filtering Dataset
## Dataset Correction Requests
- If an expression is abusive/derogatory but has not been marked, or has been marked but is not actually abusive/derogatory, please report it in the Community tab :)
- Please also report any other errors you find.
## Dataset
- A dataset of abusive/derogatory keywords contained in sentences, based on [2runo/Curse-detection-data](https://github.com/2runo/Curse-detection-data).
- `curse_test.txt` is a subset of 1,000 samples from the original dataset, intended for training tests.
- `curse.txt` is the full dataset and will be updated in the future.
## Criteria
- Abusive or derogatory keywords were selected from sentences in the base dataset that carry an abusive or derogatory meaning.
## Dataset Structure
sentence|curse list<br>
Each record is a single line; the sentence and the curse list are separated by |.<br>
If a sentence contains no curses, the curse list is empty; if it does, the curses are separated by commas.
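The line format described above (sentence and comma-separated curse list, split on `|`) can be parsed with a few lines of Python; `parse_line` is an illustrative helper name, not part of the dataset:

```python
def parse_line(line):
    """Parse one dataset line of the form 'sentence|curse1,curse2'.

    Returns (sentence, list_of_curses); the list is empty when the
    sentence contains no curse/derogatory keywords.
    """
    sentence, _, curses = line.rstrip("\n").partition("|")
    curse_list = curses.split(",") if curses else []
    return sentence, curse_list

print(parse_line("나쁜 문장|욕설1,욕설2"))  # ('나쁜 문장', ['욕설1', '욕설2'])
print(parse_line("깨끗한 문장|"))          # ('깨끗한 문장', [])
```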
|
Healthy13/Text2SQL | 2023-08-31T08:59:06.000Z | [
"region:us"
] | Healthy13 | null | null | null | 5 | 16 | Entry not found |
Isaak-Carter/Function_Calling_Private_GG | 2023-10-10T12:35:06.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | Isaak-Carter | null | null | null | 1 | 16 | ---
license: apache-2.0
task_categories:
- text-generation
- conversational
pretty_name: Function Calling Like A Champ
size_categories:
- 100K<n<1M
---
# Function Recommendation Dataset Readme
## Description
This dataset is based on the "glaiveai/glaive-function-calling" repository and has been customized to suit my specific requirements. It is designed for fine-tuning a Large Language Model (LLM) on the task of generating function recommendations in a conversational context. The dataset contains 158,738 snippets of conversations between me and my virtual assistant, J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity), which exclusively assists me with various tasks by recommending functions and providing relevant information.
Each snippet in the dataset represents a conversation between the user and J.O.S.I.E. The conversations typically follow this structure:
1. The conversation begins with an introduction of J.O.S.I.E. and its capabilities.
2. The user asks a question or requests assistance related to a specific task.
3. J.O.S.I.E. responds by recommending a function call that can help fulfill the user's request.
4. The user may ask follow-up questions or request additional recommendations.
5. J.O.S.I.E. continues to provide function recommendations and responses as needed.
## Snippet Example
Here is an example snippet from the dataset:
```markdown
### SYSTEM: You are Gökdeniz Gülmez's private assistant named J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity) who has access to the following functions to exclusively help Gökdeniz Gülmez, you can use the functions if needed-
{
"name": "recommend_movies",
"description": "Recommend movies based on user preferences",
"parameters": {
"type": "object",
"properties": {
"genre": {
"type": "string",
"description": "The preferred movie genre"
},
"year_range": {
"type": "object",
"properties": {
"start_year": {
"type": "integer",
"description": "The start year of the movie release"
},
"end_year": {
"type": "integer",
"description": "The end year of the movie release"
}
}
}
}
}
}
### Gökdeniz Gülmez: Can you recommend some movies for me?
### J.O.S.I.E.: Sure! I can recommend movies based on your preferences. Please provide me with your preferred movie genre and the year range of the movie release.
### Gökdeniz Gülmez: I enjoy action movies from the 2000s.
### J.O.S.I.E.: <functioncall> {"name": "recommend_movies", "arguments": '{
"genre": "action",
"year_range": {
"start_year": 2000,
"end_year": 2009
}
}'}
### FUNCTION RESPONSE: {"movies": ["The Dark Knight", "Gladiator", "The Bourne Identity", "Kill Bill: Volume 1", "The Matrix"]}
### J.O.S.I.E.: Based on your preferences, I recommend the following movies: "The Dark Knight", "Gladiator", "The Bourne Identity", "Kill Bill: Volume 1", and "The Matrix".
```
In this example, the user asks J.O.S.I.E. to recommend action movies from the 2000s. J.O.S.I.E. responds with a function call to the "recommend_movies" function and provides a list of recommended movies as a response.
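Note that in this format the `arguments` JSON is wrapped in single quotes inside the `<functioncall>` object, so it cannot be fed to a JSON parser directly. A hedged sketch of extracting the call (`extract_functioncall` is an illustrative helper, not part of the dataset tooling):

```python
import json
import re

def extract_functioncall(turn):
    """Pull the function name and arguments out of a J.O.S.I.E. turn.

    The snippet format wraps the arguments JSON in single quotes, so the
    outer object is scanned with regexes and only the inner arguments
    string is parsed as JSON.
    """
    m = re.search(r"<functioncall>\s*(\{.*\})", turn, re.DOTALL)
    if not m:
        return None
    raw = m.group(1)
    name = re.search(r'"name"\s*:\s*"([^"]+)"', raw).group(1)
    args_m = re.search(r'"arguments"\s*:\s*\'(.*)\'', raw, re.DOTALL)
    args = json.loads(args_m.group(1)) if args_m else {}
    return {"name": name, "arguments": args}

turn = '''<functioncall> {"name": "recommend_movies", "arguments": '{
  "genre": "action",
  "year_range": {"start_year": 2000, "end_year": 2009}
}'}'''
call = extract_functioncall(turn)
print(call["name"])                # recommend_movies
print(call["arguments"]["genre"])  # action
```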
## Dataset Usage
This dataset can be used for training and fine-tuning Large Language Models (LLMs) such as GPT-3.5 on the task of generating function recommendations in a conversational context. Researchers and developers can use this data to build virtual assistants or chatbots capable of recommending functions and providing relevant information to users based on their requests.
## Citation
If you use this dataset in your research or applications, please cite it as follows:
```
@dataset{your citation here,
title = {Private Function Calling},
author = {Gökdeniz Gülmez},
year = {2023},
publisher = {Gökdeniz Gülmez},
url = {https://huggingface.co/datasets/Isaak-Carter/Function_Calling_Private_GG/tree/main},
}
``` |
Ayoub-AR/Gulf-Arabic-Tweets-2018-2020 | 2023-09-04T14:02:24.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ar",
"license:cc-by-4.0",
"region:us"
] | Ayoub-AR | null | null | null | 0 | 16 | ---
license: cc-by-4.0
language:
- ar
task_categories:
- text-classification
pretty_name: 'Twitter Gulf Arabic dialect '
size_categories:
- 10K<n<100K
---
### Dataset Summary
This is a pre-processed (cleaned) Twitter Gulf Arabic dialect 2018-2020 dataset. Please refer to the [source](http://bit.ly/3SCVPdw), and to the data-cleaning code and algorithm on [GitHub](http://bit.ly/482P0ck).
<!--
Source: http://bit.ly/3SCVPdw
Github: http://bit.ly/482P0ck
-->
### Languages
Arabic
### Source Data
Twitter |
sunghuncsa/test.csv | 2023-09-13T10:50:27.000Z | [
"region:us"
] | sunghuncsa | null | null | null | 0 | 16 | Entry not found |
mtkinit/testAR | 2023-09-19T14:05:33.000Z | [
"region:us"
] | mtkinit | null | null | null | 0 | 16 | ---
pretty_name: testAR
---
# testAR
Created from AIOD platform |
yzhuang/autotree_automl_10000_Diabetes130US_sgosdt_l256_dim7_d3_sd0 | 2023-09-07T20:49:45.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 205720000
num_examples: 10000
- name: validation
num_bytes: 205720000
num_examples: 10000
download_size: 46817857
dataset_size: 411440000
---
# Dataset Card for "autotree_automl_10000_Diabetes130US_sgosdt_l256_dim7_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_10000_letter_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T02:07:06.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 706482624
num_examples: 10000
- name: validation
num_bytes: 708636096
num_examples: 10000
download_size: 54846762
dataset_size: 1415118720
---
# Dataset Card for "autotree_pmlb_10000_letter_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxvix/c4_counterfactual | 2023-09-09T13:27:23.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 16 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3611967.0
num_examples: 1000
download_size: 2287361
dataset_size: 3611967.0
---
# Dataset Card for "c4_counterfactual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
p1atdev/test | 2023-09-10T10:12:12.000Z | [
"region:us"
] | p1atdev | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | null | null | 0 | 16 | Entry not found |
Falah/ethereal_fantasy_prompts | 2023-09-11T15:34:20.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 2127
num_examples: 10
download_size: 2331
dataset_size: 2127
---
# Dataset Card for "ethereal_fantasy_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_tiiuae__falcon-7b-instruct | 2023-10-03T22:11:53.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 16 | ---
pretty_name: Evaluation run of tiiuae/falcon-7b-instruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tiiuae__falcon-7b-instruct\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-10-03T22:10:35.400219](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-7b-instruct/blob/main/results_2023-10-03T22-10-35.400219.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2654961964054439,\n\
\ \"acc_stderr\": 0.031942677217978624,\n \"acc_norm\": 0.2694440280767791,\n\
\ \"acc_norm_stderr\": 0.031937142184422264,\n \"mc1\": 0.2876376988984088,\n\
\ \"mc1_stderr\": 0.015846315101394816,\n \"mc2\": 0.440772078910269,\n\
\ \"mc2_stderr\": 0.014841098713667178\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.42406143344709896,\n \"acc_stderr\": 0.014441889627464398,\n\
\ \"acc_norm\": 0.4616040955631399,\n \"acc_norm_stderr\": 0.014568245550296358\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5131447918741286,\n\
\ \"acc_stderr\": 0.0049880567891196735,\n \"acc_norm\": 0.7085241983668592,\n\
\ \"acc_norm_stderr\": 0.004535133886462039\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2,\n \
\ \"acc_stderr\": 0.034554737023254366,\n \"acc_norm\": 0.2,\n \
\ \"acc_norm_stderr\": 0.034554737023254366\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.24342105263157895,\n \"acc_stderr\": 0.034923496688842384,\n\
\ \"acc_norm\": 0.24342105263157895,\n \"acc_norm_stderr\": 0.034923496688842384\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.34,\n\
\ \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \
\ \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.25660377358490566,\n \"acc_stderr\": 0.02688064788905197,\n\
\ \"acc_norm\": 0.25660377358490566,\n \"acc_norm_stderr\": 0.02688064788905197\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.24305555555555555,\n\
\ \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.24305555555555555,\n\
\ \"acc_norm_stderr\": 0.0358687928008034\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036623,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036623\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n\
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2774566473988439,\n\
\ \"acc_stderr\": 0.03414014007044036,\n \"acc_norm\": 0.2774566473988439,\n\
\ \"acc_norm_stderr\": 0.03414014007044036\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.18627450980392157,\n \"acc_stderr\": 0.038739587141493524,\n\
\ \"acc_norm\": 0.18627450980392157,\n \"acc_norm_stderr\": 0.038739587141493524\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n\
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.28936170212765955,\n \"acc_stderr\": 0.02964400657700962,\n\
\ \"acc_norm\": 0.28936170212765955,\n \"acc_norm_stderr\": 0.02964400657700962\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.0414243971948936,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.0414243971948936\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2689655172413793,\n \"acc_stderr\": 0.036951833116502325,\n\
\ \"acc_norm\": 0.2689655172413793,\n \"acc_norm_stderr\": 0.036951833116502325\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2275132275132275,\n \"acc_stderr\": 0.02159126940782379,\n \"\
acc_norm\": 0.2275132275132275,\n \"acc_norm_stderr\": 0.02159126940782379\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.04006168083848876,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.04006168083848876\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2064516129032258,\n\
\ \"acc_stderr\": 0.02302589961718872,\n \"acc_norm\": 0.2064516129032258,\n\
\ \"acc_norm_stderr\": 0.02302589961718872\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.2019704433497537,\n \"acc_stderr\": 0.028247350122180267,\n\
\ \"acc_norm\": 0.2019704433497537,\n \"acc_norm_stderr\": 0.028247350122180267\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\"\
: 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.033464098810559534,\n\
\ \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.033464098810559534\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.2474747474747475,\n \"acc_stderr\": 0.030746300742124498,\n \"\
acc_norm\": 0.2474747474747475,\n \"acc_norm_stderr\": 0.030746300742124498\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.24870466321243523,\n \"acc_stderr\": 0.03119584087770031,\n\
\ \"acc_norm\": 0.24870466321243523,\n \"acc_norm_stderr\": 0.03119584087770031\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.24615384615384617,\n \"acc_stderr\": 0.02184086699042309,\n\
\ \"acc_norm\": 0.24615384615384617,\n \"acc_norm_stderr\": 0.02184086699042309\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.17777777777777778,\n \"acc_stderr\": 0.02331080126064489,\n \
\ \"acc_norm\": 0.17777777777777778,\n \"acc_norm_stderr\": 0.02331080126064489\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.23109243697478993,\n \"acc_stderr\": 0.02738140692786897,\n\
\ \"acc_norm\": 0.23109243697478993,\n \"acc_norm_stderr\": 0.02738140692786897\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2119205298013245,\n \"acc_stderr\": 0.03336767086567976,\n \"\
acc_norm\": 0.2119205298013245,\n \"acc_norm_stderr\": 0.03336767086567976\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.25504587155963304,\n \"acc_stderr\": 0.018688500856535832,\n \"\
acc_norm\": 0.25504587155963304,\n \"acc_norm_stderr\": 0.018688500856535832\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.1574074074074074,\n \"acc_stderr\": 0.024837173518242384,\n \"\
acc_norm\": 0.1574074074074074,\n \"acc_norm_stderr\": 0.024837173518242384\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.22549019607843138,\n \"acc_stderr\": 0.029331162294251745,\n \"\
acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.029331162294251745\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.2742616033755274,\n \"acc_stderr\": 0.029041333510598028,\n \
\ \"acc_norm\": 0.2742616033755274,\n \"acc_norm_stderr\": 0.029041333510598028\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3542600896860987,\n\
\ \"acc_stderr\": 0.03210062154134987,\n \"acc_norm\": 0.3542600896860987,\n\
\ \"acc_norm_stderr\": 0.03210062154134987\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2748091603053435,\n \"acc_stderr\": 0.039153454088478354,\n\
\ \"acc_norm\": 0.2748091603053435,\n \"acc_norm_stderr\": 0.039153454088478354\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794089,\n \"\
acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794089\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.24539877300613497,\n \"acc_stderr\": 0.03380939813943354,\n\
\ \"acc_norm\": 0.24539877300613497,\n \"acc_norm_stderr\": 0.03380939813943354\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n\
\ \"acc_stderr\": 0.04432804055291519,\n \"acc_norm\": 0.32142857142857145,\n\
\ \"acc_norm_stderr\": 0.04432804055291519\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.27184466019417475,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.27184466019417475,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2863247863247863,\n\
\ \"acc_stderr\": 0.029614323690456645,\n \"acc_norm\": 0.2863247863247863,\n\
\ \"acc_norm_stderr\": 0.029614323690456645\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2720306513409962,\n\
\ \"acc_stderr\": 0.015913367447500517,\n \"acc_norm\": 0.2720306513409962,\n\
\ \"acc_norm_stderr\": 0.015913367447500517\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.25722543352601157,\n \"acc_stderr\": 0.023532925431044276,\n\
\ \"acc_norm\": 0.25722543352601157,\n \"acc_norm_stderr\": 0.023532925431044276\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.25027932960893856,\n\
\ \"acc_stderr\": 0.01448750085285042,\n \"acc_norm\": 0.25027932960893856,\n\
\ \"acc_norm_stderr\": 0.01448750085285042\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.24183006535947713,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.24183006535947713,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2379421221864952,\n\
\ \"acc_stderr\": 0.024185150647818707,\n \"acc_norm\": 0.2379421221864952,\n\
\ \"acc_norm_stderr\": 0.024185150647818707\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.024922001168886338,\n\
\ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.024922001168886338\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.24468085106382978,\n \"acc_stderr\": 0.02564555362226673,\n \
\ \"acc_norm\": 0.24468085106382978,\n \"acc_norm_stderr\": 0.02564555362226673\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23468057366362452,\n\
\ \"acc_stderr\": 0.010824026872449348,\n \"acc_norm\": 0.23468057366362452,\n\
\ \"acc_norm_stderr\": 0.010824026872449348\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.3639705882352941,\n \"acc_stderr\": 0.02922719246003203,\n\
\ \"acc_norm\": 0.3639705882352941,\n \"acc_norm_stderr\": 0.02922719246003203\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.22875816993464052,\n \"acc_stderr\": 0.016992723465466247,\n \
\ \"acc_norm\": 0.22875816993464052,\n \"acc_norm_stderr\": 0.016992723465466247\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.3181818181818182,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.3181818181818182,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.18775510204081633,\n \"acc_stderr\": 0.02500025603954621,\n\
\ \"acc_norm\": 0.18775510204081633,\n \"acc_norm_stderr\": 0.02500025603954621\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.22388059701492538,\n\
\ \"acc_stderr\": 0.029475250236017193,\n \"acc_norm\": 0.22388059701492538,\n\
\ \"acc_norm_stderr\": 0.029475250236017193\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.30120481927710846,\n\
\ \"acc_stderr\": 0.035716092300534796,\n \"acc_norm\": 0.30120481927710846,\n\
\ \"acc_norm_stderr\": 0.035716092300534796\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.3216374269005848,\n \"acc_stderr\": 0.03582529442573122,\n\
\ \"acc_norm\": 0.3216374269005848,\n \"acc_norm_stderr\": 0.03582529442573122\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2876376988984088,\n\
\ \"mc1_stderr\": 0.015846315101394816,\n \"mc2\": 0.440772078910269,\n\
\ \"mc2_stderr\": 0.014841098713667178\n }\n}\n```"
repo_url: https://huggingface.co/tiiuae/falcon-7b-instruct
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|arc:challenge|25_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|arc:challenge|25_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hellaswag|10_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hellaswag|10_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T17-11-33.167282.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T22-10-35.400219.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T22-10-35.400219.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-12T17-11-33.167282.parquet'
- split: 2023_10_03T22_10_35.400219
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-03T22-10-35.400219.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-03T22-10-35.400219.parquet'
- config_name: results
data_files:
- split: 2023_09_12T17_11_33.167282
path:
- results_2023-09-12T17-11-33.167282.parquet
- split: 2023_10_03T22_10_35.400219
path:
- results_2023-10-03T22-10-35.400219.parquet
- split: latest
path:
- results_2023-10-03T22-10-35.400219.parquet
---
# Dataset Card for Evaluation run of tiiuae/falcon-7b-instruct
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/tiiuae/falcon-7b-instruct
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_tiiuae__falcon-7b-instruct",
"harness_truthfulqa_mc_0",
	split="latest")
```
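Each timestamped split name is derived from the run timestamp by replacing the characters that are not valid in split names. This tiny helper (an illustrative sketch, not part of the leaderboard tooling) shows the mapping so you can select a specific run by its timestamp:

```python
def timestamp_to_split(timestamp: str) -> str:
    """Map a run timestamp (e.g. '2023-10-03T22:10:35.400219') to the
    corresponding split name (e.g. '2023_10_03T22_10_35.400219')."""
    # '-' and ':' are replaced with '_'; the fractional-seconds '.' is kept.
    return timestamp.replace("-", "_").replace(":", "_")

# The split for the 2023-10-03 run of this dataset:
print(timestamp_to_split("2023-10-03T22:10:35.400219"))
# → 2023_10_03T22_10_35.400219
```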
## Latest results
These are the [latest results from run 2023-10-03T22:10:35.400219](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-7b-instruct/blob/main/results_2023-10-03T22-10-35.400219.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.2654961964054439,
"acc_stderr": 0.031942677217978624,
"acc_norm": 0.2694440280767791,
"acc_norm_stderr": 0.031937142184422264,
"mc1": 0.2876376988984088,
"mc1_stderr": 0.015846315101394816,
"mc2": 0.440772078910269,
"mc2_stderr": 0.014841098713667178
},
"harness|arc:challenge|25": {
"acc": 0.42406143344709896,
"acc_stderr": 0.014441889627464398,
"acc_norm": 0.4616040955631399,
"acc_norm_stderr": 0.014568245550296358
},
"harness|hellaswag|10": {
"acc": 0.5131447918741286,
"acc_stderr": 0.0049880567891196735,
"acc_norm": 0.7085241983668592,
"acc_norm_stderr": 0.004535133886462039
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2,
"acc_stderr": 0.034554737023254366,
"acc_norm": 0.2,
"acc_norm_stderr": 0.034554737023254366
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.24342105263157895,
"acc_stderr": 0.034923496688842384,
"acc_norm": 0.24342105263157895,
"acc_norm_stderr": 0.034923496688842384
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.25660377358490566,
"acc_stderr": 0.02688064788905197,
"acc_norm": 0.25660377358490566,
"acc_norm_stderr": 0.02688064788905197
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.24305555555555555,
"acc_stderr": 0.0358687928008034,
"acc_norm": 0.24305555555555555,
"acc_norm_stderr": 0.0358687928008034
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036623,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036623
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2774566473988439,
"acc_stderr": 0.03414014007044036,
"acc_norm": 0.2774566473988439,
"acc_norm_stderr": 0.03414014007044036
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.18627450980392157,
"acc_stderr": 0.038739587141493524,
"acc_norm": 0.18627450980392157,
"acc_norm_stderr": 0.038739587141493524
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.28936170212765955,
"acc_stderr": 0.02964400657700962,
"acc_norm": 0.28936170212765955,
"acc_norm_stderr": 0.02964400657700962
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.0414243971948936,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.0414243971948936
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2689655172413793,
"acc_stderr": 0.036951833116502325,
"acc_norm": 0.2689655172413793,
"acc_norm_stderr": 0.036951833116502325
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2275132275132275,
"acc_stderr": 0.02159126940782379,
"acc_norm": 0.2275132275132275,
"acc_norm_stderr": 0.02159126940782379
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.04006168083848876,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.04006168083848876
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2064516129032258,
"acc_stderr": 0.02302589961718872,
"acc_norm": 0.2064516129032258,
"acc_norm_stderr": 0.02302589961718872
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2019704433497537,
"acc_stderr": 0.028247350122180267,
"acc_norm": 0.2019704433497537,
"acc_norm_stderr": 0.028247350122180267
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24242424242424243,
"acc_stderr": 0.033464098810559534,
"acc_norm": 0.24242424242424243,
"acc_norm_stderr": 0.033464098810559534
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.2474747474747475,
"acc_stderr": 0.030746300742124498,
"acc_norm": 0.2474747474747475,
"acc_norm_stderr": 0.030746300742124498
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.24870466321243523,
"acc_stderr": 0.03119584087770031,
"acc_norm": 0.24870466321243523,
"acc_norm_stderr": 0.03119584087770031
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.24615384615384617,
"acc_stderr": 0.02184086699042309,
"acc_norm": 0.24615384615384617,
"acc_norm_stderr": 0.02184086699042309
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.17777777777777778,
"acc_stderr": 0.02331080126064489,
"acc_norm": 0.17777777777777778,
"acc_norm_stderr": 0.02331080126064489
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.23109243697478993,
"acc_stderr": 0.02738140692786897,
"acc_norm": 0.23109243697478993,
"acc_norm_stderr": 0.02738140692786897
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2119205298013245,
"acc_stderr": 0.03336767086567976,
"acc_norm": 0.2119205298013245,
"acc_norm_stderr": 0.03336767086567976
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.25504587155963304,
"acc_stderr": 0.018688500856535832,
"acc_norm": 0.25504587155963304,
"acc_norm_stderr": 0.018688500856535832
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1574074074074074,
"acc_stderr": 0.024837173518242384,
"acc_norm": 0.1574074074074074,
"acc_norm_stderr": 0.024837173518242384
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.029331162294251745,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.029331162294251745
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2742616033755274,
"acc_stderr": 0.029041333510598028,
"acc_norm": 0.2742616033755274,
"acc_norm_stderr": 0.029041333510598028
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.3542600896860987,
"acc_stderr": 0.03210062154134987,
"acc_norm": 0.3542600896860987,
"acc_norm_stderr": 0.03210062154134987
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2748091603053435,
"acc_stderr": 0.039153454088478354,
"acc_norm": 0.2748091603053435,
"acc_norm_stderr": 0.039153454088478354
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.23140495867768596,
"acc_stderr": 0.03849856098794089,
"acc_norm": 0.23140495867768596,
"acc_norm_stderr": 0.03849856098794089
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.24539877300613497,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.24539877300613497,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.32142857142857145,
"acc_stderr": 0.04432804055291519,
"acc_norm": 0.32142857142857145,
"acc_norm_stderr": 0.04432804055291519
},
"harness|hendrycksTest-management|5": {
"acc": 0.27184466019417475,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.27184466019417475,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2863247863247863,
"acc_stderr": 0.029614323690456645,
"acc_norm": 0.2863247863247863,
"acc_norm_stderr": 0.029614323690456645
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2720306513409962,
"acc_stderr": 0.015913367447500517,
"acc_norm": 0.2720306513409962,
"acc_norm_stderr": 0.015913367447500517
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.25722543352601157,
"acc_stderr": 0.023532925431044276,
"acc_norm": 0.25722543352601157,
"acc_norm_stderr": 0.023532925431044276
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.25027932960893856,
"acc_stderr": 0.01448750085285042,
"acc_norm": 0.25027932960893856,
"acc_norm_stderr": 0.01448750085285042
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.24183006535947713,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.24183006535947713,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2379421221864952,
"acc_stderr": 0.024185150647818707,
"acc_norm": 0.2379421221864952,
"acc_norm_stderr": 0.024185150647818707
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.024922001168886338,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.024922001168886338
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.24468085106382978,
"acc_stderr": 0.02564555362226673,
"acc_norm": 0.24468085106382978,
"acc_norm_stderr": 0.02564555362226673
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.23468057366362452,
"acc_stderr": 0.010824026872449348,
"acc_norm": 0.23468057366362452,
"acc_norm_stderr": 0.010824026872449348
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.3639705882352941,
"acc_stderr": 0.02922719246003203,
"acc_norm": 0.3639705882352941,
"acc_norm_stderr": 0.02922719246003203
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.22875816993464052,
"acc_stderr": 0.016992723465466247,
"acc_norm": 0.22875816993464052,
"acc_norm_stderr": 0.016992723465466247
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.3181818181818182,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.3181818181818182,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.18775510204081633,
"acc_stderr": 0.02500025603954621,
"acc_norm": 0.18775510204081633,
"acc_norm_stderr": 0.02500025603954621
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.22388059701492538,
"acc_stderr": 0.029475250236017193,
"acc_norm": 0.22388059701492538,
"acc_norm_stderr": 0.029475250236017193
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-virology|5": {
"acc": 0.30120481927710846,
"acc_stderr": 0.035716092300534796,
"acc_norm": 0.30120481927710846,
"acc_norm_stderr": 0.035716092300534796
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3216374269005848,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.3216374269005848,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2876376988984088,
"mc1_stderr": 0.015846315101394816,
"mc2": 0.440772078910269,
"mc2_stderr": 0.014841098713667178
}
}
```
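The per-task scores above can also be aggregated by hand. As a sketch (using a truncated example dictionary with values copied from the results above, rather than the full loaded JSON), this averages accuracy across the MMLU (hendrycksTest) tasks:

```python
# Truncated example of the results dictionary shown above.
results = {
    "harness|arc:challenge|25": {"acc": 0.42406143344709896},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.29},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.2},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.24342105263157895},
}

# Keep only the MMLU subtasks and average their accuracies.
mmlu = [v["acc"] for k, v in results.items()
        if k.startswith("harness|hendrycksTest-")]
print(round(sum(mmlu) / len(mmlu), 4))
# → 0.2445
```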
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
AbdelrahmanFakhry/Edu_Context_QA_Hints_RightAnswer | 2023-09-12T21:09:19.000Z | [
"region:us"
] | AbdelrahmanFakhry | null | null | null | 0 | 16 | Entry not found |
johannes-garstenauer/structs_token_size_4_reduced_labelled_train_balanced_factor_3 | 2023-09-14T13:13:05.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 1 | 16 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 372362495.3356041
num_examples: 1518855
download_size: 138213330
dataset_size: 372362495.3356041
---
# Dataset Card for "structs_token_size_4_reduced_labelled_train_balanced_factor_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ArkaAcharya/SML2 | 2023-09-14T15:40:21.000Z | [
"region:us"
] | ArkaAcharya | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: input
dtype: string
splits:
- name: train
num_bytes: 14155430
num_examples: 2208
download_size: 5046971
dataset_size: 14155430
---
# Dataset Card for "SML2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Coconuty/FairyTales001 | 2023-09-16T05:47:01.000Z | [
"region:us"
] | Coconuty | null | null | null | 0 | 16 | Entry not found |