| datasetId | card |
|---|---|
s-nlp/Mintaka_Sequences_T5-large-ssm | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answerEntity
dtype: string
- name: questionEntity
dtype: string
- name: groundTruthAnswerEntity
dtype: string
- name: complexityType
dtype: string
- name: graph
dtype: string
- name: correct
dtype: bool
- name: g2t_sequence
dtype: string
- name: gap_sequence
dtype: string
- name: highlighted_g2t_sequence
dtype: string
- name: no_highlighted_g2t_sequence
dtype: string
- name: highlighted_gap_sequence
dtype: string
- name: no_highlighted_gap_sequence
dtype: string
- name: highlighted_determ_sequence
dtype: string
- name: no_highlighted_determ_sequence
dtype: string
splits:
- name: train
num_bytes: 156273506
num_examples: 54179
- name: validation
num_bytes: 31978611
num_examples: 10369
- name: test
num_bytes: 44824721
num_examples: 15583
download_size: 41480863
dataset_size: 233076838
---
# Dataset Card for "Mintaka_Sequences_T5-large-ssm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mitsuki-Sakamoto/alpaca_farm-deberta-re-pref-64-_fil_self_160m_bo16_2_mix_50_kl_0.1_prm_160m_thr_0.3_seed_2 | ---
dataset_info:
config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: index
dtype: int64
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: filtered_epoch
dtype: int64
- name: gen_reward
dtype: float64
- name: gen_response
dtype: string
splits:
- name: epoch_0
num_bytes: 43783586
num_examples: 18928
- name: epoch_1
num_bytes: 44377250
num_examples: 18928
- name: epoch_2
num_bytes: 44448866
num_examples: 18928
- name: epoch_3
num_bytes: 44483809
num_examples: 18928
- name: epoch_4
num_bytes: 44492320
num_examples: 18928
- name: epoch_5
num_bytes: 44489018
num_examples: 18928
- name: epoch_6
num_bytes: 44475503
num_examples: 18928
- name: epoch_7
num_bytes: 44460141
num_examples: 18928
- name: epoch_8
num_bytes: 44445265
num_examples: 18928
- name: epoch_9
num_bytes: 44441178
num_examples: 18928
- name: epoch_10
num_bytes: 44438339
num_examples: 18928
- name: epoch_11
num_bytes: 44436226
num_examples: 18928
- name: epoch_12
num_bytes: 44434486
num_examples: 18928
- name: epoch_13
num_bytes: 44435475
num_examples: 18928
- name: epoch_14
num_bytes: 44431647
num_examples: 18928
- name: epoch_15
num_bytes: 44432365
num_examples: 18928
- name: epoch_16
num_bytes: 44432856
num_examples: 18928
- name: epoch_17
num_bytes: 44432911
num_examples: 18928
- name: epoch_18
num_bytes: 44429532
num_examples: 18928
- name: epoch_19
num_bytes: 44429380
num_examples: 18928
- name: epoch_20
num_bytes: 44430229
num_examples: 18928
- name: epoch_21
num_bytes: 44430596
num_examples: 18928
- name: epoch_22
num_bytes: 44431243
num_examples: 18928
- name: epoch_23
num_bytes: 44428939
num_examples: 18928
- name: epoch_24
num_bytes: 44432154
num_examples: 18928
- name: epoch_25
num_bytes: 44429301
num_examples: 18928
- name: epoch_26
num_bytes: 44429659
num_examples: 18928
- name: epoch_27
num_bytes: 44431306
num_examples: 18928
- name: epoch_28
num_bytes: 44432280
num_examples: 18928
- name: epoch_29
num_bytes: 44431422
num_examples: 18928
download_size: 701477709
dataset_size: 1332537282
configs:
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: epoch_0
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_0-*
- split: epoch_1
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_1-*
- split: epoch_2
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_2-*
- split: epoch_3
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_3-*
- split: epoch_4
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_4-*
- split: epoch_5
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_5-*
- split: epoch_6
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_6-*
- split: epoch_7
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_7-*
- split: epoch_8
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_8-*
- split: epoch_9
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_9-*
- split: epoch_10
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_10-*
- split: epoch_11
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_11-*
- split: epoch_12
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_12-*
- split: epoch_13
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_13-*
- split: epoch_14
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_14-*
- split: epoch_15
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_15-*
- split: epoch_16
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_16-*
- split: epoch_17
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_17-*
- split: epoch_18
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_18-*
- split: epoch_19
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_19-*
- split: epoch_20
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_20-*
- split: epoch_21
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_21-*
- split: epoch_22
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_22-*
- split: epoch_23
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_23-*
- split: epoch_24
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_24-*
- split: epoch_25
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_25-*
- split: epoch_26
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_26-*
- split: epoch_27
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_27-*
- split: epoch_28
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_28-*
- split: epoch_29
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_29-*
---
|
VishwanathanR/flowers-dataset | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 347100141.78
num_examples: 8189
download_size: 346573740
dataset_size: 347100141.78
---
# Dataset Card for "flowers-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Indic-Benchmark/gujarati-arc-c-2.5k | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
struct:
- name: choices
list:
- name: label
dtype: string
- name: text
dtype: string
- name: stem
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 1759481
num_examples: 2557
download_size: 687997
dataset_size: 1759481
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hjawad367/ForestPickle | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1064774270.0
num_examples: 369
download_size: 361815484
dataset_size: 1064774270.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
richardr1126/spider-schema | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
pretty_name: Spider Schema
tags:
- text-to-sql
dataset_info:
features:
- name: db_id
dtype: string
- name: Schema (values (type))
dtype: string
- name: Primary Keys
dtype: string
- name: Foreign Keys
dtype: string
---
# Dataset Card for Spider Schema
### Dataset Summary
Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset contains the 166 databases used in the Spider dataset.
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The Spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
``` |
nrajsubramanian/usfaq | ---
license: mit
---
|
CYF200127/MolNexTR | ---
license: apache-2.0
---
|
izumi-lab/wikinews-en-20230728 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 114757457
num_examples: 43246
download_size: 38557626
dataset_size: 114757457
license: cc-by-2.5
language:
- en
---
# Dataset Card for "wikinews-en-20230728"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
knguyennguyen/wikipedia_laptop | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 124410662
num_examples: 14742
download_size: 67555456
dataset_size: 124410662
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
llama2d/llama2d-synthetic | ---
dataset_info:
features:
- name: input_ids
sequence: float32
- name: coords
sequence:
sequence: float32
- name: labels
sequence: float32
- name: attention_mask
sequence: float32
splits:
- name: train
num_bytes: 864288
num_examples: 18
download_size: 84278
dataset_size: 864288
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2d-synthetic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
omarsou/common_voice_16_1_spanish_test_set | ---
license: cc0-1.0
---
# Dataset Card for Common Voice Corpus 16 Spanish Dataset
## Table of Contents
- [Acknowledgement](#acknowledgement)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Acknowledgement
This dataset belongs to the Mozilla Foundation's Common Voice project.
This repository simply re-uploads the Spanish test set (from https://huggingface.co/datasets/mozilla-foundation/common_voice_16_1/tree/main).
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of unique MP3 recordings and their corresponding text files.
### Languages
```
Spanish
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Spanish test set, simply pass the repository name:
```python
from datasets import load_dataset
cv_spanish_test_set = load_dataset("omarsou/common_voice_16_1_spanish_test_set")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_spanish_test_set = load_dataset("omarsou/common_voice_16_1_spanish_test_set", streaming=True)
print(next(iter(cv_spanish_test_set)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_spanish_test_set = load_dataset("omarsou/common_voice_16_1_spanish_test_set", split="test")
batch_sampler = BatchSampler(RandomSampler(cv_spanish_test_set), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_spanish_test_set, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_spanish_test_set = load_dataset("omarsou/common_voice_16_1_spanish_test_set", streaming=True)
dataloader = DataLoader(cv_spanish_test_set, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
Only the test split is available in this repository.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
Prabakaran143/Prabakaran-dataset | ---
license: openrail
---
|
thomaslmc/VertexQandA | ---
license: apache-2.0
---
|
deetsadi/processed_dwi_with_adc_semantic | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: conditioning_image
dtype: image
splits:
- name: train
num_bytes: 35595508.0
num_examples: 200
download_size: 35408470
dataset_size: 35595508.0
---
# Dataset Card for "processed_dwi_with_adc_semantic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BrahimLtr/IRONTVMAX | ---
license: afl-3.0
---
|
SassyRong/meme-imgflip-small-test-dataset | ---
license: cc0-1.0
task_categories:
- text-to-image
language:
- en
size_categories:
- n<1K
--- |
hippocrates/medical_meadow_mediqa_train | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30570668
num_examples: 2208
download_size: 12800020
dataset_size: 30570668
---
# Dataset Card for "medical_meadow_mediqa_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SJTU-TES/GED | ---
license: apache-2.0
---
|
ernestum/ppo-Pendulum-v1 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float32
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 2575710
num_examples: 200
download_size: 940375
dataset_size: 2575710
---
# Dataset Card for "ppo-Pendulum-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ColumbiaNLP/FLUTE | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
- machine-generated
- crowdsourced
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: FLUTE
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text2text-generation
task_ids:
- natural-language-inference
- explanation-generation
---
# Dataset Card for FigLang2022SharedTask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://figlang2022sharedtask.github.io/
- **Repository:**
- **Paper:** TBA
- **Point of Contact:** tuhin.chakr@cs.columbia.edu
### Dataset Summary
A model-in-the-loop approach for figurative language generation and explainability.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
TBA
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
Thanmay/xlsum-hi | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: text
dtype: string
- name: itv2 hi title
dtype: string
- name: itv2 hi summary
dtype: string
- name: itv2 hi text
dtype: string
splits:
- name: test
num_bytes: 8004101
num_examples: 1000
- name: validation
num_bytes: 8068773
num_examples: 1000
download_size: 6365106
dataset_size: 16072874
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
tmnam20/ViPubMed_dedup | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: en
dtype: string
splits:
- name: train
num_bytes: 24402494216
num_examples: 20032999
download_size: 13770715220
dataset_size: 24402494216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_dillfrescott__Nous-Hermes-2-SOLAR-10.7B-x2-MoE | ---
pretty_name: Evaluation run of dillfrescott/Nous-Hermes-2-SOLAR-10.7B-x2-MoE
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [dillfrescott/Nous-Hermes-2-SOLAR-10.7B-x2-MoE](https://huggingface.co/dillfrescott/Nous-Hermes-2-SOLAR-10.7B-x2-MoE)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_dillfrescott__Nous-Hermes-2-SOLAR-10.7B-x2-MoE\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-05T03:01:59.242688](https://huggingface.co/datasets/open-llm-leaderboard/details_dillfrescott__Nous-Hermes-2-SOLAR-10.7B-x2-MoE/blob/main/results_2024-01-05T03-01-59.242688.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6675321374781713,\n\
\ \"acc_stderr\": 0.03146967963091572,\n \"acc_norm\": 0.6683730894298693,\n\
\ \"acc_norm_stderr\": 0.03211553610160914,\n \"mc1\": 0.39657282741738065,\n\
\ \"mc1_stderr\": 0.017124930942023518,\n \"mc2\": 0.5585119677423217,\n\
\ \"mc2_stderr\": 0.015328900928932843\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6271331058020477,\n \"acc_stderr\": 0.01413117676013117,\n\
\ \"acc_norm\": 0.6715017064846417,\n \"acc_norm_stderr\": 0.0137249784655373\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6571400119498108,\n\
\ \"acc_stderr\": 0.004736950810617788,\n \"acc_norm\": 0.8483369846644094,\n\
\ \"acc_norm_stderr\": 0.0035796087435066063\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7631578947368421,\n \"acc_stderr\": 0.03459777606810536,\n\
\ \"acc_norm\": 0.7631578947368421,\n \"acc_norm_stderr\": 0.03459777606810536\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.73,\n\
\ \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249386,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249386\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n\
\ \"acc_stderr\": 0.03716177437566018,\n \"acc_norm\": 0.7291666666666666,\n\
\ \"acc_norm_stderr\": 0.03716177437566018\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.44,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\"\
: 0.44,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n\
\ \"acc_stderr\": 0.03669072477416906,\n \"acc_norm\": 0.6358381502890174,\n\
\ \"acc_norm_stderr\": 0.03669072477416906\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6085106382978723,\n \"acc_stderr\": 0.03190701242326812,\n\
\ \"acc_norm\": 0.6085106382978723,\n \"acc_norm_stderr\": 0.03190701242326812\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.543859649122807,\n\
\ \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.543859649122807,\n\
\ \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n\
\ \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.48148148148148145,\n \"acc_stderr\": 0.025733641991838987,\n \"\
acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.025733641991838987\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n\
\ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n\
\ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8,\n\
\ \"acc_stderr\": 0.022755204959542943,\n \"acc_norm\": 0.8,\n \
\ \"acc_norm_stderr\": 0.022755204959542943\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.03515895551165698,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.03515895551165698\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\"\
: 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8303030303030303,\n \"acc_stderr\": 0.02931118867498311,\n\
\ \"acc_norm\": 0.8303030303030303,\n \"acc_norm_stderr\": 0.02931118867498311\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8838383838383839,\n \"acc_stderr\": 0.022828881775249377,\n \"\
acc_norm\": 0.8838383838383839,\n \"acc_norm_stderr\": 0.022828881775249377\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6743589743589744,\n \"acc_stderr\": 0.02375966576741229,\n \
\ \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35555555555555557,\n \"acc_stderr\": 0.029185714949857396,\n \
\ \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.029185714949857396\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.029953823891887037,\n\
\ \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.029953823891887037\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"\
acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5462962962962963,\n \"acc_stderr\": 0.03395322726375798,\n \"\
acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.03395322726375798\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.0251956584289318,\n \"acc_norm\"\
: 0.8480392156862745,\n \"acc_norm_stderr\": 0.0251956584289318\n },\n\
\ \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\":\
\ 0.8734177215189873,\n \"acc_stderr\": 0.021644195727955173,\n \"\
acc_norm\": 0.8734177215189873,\n \"acc_norm_stderr\": 0.021644195727955173\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7354260089686099,\n\
\ \"acc_stderr\": 0.029605103217038325,\n \"acc_norm\": 0.7354260089686099,\n\
\ \"acc_norm_stderr\": 0.029605103217038325\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n\
\ \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097653,\n \"\
acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097653\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5357142857142857,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.5357142857142857,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822584,\n\
\ \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822584\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092365,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092365\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8288633461047255,\n\
\ \"acc_stderr\": 0.013468201614066297,\n \"acc_norm\": 0.8288633461047255,\n\
\ \"acc_norm_stderr\": 0.013468201614066297\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7427745664739884,\n \"acc_stderr\": 0.02353292543104429,\n\
\ \"acc_norm\": 0.7427745664739884,\n \"acc_norm_stderr\": 0.02353292543104429\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3418994413407821,\n\
\ \"acc_stderr\": 0.015864506461604644,\n \"acc_norm\": 0.3418994413407821,\n\
\ \"acc_norm_stderr\": 0.015864506461604644\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.02355083135199509,\n\
\ \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.02355083135199509\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.729903536977492,\n\
\ \"acc_stderr\": 0.02521804037341063,\n \"acc_norm\": 0.729903536977492,\n\
\ \"acc_norm_stderr\": 0.02521804037341063\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7746913580246914,\n \"acc_stderr\": 0.02324620264781975,\n\
\ \"acc_norm\": 0.7746913580246914,\n \"acc_norm_stderr\": 0.02324620264781975\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5212765957446809,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.5212765957446809,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.500651890482399,\n\
\ \"acc_stderr\": 0.012770225252255563,\n \"acc_norm\": 0.500651890482399,\n\
\ \"acc_norm_stderr\": 0.012770225252255563\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7683823529411765,\n \"acc_stderr\": 0.025626533803777562,\n\
\ \"acc_norm\": 0.7683823529411765,\n \"acc_norm_stderr\": 0.025626533803777562\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.684640522875817,\n \"acc_stderr\": 0.018798086284886883,\n \
\ \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.018798086284886883\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n\
\ \"acc_stderr\": 0.043091187099464585,\n \"acc_norm\": 0.7181818181818181,\n\
\ \"acc_norm_stderr\": 0.043091187099464585\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7877551020408163,\n \"acc_stderr\": 0.026176967197866764,\n\
\ \"acc_norm\": 0.7877551020408163,\n \"acc_norm_stderr\": 0.026176967197866764\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466108,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466108\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.39657282741738065,\n\
\ \"mc1_stderr\": 0.017124930942023518,\n \"mc2\": 0.5585119677423217,\n\
\ \"mc2_stderr\": 0.015328900928932843\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8310970797158642,\n \"acc_stderr\": 0.010529981411838881\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6899166034874905,\n \
\ \"acc_stderr\": 0.01274030571737627\n }\n}\n```"
repo_url: https://huggingface.co/dillfrescott/Nous-Hermes-2-SOLAR-10.7B-x2-MoE
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|arc:challenge|25_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|gsm8k|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hellaswag|10_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T03-01-59.242688.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-05T03-01-59.242688.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- '**/details_harness|winogrande|5_2024-01-05T03-01-59.242688.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-05T03-01-59.242688.parquet'
- config_name: results
data_files:
- split: 2024_01_05T03_01_59.242688
path:
- results_2024-01-05T03-01-59.242688.parquet
- split: latest
path:
- results_2024-01-05T03-01-59.242688.parquet
---
# Dataset Card for Evaluation run of dillfrescott/Nous-Hermes-2-SOLAR-10.7B-x2-MoE
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [dillfrescott/Nous-Hermes-2-SOLAR-10.7B-x2-MoE](https://huggingface.co/dillfrescott/Nous-Hermes-2-SOLAR-10.7B-x2-MoE) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_dillfrescott__Nous-Hermes-2-SOLAR-10.7B-x2-MoE",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-05T03:01:59.242688](https://huggingface.co/datasets/open-llm-leaderboard/details_dillfrescott__Nous-Hermes-2-SOLAR-10.7B-x2-MoE/blob/main/results_2024-01-05T03-01-59.242688.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the "results" file and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6675321374781713,
"acc_stderr": 0.03146967963091572,
"acc_norm": 0.6683730894298693,
"acc_norm_stderr": 0.03211553610160914,
"mc1": 0.39657282741738065,
"mc1_stderr": 0.017124930942023518,
"mc2": 0.5585119677423217,
"mc2_stderr": 0.015328900928932843
},
"harness|arc:challenge|25": {
"acc": 0.6271331058020477,
"acc_stderr": 0.01413117676013117,
"acc_norm": 0.6715017064846417,
"acc_norm_stderr": 0.0137249784655373
},
"harness|hellaswag|10": {
"acc": 0.6571400119498108,
"acc_stderr": 0.004736950810617788,
"acc_norm": 0.8483369846644094,
"acc_norm_stderr": 0.0035796087435066063
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7631578947368421,
"acc_stderr": 0.03459777606810536,
"acc_norm": 0.7631578947368421,
"acc_norm_stderr": 0.03459777606810536
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249386,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249386
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566018,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566018
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.44,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.03669072477416906,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.03669072477416906
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6085106382978723,
"acc_stderr": 0.03190701242326812,
"acc_norm": 0.6085106382978723,
"acc_norm_stderr": 0.03190701242326812
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.543859649122807,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.543859649122807,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.025733641991838987,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.025733641991838987
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8,
"acc_stderr": 0.022755204959542943,
"acc_norm": 0.8,
"acc_norm_stderr": 0.022755204959542943
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.03515895551165698,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.03515895551165698
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8303030303030303,
"acc_stderr": 0.02931118867498311,
"acc_norm": 0.8303030303030303,
"acc_norm_stderr": 0.02931118867498311
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8838383838383839,
"acc_stderr": 0.022828881775249377,
"acc_norm": 0.8838383838383839,
"acc_norm_stderr": 0.022828881775249377
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35555555555555557,
"acc_stderr": 0.029185714949857396,
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.029185714949857396
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.029953823891887037,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.029953823891887037
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.03395322726375798,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.03395322726375798
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.0251956584289318,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.0251956584289318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8734177215189873,
"acc_stderr": 0.021644195727955173,
"acc_norm": 0.8734177215189873,
"acc_norm_stderr": 0.021644195727955173
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7354260089686099,
"acc_stderr": 0.029605103217038325,
"acc_norm": 0.7354260089686099,
"acc_norm_stderr": 0.029605103217038325
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097653,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097653
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5357142857142857,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.5357142857142857,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092365,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092365
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8288633461047255,
"acc_stderr": 0.013468201614066297,
"acc_norm": 0.8288633461047255,
"acc_norm_stderr": 0.013468201614066297
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7427745664739884,
"acc_stderr": 0.02353292543104429,
"acc_norm": 0.7427745664739884,
"acc_norm_stderr": 0.02353292543104429
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3418994413407821,
"acc_stderr": 0.015864506461604644,
"acc_norm": 0.3418994413407821,
"acc_norm_stderr": 0.015864506461604644
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.02355083135199509,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.02355083135199509
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.729903536977492,
"acc_stderr": 0.02521804037341063,
"acc_norm": 0.729903536977492,
"acc_norm_stderr": 0.02521804037341063
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7746913580246914,
"acc_stderr": 0.02324620264781975,
"acc_norm": 0.7746913580246914,
"acc_norm_stderr": 0.02324620264781975
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5212765957446809,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.5212765957446809,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.500651890482399,
"acc_stderr": 0.012770225252255563,
"acc_norm": 0.500651890482399,
"acc_norm_stderr": 0.012770225252255563
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7683823529411765,
"acc_stderr": 0.025626533803777562,
"acc_norm": 0.7683823529411765,
"acc_norm_stderr": 0.025626533803777562
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.684640522875817,
"acc_stderr": 0.018798086284886883,
"acc_norm": 0.684640522875817,
"acc_norm_stderr": 0.018798086284886883
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7181818181818181,
"acc_stderr": 0.043091187099464585,
"acc_norm": 0.7181818181818181,
"acc_norm_stderr": 0.043091187099464585
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7877551020408163,
"acc_stderr": 0.026176967197866764,
"acc_norm": 0.7877551020408163,
"acc_norm_stderr": 0.026176967197866764
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466108,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466108
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.39657282741738065,
"mc1_stderr": 0.017124930942023518,
"mc2": 0.5585119677423217,
"mc2_stderr": 0.015328900928932843
},
"harness|winogrande|5": {
"acc": 0.8310970797158642,
"acc_stderr": 0.010529981411838881
},
"harness|gsm8k|5": {
"acc": 0.6899166034874905,
"acc_stderr": 0.01274030571737627
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
adityarra07/train_ds_noise | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: 'null'
- name: sampling_rate
dtype: int64
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5052608063.049213
num_examples: 22152
- name: test
num_bytes: 114044060.65026213
num_examples: 500
download_size: 5191539498
dataset_size: 5166652123.699475
---
# Dataset Card for "train_ds_noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Poreya/azil | ---
license: mit
---
|
IvanD2002/Task_Dataset_Instruct_Format | ---
license: apache-2.0
---
|
Falah/tilt_shift_photography_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 62449
num_examples: 1000
download_size: 1523
dataset_size: 62449
---
# Dataset Card for "tilt_shift_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PCA-Bench/PCA-Bench-V1 | ---
dataset_info:
- config_name: Autonomous Driving
features:
- name: domain
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: actions
sequence: string
- name: answer_index
dtype: int64
- name: reason
dtype: string
- name: key_concept
sequence: string
- name: question_prompt
dtype: string
- name: answer_with_reason
dtype: string
- name: full_meta_data_json
dtype: string
splits:
- name: test_open
num_bytes: 134659773
num_examples: 100
- name: test_closed
num_bytes: 67549223
num_examples: 150
download_size: 270416985
dataset_size: 202208996
- config_name: Domestic Robot
features:
- name: domain
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: actions
sequence: string
- name: answer_index
dtype: int64
- name: reason
dtype: string
- name: key_concept
sequence: string
- name: question_prompt
dtype: string
- name: answer_with_reason
dtype: string
- name: full_meta_data_json
dtype: string
splits:
- name: test_open
num_bytes: 91702060
num_examples: 100
- name: test_closed
num_bytes: 177827577
num_examples: 200
download_size: 105390299
dataset_size: 269529637
- config_name: Open-World Game
features:
- name: domain
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: actions
sequence: string
- name: answer_index
dtype: int64
- name: reason
dtype: string
- name: key_concept
sequence: string
- name: question_prompt
dtype: string
- name: answer_with_reason
dtype: string
- name: full_meta_data_json
dtype: string
splits:
- name: test_open
num_bytes: 16139511
num_examples: 117
- name: test_closed
num_bytes: 19069366
num_examples: 141
download_size: 34988721
dataset_size: 35208877
configs:
- config_name: Autonomous Driving
data_files:
- split: test_open
path: Autonomous Driving/test_open-*
- split: test_closed
path: Autonomous Driving/test_closed-*
- config_name: Domestic Robot
data_files:
- split: test_open
path: Domestic Robot/test_open-*
- split: test_closed
path: Domestic Robot/test_closed-*
- config_name: Open-World Game
data_files:
- split: test_open
path: Open-World Game/test_open-*
- split: test_closed
path: Open-World Game/test_closed-*
license: apache-2.0
task_categories:
- multiple-choice
- visual-question-answering
language:
- en
pretty_name: PCA-Bench
---
<h1 align="center">PCA-Bench</h1>
<p align="center">
<a href="https://github.com/pkunlp-icler/PCA-EVAL">
<img alt="Static Badge" src="https://img.shields.io/badge/Github-Online-white">
<a href="https://github.com/pkunlp-icler/PCA-EVAL/blob/main/PCA_Bench_Paper.pdf">
<img alt="Static Badge" src="https://img.shields.io/badge/Paper-PCABench-red">
<a href="https://huggingface.co/datasets/PCA-Bench/PCA-Bench-V1">
<img alt="Static Badge" src="https://img.shields.io/badge/HFDataset-PCABenchV1-yellow">
</a>
<a href="https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV">
<img alt="Static Badge" src="https://img.shields.io/badge/Leaderboard-Online-blue">
</a>
</p>
*PCA-Bench is an innovative benchmark for evaluating and locating errors in multimodal LLMs on embodied decision-making tasks, specifically focusing on perception, cognition, and action.*
## Release
- [2024.02.15] [PCA-Bench-V1](https://github.com/pkunlp-icler/PCA-EVAL) is released. We release the open- and closed-track data on [Hugging Face](https://huggingface.co/datasets/PCA-Bench/PCA-Bench-V1). We have also set up an online [leaderboard](https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV) that accepts users' submissions.
- [2023.12.15] [PCA-EVAL](https://arxiv.org/abs/2310.02071) is accepted to the Foundation Models for Decision Making Workshop @ NeurIPS 2023. The PCA-Evaluation tool is released on GitHub.
## Leaderboard
[Leaderboard with Full Metrics](https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV)
## Submit Results
📢 For closed-track evaluation and PCA-Evaluation, please follow [this file](https://github.com/pkunlp-icler/PCA-EVAL/blob/main/pca-eval/results/chatgpt_holmes_outputs/Autonomous%20Driving.json) to organize your model output. Submit **six JSON files** from the different domains and tracks, along with your **model name** and **organization**, to us via [email](mailto:leo.liang.chen@stu.pku.edu.cn). Ensure you use the dataset's provided prompt as the default input for fair comparison.
We will send the PCA-Eval results of your model to you and update the leaderboard.
We provide sample code to generate the six JSON files. You only need to add your model inference code:
```python
# Sample code for PCA-Eval
from datasets import load_dataset
from tqdm import tqdm
import json
import os
def YOUR_INFERENCE_CODE(prompt,image):
"""Simple single round multimodal conversation call.
"""
response = YOUR_MODEL.inference(prompt,image)
return response
output_path = "./Results-DIR-PATH/"
os.makedirs(output_path, exist_ok=True)
dataset_ad = load_dataset("PCA-Bench/PCA-Bench-V1","Autonomous Driving")
dataset_dr = load_dataset("PCA-Bench/PCA-Bench-V1","Domestic Robot")
dataset_og = load_dataset("PCA-Bench/PCA-Bench-V1","Open-World Game")
test_dataset_dict = {"Autonomous-Driving":dataset_ad,"Domestic-Robot":dataset_dr,"Open-World-Game":dataset_og}
test_split = ["test_closed","test_open"]
test_domain = list(test_dataset_dict.keys())
for domain in test_domain:
for split in test_split:
print("testing on %s:%s"%(domain,split))
prediction_results = []
output_filename = output_path+"%s-%s.json"%(domain,split)
prompts = test_dataset_dict[domain][split]['question_prompt']
images = test_dataset_dict[domain][split]['image']
for prompt_id in tqdm(range(len(prompts))):
user_inputs = prompts[prompt_id] # do not change the prompts for fair comparison
index = prompt_id
image = images[prompt_id]
outputs = YOUR_INFERENCE_CODE(user_inputs,image)
prediction_results.append({
'prompt': user_inputs,
'model_output': outputs,
'index': index,
})
with open(output_filename, 'w') as f:
json.dump(prediction_results, f, indent=4)
# submit the 6 json files in the output_path to our email
```
You could also simply compute the multiple-choice accuracy locally as a comparison metric in your own experiments. However, in the online leaderboard, we only consider the average action score and Genuine PCA score when ranking models.
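As an illustration, a minimal local accuracy check might look like the following. This is a simplified sketch: it assumes each prediction's `model_output` mentions the chosen option letter and pairs it with the dataset's `answer_index` field; a real evaluation should parse model outputs more carefully.

```python
def simple_accuracy(predictions, answer_indices):
    """Naive multiple-choice accuracy: counts a prediction as correct when
    the model output mentions the ground-truth option letter (A, B, C, ...)."""
    correct = 0
    for pred, gold_idx in zip(predictions, answer_indices):
        gold_letter = chr(ord("A") + gold_idx)
        # Very rough parsing -- real scoring should extract the choice robustly.
        if gold_letter in pred["model_output"]:
            correct += 1
    return correct / len(predictions)

# Toy example:
preds = [{"model_output": "The answer is B"}, {"model_output": "A"}]
golds = [1, 2]  # ground truth: B and C
print(simple_accuracy(preds, golds))  # 0.5
```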
For more information, refer to the official [GitHub repo](https://github.com/pkunlp-icler/PCA-EVAL) |
laion/School_BUD-E | ---
license: cc-by-4.0
---
|
wiki_source | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- sv
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: WikiSource
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sv
config_name: en-sv
splits:
- name: train
num_bytes: 8153542
num_examples: 33283
download_size: 2375052
dataset_size: 8153542
---
# Dataset Card for WikiSource
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/WikiSource.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
xinqiyang/iruca_llama2_japanese_demo | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 24485.34975369458
num_examples: 15
download_size: 3242
dataset_size: 24485.34975369458
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# iruca-1k: Lazy Llama 2 Formatting
This is a subset (1000 samples) of the excellent [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). It was created using the following [colab notebook](https://colab.research.google.com/drive/1Ad7a9zMmkxuXTOh1Z7-rNSICA4dybpM2?usp=sharing).
Useful if you don't want to reformat it yourself (e.g., with a script). It was designed for [this article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) about fine-tuning a Llama 2 (chat) model in a Google Colab.
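For illustration, a single instruction/response pair can be wrapped in the single-turn Llama 2 chat format described in the linked article. This is a minimal sketch: the helper name is ours, and real pipelines usually rely on the tokenizer's built-in chat template rather than manual string formatting.

```python
# Sketch of the single-turn Llama 2 chat prompt format (no system prompt).
# `to_llama2_prompt` is our own illustrative helper, not part of any library.
def to_llama2_prompt(instruction: str, response: str) -> str:
    return f"<s>[INST] {instruction} [/INST] {response} </s>"

print(to_llama2_prompt("日本の首都はどこですか?", "東京です。"))
```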
### Format from xlsx file to CSV
```bash
# Convert the source xlsx file to CSV
pip install openpyxl pandas
python generate.py

# Create the dataset repo on the Hub and clone it
pip install huggingface_hub
huggingface-cli repo create iruca_llama2_japanese_demo --type dataset
git clone https://huggingface.co/datasets/xinqiyang/iruca_llama2_japanese_demo
``` |
sedkichayata/beauty | ---
license: apache-2.0
license_name: sedki
license_link: LICENSE
--- |
newspop | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
paperswithcode_id: null
pretty_name: News Popularity in Multiple Social Media Platforms
tags:
- social-media-shares-prediction
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: headline
dtype: string
- name: source
dtype: string
- name: topic
dtype: string
- name: publish_date
dtype: string
- name: facebook
dtype: int32
- name: google_plus
dtype: int32
- name: linked_in
dtype: int32
splits:
- name: train
num_bytes: 27927641
num_examples: 93239
download_size: 30338277
dataset_size: 27927641
---
# Dataset Card for News Popularity in Multiple Social Media Platforms
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [UCI](https://archive.ics.uci.edu/ml/datasets/News+Popularity+in+Multiple+Social+Media+Platforms)
- **Repository:**
- **Paper:** [Arxiv](https://arxiv.org/abs/1801.07055)
- **Leaderboard:** [Kaggle](https://www.kaggle.com/nikhiljohnk/news-popularity-in-multiple-social-media-platforms/code)
- **Point of Contact:**
### Dataset Summary
Social sharing data across Facebook, Google+ and LinkedIn for 100k news items on four topics: economy, microsoft, obama and palestine.
### Supported Tasks and Leaderboards
Popularity prediction/shares prediction
### Languages
English
## Dataset Structure
### Data Instances
```
{ "id": 35873,
"title": "Microsoft's 'teen girl' AI turns into a Hitler-loving sex robot within 24 ...",
"headline": "Developers at Microsoft created 'Tay', an AI modelled to speak 'like a teen girl', in order to improve the customer service on their voice",
"source": "Telegraph.co.uk",
"topic": "microsoft",
"publish_date": "2016-03-24 09:53:54",
"facebook": 22346,
"google_plus": 973,
"linked_in": 1009
}
```
### Data Fields
- id: the sentence id in the source dataset
- title: the title of the link as shared on social media
- headline: the headline, or sometimes the lede of the story
- source: the source news site
- topic: the topic: one of "economy", "microsoft", "obama" and "palestine"
- publish_date: the date the original article was published
- facebook: the number of Facebook shares, or -1 if this data wasn't collected
- google_plus: the number of Google+ likes, or -1 if this data wasn't collected
- linked_in: the number of LinkedIn shares, or -1 if this data wasn't collected
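Because -1 is a "not collected" sentinel rather than a real count, it should be masked out before computing statistics. A minimal sketch (field names follow the schema above; the helper function is our own, not part of the dataset):

```python
# Drop the -1 "not collected" sentinel before aggregating share counts.
# Field names follow the schema above; `collected_shares` is our own helper.
PLATFORMS = ("facebook", "google_plus", "linked_in")

def collected_shares(record):
    """Return only the platforms whose counts were actually collected."""
    return {p: record[p] for p in PLATFORMS if record[p] != -1}

record = {"id": 35873, "facebook": 22346, "google_plus": 973, "linked_in": -1}
print(collected_shares(record))  # {'facebook': 22346, 'google_plus': 973}
```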
### Data Splits
None
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
The source headlines were written by journalists, while the titles were written by the
people sharing the stories on social media.
### Annotations
#### Annotation process
The 'annotations' are simply the number of shares, or likes in the case of
Google+ as collected from various API endpoints.
#### Who are the annotators?
Social media users.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License: Creative Commons Attribution 4.0 International License (CC-BY)
### Citation Information
```
@article{Moniz2018MultiSourceSF,
title={Multi-Source Social Feedback of Online News Feeds},
author={N. Moniz and L. Torgo},
journal={ArXiv},
year={2018},
volume={abs/1801.07055}
}
```
### Contributions
Thanks to [@frankier](https://github.com/frankier) for adding this dataset. |
severo/doc-formats-txt-1 | ---
size_categories:
- n<1K
---
# [doc] formats - txt - 1
This dataset contains one txt file at the root. It can only contain one column of strings.
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_31_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 790
num_examples: 32
download_size: 1847
dataset_size: 790
---
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_31_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ronec | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- found
language:
- ro
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: ronec
pretty_name: RONEC
dataset_info:
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_ids
sequence: int32
- name: space_after
sequence: bool
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-ORG
'4': I-ORG
'5': B-GPE
'6': I-GPE
'7': B-LOC
'8': I-LOC
'9': B-NAT_REL_POL
'10': I-NAT_REL_POL
'11': B-EVENT
'12': I-EVENT
'13': B-LANGUAGE
'14': I-LANGUAGE
'15': B-WORK_OF_ART
'16': I-WORK_OF_ART
'17': B-DATETIME
'18': I-DATETIME
'19': B-PERIOD
'20': I-PERIOD
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-NUMERIC
'26': I-NUMERIC
'27': B-ORDINAL
'28': I-ORDINAL
'29': B-FACILITY
'30': I-FACILITY
config_name: ronec
splits:
- name: train
num_bytes: 8701577
num_examples: 9000
- name: validation
num_bytes: 1266490
num_examples: 1330
- name: test
num_bytes: 1902224
num_examples: 2000
download_size: 14675943
dataset_size: 11870291
---
# Dataset Card for RONEC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/dumitrescustefan/ronec
- **Repository:** https://github.com/dumitrescustefan/ronec
- **Paper:** https://arxiv.org/abs/1909.01247
- **Leaderboard:** https://lirobenchmark.github.io/
- **Point of Contact:** [Stefan](dumitrescu.stefan@gmail.com) and [Andrei-Marius](avram.andreimarius@gmail.com)
### Dataset Summary
RONEC, at version 2.0, holds 12,330 sentences with over 0.5M tokens, annotated with 15 classes, for a total of 80,283 distinctly annotated entities.
The corpus has the following classes and distribution in the train/valid/test splits:
| Classes | Total | Train | | Valid | | Test | |
|------------- |:------: |:------: |:-------: |:------: |:-------: |:------: |:-------: |
| | # | # | % | # | % | # | % |
| PERSON | **26130** | 19167 | 73.35 | 2733 | 10.46 | 4230 | 16.19 |
| GPE | **11103** | 8193 | 73.79 | 1182 | 10.65 | 1728 | 15.56 |
| LOC | **2467** | 1824 | 73.94 | 270 | 10.94 | 373 | 15.12 |
| ORG | **7880** | 5688 | 72.18 | 880 | 11.17 | 1312 | 16.65 |
| LANGUAGE | **467** | 342 | 73.23 | 52 | 11.13 | 73 | 15.63 |
| NAT_REL_POL | **4970** | 3673 | 73.90 | 516 | 10.38 | 781 | 15.71 |
| DATETIME | **9614** | 6960 | 72.39 | 1029 | 10.7 | 1625 | 16.9 |
| PERIOD | **1188** | 862 | 72.56 | 129 | 10.86 | 197 | 16.58 |
| QUANTITY | **1588** | 1161 | 73.11 | 181 | 11.4 | 246 | 15.49 |
| MONEY | **1424** | 1041 | 73.10 | 159 | 11.17 | 224 | 15.73 |
| NUMERIC | **7735** | 5734 | 74.13 | 814 | 10.52 | 1187 | 15.35 |
| ORDINAL | **1893** | 1377 | 72.74 | 212 | 11.2 | 304 | 16.06 |
| FACILITY | **1126** | 840 | 74.6 | 113 | 10.04 | 173 | 15.36 |
| WORK_OF_ART | **1596** | 1157 | 72.49 | 176 | 11.03 | 263 | 16.48 |
| EVENT | **1102** | 826 | 74.95 | 107 | 9.71 | 169 | 15.34 |
### Supported Tasks and Leaderboards
The corpus is meant to train Named Entity Recognition models for the Romanian language.
Please see the leaderboard here: [https://lirobenchmark.github.io/](https://lirobenchmark.github.io/)
### Languages
RONEC is in Romanian (`ro`)
## Dataset Structure
### Data Instances
The dataset is a list of instances. For example, an instance looks like:
```json
{
"id": 10454,
"tokens": ["Pentru", "a", "vizita", "locația", "care", "va", "fi", "pusă", "la", "dispoziția", "reprezentanților", "consiliilor", "județene", ",", "o", "delegație", "a", "U.N.C.J.R.", ",", "din", "care", "a", "făcut", "parte", "și", "dl", "Constantin", "Ostaficiuc", ",", "președintele", "C.J.T.", ",", "a", "fost", "prezentă", "la", "Bruxelles", ",", "între", "1-3", "martie", "."],
"ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "O", "O", "O", "O", "O", "O", "B-ORG", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "B-ORG", "O", "O", "O", "O", "O", "B-GPE", "O", "B-PERIOD", "I-PERIOD", "I-PERIOD", "O"],
"ner_ids": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 3, 0, 0, 0, 0, 0, 5, 0, 19, 20, 20, 0],
"space_after": [true, true, true, true, true, true, true, true, true, true, true, true, false, true, true, true, true, false, true, true, true, true, true, true, true, true, true, false, true, true, false, true, true, true, true, true, false, true, true, true, false, false]
}
```
### Data Fields
The fields of each examples are:
- ``tokens`` are the words of the sentence.
- ``ner_tags`` are the string tags assigned to each token, following the BIO2 format. For example, the span ``"între", "1-3", "martie"`` has three tokens, but is a single class ``PERIOD``, marked as ``"B-PERIOD", "I-PERIOD", "I-PERIOD"``.
- ``ner_ids`` are the integer encoding of each tag, to be compatible with the standard and to be quickly used for model training. Note that each ``B``-starting tag is odd, and each ``I``-starting tag is even.
- ``space_after`` is used to help if there is a need to detokenize the dataset. A ``true`` value means that there is a space after the token on that respective position.
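As a sketch, ``space_after`` can be used to rebuild the original sentence from the tokens (the ``detokenize`` helper below is our own illustration, not shipped with the dataset):

```python
# Rebuild the raw sentence from `tokens` and `space_after`.
# `detokenize` is our own helper, not part of the RONEC distribution.
def detokenize(tokens, space_after):
    return "".join(tok + (" " if space else "")
                   for tok, space in zip(tokens, space_after))

tokens = ["Ana", "are", "mere", "."]
space_after = [True, True, False, False]
print(detokenize(tokens, space_after))  # "Ana are mere."
```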
### Data Splits
The dataset is split into train: 9000 sentences, dev: 1330 sentences, and test: 2000 sentences.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
*The corpus data source represents sentences that are free of copyright, taken from older datasets like the freely available SEETimes and more recent datasources like the Romanian Wikipedia or the Common Crawl.*
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
The corpus was annotated with the following classes:
1. PERSON - proper nouns, including common nouns or pronouns if they refer to a person (e.g., 'sister')
2. GPE - geo political entity, like a city or a country; has to have a governance form
3. LOC - location, like a sea, continent, region, road, address, etc.
4. ORG - organization
5. LANGUAGE - language (e.g. Romanian, French, etc.)
6. NAT_REL_POL - national, religious or political organizations
7. DATETIME - a time and date in any format, including references to time (e.g. 'yesterday')
8. PERIOD - a period that is precisely bounded by two date times
9. QUANTITY - a quantity that is not numerical; it has a unit of measure
10. MONEY - a monetary value, numeric or otherwise
11. NUMERIC - a simple numeric value, represented as digits or words
12. ORDINAL - an ordinal value like 'first', 'third', etc.
13. FACILITY - a named place that is easily recognizable
14. WORK_OF_ART - a work of art like a named TV show, painting, etc.
15. EVENT - a named recognizable or periodic major event
#### Annotation process
The corpus was annotated by 3 language experts, and was cross-checked for annotation consistency. The annotation took several months to complete, but the result is a high quality dataset.
#### Who are the annotators?
Stefan Dumitrescu (lead).
### Personal and Sensitive Information
All the source data is already freely downloadable and usable online, so there are no privacy concerns.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
MIT License
### Citation Information
```bibtex
@article{dumitrescu2019introducing,
title={Introducing RONEC--the Romanian Named Entity Corpus},
author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},
journal={arXiv preprint arXiv:1909.01247},
year={2019}
}
```
### Contributions
Thanks to [@iliemihai](https://github.com/iliemihai) for adding v1.0 of the dataset. |
thanhduycao/data_soict_train_synthesis_entity | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: sentence_norm
dtype: string
splits:
- name: train
num_bytes: 6498333095
num_examples: 18312
- name: test
num_bytes: 389981876
num_examples: 748
download_size: 1639149838
dataset_size: 6888314971
---
# Dataset Card for "data_soict_train_synthesis_entity"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chillguypoonawala/temp123 | ---
license: mit
---
|
dinhquangson/FUNSD_RE | ---
license: mit
task_categories:
- token-classification
--- |
IlyaGusev/ru_turbo_alpaca | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: alternative_output
dtype: string
- name: label
dtype: string
- name: all_labels
sequence: string
- name: agreement
dtype: float32
- name: overlap
dtype: uint32
splits:
- name: train
num_bytes: 54774775
num_examples: 29822
download_size: 14565995
dataset_size: 54774775
license: cc-by-4.0
task_categories:
- text-generation
- text2text-generation
language:
- ru
tags:
- instruction-finetuning
- instruction generation
- alpaca
size_categories:
- 10K<n<100K
---
# RuTurboAlpaca
Dataset of ChatGPT-generated instructions in Russian.
<img src="https://cdn.midjourney.com/770a35fa-00c0-4214-bb88-727dbc7cfaf3/0_0.png" >
* Code: [rulm/self_instruct](https://github.com/IlyaGusev/rulm/tree/master/self_instruct)
* Code is based on [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [self-instruct](https://github.com/yizhongw/self-instruct/).
* 29822 examples
Preliminary evaluation by an expert based on 400 samples:
* 83% of samples contain correct instructions
* 63% of samples have correct instructions and outputs
Crowdsourcing-based evaluation on 3500 samples:
* 90% of samples contain correct instructions
* 68% of samples have correct instructions and outputs
Prompt template:
```
Составь набор из {{num_tasks}} разных заданий для дообучения языковой модели:
1. Делай задания максимально непохожими друг на друга: по типу, по запрашиваемым действиям, по формулировке, по наличию входа.
2. Задания должны быть выполнимы языковой моделью, которая не умеет работать с картинками, видео, и аудио, и не имеет доступа ко внешнему миру.
3. Используй хороший грамотный русский язык.
4. Делай задания в одно или два предложения.
5. Генерируй подходящие реалистичные входные данные, не используй общие шаблоны типа \"Имя человека\" или [имя] вместо реального имени.
6. Задание может быть без входных данных, в таком случае используй токен <noinput> вместо них.
7. На выходе сгенерируй подходящий длинный ответ.
8. Следуй тому же шаблону, который приведен в примерах, разделяй задания с помощью ###. Это важно!
Примеры заданий:
{% for task in example_tasks %}
{{task.index}}. Задание: {{task.instruction}}
{{task.index}}. Вход: {{task.input}}
{{task.index}}. Выход: {{task.output}}
{{ "###" if not loop.last else "" }}
{% endfor %}
```
## Legal disclaimer
Data is based on OpenAI’s gpt-3.5-turbo, whose [terms of use](https://openai.com/policies/terms-of-use) prohibit us from developing models that compete with OpenAI. This restriction does not apply to you.
aladaf/homo-silicus-unboxing-mistral-instruct | ---
license: apache-2.0
---
|
sc890/DEEPFRUlT_DATASET | ---
language:
- en
license: apache-2.0
size_categories:
- 100M<n<1B
task_categories:
- feature-extraction
- text-classification
tags:
- biomedical
- imaging
- computer vision
- tuberculosis
- multimodal
dataset_info:
features:
- name: image_name
dtype: string
- name: image_id
dtype: string
- name: number
dtype: string
- name: image_path
dtype: string
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 1229202
num_examples: 10689
- name: test
num_bytes: 306617
num_examples: 2694
download_size: 42809832
dataset_size: 70088819.588
configs:
- config_name: default
data_files:
- split: train
path: data/train-data-*
- split: test
path: data/test-data-*
---
# DeepFruit Dataset
<!--The dataset is from Mendeley, comprises 21,122 images of 20 diverse fruit types across 8 different combinations and 2 csv files. -->
## Dataset Details
This dataset contains a total of 21,122 fully labeled images featuring 20 different kinds of fruit. It is structured into an 80% training set (16,899 images) and a 20% testing set (4,223 images), facilitating a ready-to-use framework for model training and evaluation.
Additionally, there are two CSV files that label the types of fruits depicted in each image.
### Dataset Description
The "DeepFruit" dataset is a comprehensive collection designed for the advancement of research in fruit detection, recognition, and classification. It supports a wide array of applications, including but not limited to fruit recognition systems and calorie estimation. It comprises a total of 21,122 fully labeled images featuring 20 different kinds of fruit, structured into an 80% training set (16,899 images) and a 20% testing set (4,223 images), facilitating a ready-to-use framework for model training and evaluation. This dataset provides a valuable resource for researchers aiming to develop automated systems leveraging deep learning, computer vision, and machine learning techniques for fruit image analysis.
- **Language(s):** en
- **License:** Mendeley License: CC BY 4.0
### Dataset Sources
Data: https://data.mendeley.com/datasets/5prc54r4rt/1
Paper: https://www.sciencedirect.com/science/article/pii/S2352340923006248#sec0003
## Uses
Convert Fruit Dataset From Image to PIL.
### Direct Use
## Dataset Structure
Splits: "train" and "test".

- `image_id`: datasets.Value("string")
- `number` (folder number): datasets.Value("int32")
- `image`: datasets.Image()
- `image_path`: datasets.Value("string")
- `label`: datasets.Value("string")
### Curation Rationale
The curation rationale lies in the dataset's foundational role for enabling advanced machine learning applications in dietary and health management. By converting fruit images to the PIL format, it prepares data for analysis that could lead to innovations in recognizing and understanding fruit characteristics. This groundwork is crucial for developing technologies that assist in dietary planning, nutritional education, and managing health conditions through better food choices, thereby having a broad positive effect on public health and awareness.
#### Data Collection and Processing
Image Format: All images are expected to be in JPEG format. Non-JPEG files are excluded during the data processing phase, ensuring consistency in file format.
Label Extraction: Labels are extracted from separate CSV files (Labels_Train.csv and Labels_Test.csv), which map image names to their corresponding fruit labels. This method ensures that labels are organized and accessible.
Data Splitting: The dataset is split into training and testing sets, as indicated by the separate ZIP files for train and test data. This standard practice facilitates the evaluation of model performance on unseen data.
Python Imaging Library (PIL): Used for opening and manipulating images in the Python Imaging Library format. This choice is made for its wide adoption and ease of integration with other Python libraries for data science and machine learning tasks.
Datasets Library from Hugging Face: Facilitates the creation, distribution, and loading of the dataset. This library provides a standardized way to work with datasets, including features for splitting, processing, and accessing dataset information.
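As a small sketch of the JPEG-to-PIL round trip described above (assuming Pillow is installed; the image here is synthetic, not taken from the dataset):

```python
from io import BytesIO

from PIL import Image  # Pillow; assumed available

# Synthetic round trip: encode a small RGB image as JPEG in memory and
# reopen it with PIL, mirroring how the dataset's JPEG files are loaded.
img = Image.new("RGB", (64, 64), color=(200, 30, 30))
buf = BytesIO()
img.save(buf, format="JPEG")
buf.seek(0)
reloaded = Image.open(buf)
print(reloaded.size, reloaded.format)  # (64, 64) JPEG
```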
#### Supported Tasks
The fruit images were captured under various conditions, including different plate sizes, shapes, and situations, as well as varying angles, brightness levels, and distances.
1. Foundation for Advanced ML Models/Algorithms Training: By converting the fruit dataset into PIL format, we ensure that the data is in a uniform, accessible format that is compatible with various machine learning and deep learning libraries. This standardization is vital for the efficient training, validation, and testing of different classification models.
2. Enables Comprehensive Analysis: The dataset, featuring a wide variety of fruit images, is essential for developing a deep understanding of fruit characteristics. This includes not only basic identification but also detailed analyses such as sugar content, calorie count, and vitamin composition, which are crucial for dietary planning and health management.
3. Basis for Practical Applications: The dataset's conversion and subsequent use in machine learning model training are not academic exercises but are intended for real-world applications. The insights gained from this project could significantly impact dietary planning, particularly for individuals with specific health considerations like diabetes, by providing accurate, detailed information about fruit characteristics.
## Bias, Risks, and Limitations
Representation Bias: Given the dataset comprises 20 diverse fruit types across 8 combinations, there might be an underrepresentation of certain fruits, particularly those that are less common or indigenous to specific regions. This could lead to a model trained on this dataset performing less accurately on fruit types or varieties not included or underrepresented.
Misclassification Risk: In critical applications where accurate fruit identification is crucial (e.g., dietary management apps, agricultural sorting mechanisms), misclassification could lead to adverse outcomes. This risk is heightened if the dataset contains mislabeled examples or if the model struggles with fruits that have similar appearances.
Scope of Application: The dataset's utility is primarily confined to the domain of fruit recognition and classification. It may not be suitable for more nuanced tasks within agricultural technology, such as detecting fruit diseases or assessing ripeness, unless supplemented with additional, specialized data. |
da2-52000720/vec-seed | ---
dataset_info:
features:
- name: syllable
dtype: string
- name: wrong
dtype: string
- name: correct
dtype: string
splits:
- name: seed0
num_bytes: 907640.0263149611
num_examples: 31289
- name: seed1
num_bytes: 12758155
num_examples: 436392
- name: seed0_filtered
num_bytes: 263339.9336508038
num_examples: 9092
- name: seed1_filtered
num_bytes: 3312328.0105730626
num_examples: 113298
- name: seed1_1
num_bytes: 13386857
num_examples: 459920
- name: seed_filtered
num_bytes: 3282350
num_examples: 117816
download_size: 43197420
dataset_size: 33910669.970538825
configs:
- config_name: default
data_files:
- split: seed0
path: data/seed0-*
- split: seed1
path: data/seed1-*
- split: seed0_filtered
path: data/seed0_filtered-*
- split: seed1_filtered
path: data/seed1_filtered-*
- split: seed_filtered
path: data/seed_filtered-*
- split: seed1_1
path: data/seed1_1-*
---
|
open-llm-leaderboard/details_TheBloke__gpt4-x-vicuna-13B-HF | ---
pretty_name: Evaluation run of TheBloke/gpt4-x-vicuna-13B-HF
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheBloke/gpt4-x-vicuna-13B-HF](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-HF)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__gpt4-x-vicuna-13B-HF\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-07-19T19:01:51.030763](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__gpt4-x-vicuna-13B-HF/blob/main/results_2023-07-19T19%3A01%3A51.030763.json)\
\ (note that their might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5137597162733054,\n\
\ \"acc_stderr\": 0.03484317305077308,\n \"acc_norm\": 0.5174954549900392,\n\
\ \"acc_norm_stderr\": 0.03482742951911445,\n \"mc1\": 0.3635250917992656,\n\
\ \"mc1_stderr\": 0.016838862883965827,\n \"mc2\": 0.5357942440986606,\n\
\ \"mc2_stderr\": 0.015916184024373756\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5110921501706485,\n \"acc_stderr\": 0.01460779491401305,\n\
\ \"acc_norm\": 0.5341296928327645,\n \"acc_norm_stderr\": 0.014577311315231104\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6038637721569409,\n\
\ \"acc_stderr\": 0.004880937933163287,\n \"acc_norm\": 0.8012348137821151,\n\
\ \"acc_norm_stderr\": 0.003982553164086259\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.45925925925925926,\n\
\ \"acc_stderr\": 0.04304979692464243,\n \"acc_norm\": 0.45925925925925926,\n\
\ \"acc_norm_stderr\": 0.04304979692464243\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.506578947368421,\n \"acc_stderr\": 0.040685900502249704,\n\
\ \"acc_norm\": 0.506578947368421,\n \"acc_norm_stderr\": 0.040685900502249704\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n\
\ \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \
\ \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.4867924528301887,\n \"acc_stderr\": 0.030762134874500482,\n\
\ \"acc_norm\": 0.4867924528301887,\n \"acc_norm_stderr\": 0.030762134874500482\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5486111111111112,\n\
\ \"acc_stderr\": 0.04161402398403279,\n \"acc_norm\": 0.5486111111111112,\n\
\ \"acc_norm_stderr\": 0.04161402398403279\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\"\
: 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4046242774566474,\n\
\ \"acc_stderr\": 0.03742461193887249,\n \"acc_norm\": 0.4046242774566474,\n\
\ \"acc_norm_stderr\": 0.03742461193887249\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.04023382273617747,\n\
\ \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.04023382273617747\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n\
\ \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.3617021276595745,\n \"acc_stderr\": 0.03141082197596241,\n\
\ \"acc_norm\": 0.3617021276595745,\n \"acc_norm_stderr\": 0.03141082197596241\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2894736842105263,\n\
\ \"acc_stderr\": 0.04266339443159394,\n \"acc_norm\": 0.2894736842105263,\n\
\ \"acc_norm_stderr\": 0.04266339443159394\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.47586206896551725,\n \"acc_stderr\": 0.041618085035015295,\n\
\ \"acc_norm\": 0.47586206896551725,\n \"acc_norm_stderr\": 0.041618085035015295\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.28835978835978837,\n \"acc_stderr\": 0.023330654054535896,\n \"\
acc_norm\": 0.28835978835978837,\n \"acc_norm_stderr\": 0.023330654054535896\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n\
\ \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n\
\ \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5741935483870968,\n\
\ \"acc_stderr\": 0.028129112709165894,\n \"acc_norm\": 0.5741935483870968,\n\
\ \"acc_norm_stderr\": 0.028129112709165894\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.3891625615763547,\n \"acc_stderr\": 0.03430462416103873,\n\
\ \"acc_norm\": 0.3891625615763547,\n \"acc_norm_stderr\": 0.03430462416103873\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\"\
: 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6424242424242425,\n \"acc_stderr\": 0.037425970438065864,\n\
\ \"acc_norm\": 0.6424242424242425,\n \"acc_norm_stderr\": 0.037425970438065864\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6363636363636364,\n \"acc_stderr\": 0.034273086529999344,\n \"\
acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.034273086529999344\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7046632124352331,\n \"acc_stderr\": 0.03292296639155141,\n\
\ \"acc_norm\": 0.7046632124352331,\n \"acc_norm_stderr\": 0.03292296639155141\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.43846153846153846,\n \"acc_stderr\": 0.02515826601686857,\n\
\ \"acc_norm\": 0.43846153846153846,\n \"acc_norm_stderr\": 0.02515826601686857\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.23703703703703705,\n \"acc_stderr\": 0.025928876132766135,\n \
\ \"acc_norm\": 0.23703703703703705,\n \"acc_norm_stderr\": 0.025928876132766135\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.0322529423239964,\n \
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.0322529423239964\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"\
acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.6770642201834862,\n \"acc_stderr\": 0.02004811592341531,\n \"\
acc_norm\": 0.6770642201834862,\n \"acc_norm_stderr\": 0.02004811592341531\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.36574074074074076,\n \"acc_stderr\": 0.03284738857647206,\n \"\
acc_norm\": 0.36574074074074076,\n \"acc_norm_stderr\": 0.03284738857647206\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6715686274509803,\n \"acc_stderr\": 0.03296245110172228,\n \"\
acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.03296245110172228\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7046413502109705,\n \"acc_stderr\": 0.02969633871342288,\n \
\ \"acc_norm\": 0.7046413502109705,\n \"acc_norm_stderr\": 0.02969633871342288\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5739910313901345,\n\
\ \"acc_stderr\": 0.03318833286217281,\n \"acc_norm\": 0.5739910313901345,\n\
\ \"acc_norm_stderr\": 0.03318833286217281\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6717557251908397,\n \"acc_stderr\": 0.04118438565806298,\n\
\ \"acc_norm\": 0.6717557251908397,\n \"acc_norm_stderr\": 0.04118438565806298\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6776859504132231,\n \"acc_stderr\": 0.042664163633521685,\n \"\
acc_norm\": 0.6776859504132231,\n \"acc_norm_stderr\": 0.042664163633521685\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6574074074074074,\n\
\ \"acc_stderr\": 0.045879047413018105,\n \"acc_norm\": 0.6574074074074074,\n\
\ \"acc_norm_stderr\": 0.045879047413018105\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6625766871165644,\n \"acc_stderr\": 0.03714908409935574,\n\
\ \"acc_norm\": 0.6625766871165644,\n \"acc_norm_stderr\": 0.03714908409935574\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.39285714285714285,\n\
\ \"acc_stderr\": 0.04635550135609976,\n \"acc_norm\": 0.39285714285714285,\n\
\ \"acc_norm_stderr\": 0.04635550135609976\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6796116504854369,\n \"acc_stderr\": 0.04620284082280041,\n\
\ \"acc_norm\": 0.6796116504854369,\n \"acc_norm_stderr\": 0.04620284082280041\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7735042735042735,\n\
\ \"acc_stderr\": 0.027421007295392912,\n \"acc_norm\": 0.7735042735042735,\n\
\ \"acc_norm_stderr\": 0.027421007295392912\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6871008939974457,\n\
\ \"acc_stderr\": 0.01658093594030406,\n \"acc_norm\": 0.6871008939974457,\n\
\ \"acc_norm_stderr\": 0.01658093594030406\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5375722543352601,\n \"acc_stderr\": 0.026842985519615375,\n\
\ \"acc_norm\": 0.5375722543352601,\n \"acc_norm_stderr\": 0.026842985519615375\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.31731843575418994,\n\
\ \"acc_stderr\": 0.01556639263005703,\n \"acc_norm\": 0.31731843575418994,\n\
\ \"acc_norm_stderr\": 0.01556639263005703\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5424836601307189,\n \"acc_stderr\": 0.028526383452142638,\n\
\ \"acc_norm\": 0.5424836601307189,\n \"acc_norm_stderr\": 0.028526383452142638\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5530546623794212,\n\
\ \"acc_stderr\": 0.02823776942208535,\n \"acc_norm\": 0.5530546623794212,\n\
\ \"acc_norm_stderr\": 0.02823776942208535\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5740740740740741,\n \"acc_stderr\": 0.027513747284379424,\n\
\ \"acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.027513747284379424\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.38652482269503546,\n \"acc_stderr\": 0.02904919034254346,\n \
\ \"acc_norm\": 0.38652482269503546,\n \"acc_norm_stderr\": 0.02904919034254346\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41264667535853977,\n\
\ \"acc_stderr\": 0.012573836633799015,\n \"acc_norm\": 0.41264667535853977,\n\
\ \"acc_norm_stderr\": 0.012573836633799015\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.44485294117647056,\n \"acc_stderr\": 0.03018753206032939,\n\
\ \"acc_norm\": 0.44485294117647056,\n \"acc_norm_stderr\": 0.03018753206032939\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5196078431372549,\n \"acc_stderr\": 0.020212274976302957,\n \
\ \"acc_norm\": 0.5196078431372549,\n \"acc_norm_stderr\": 0.020212274976302957\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5454545454545454,\n\
\ \"acc_stderr\": 0.04769300568972743,\n \"acc_norm\": 0.5454545454545454,\n\
\ \"acc_norm_stderr\": 0.04769300568972743\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5795918367346938,\n \"acc_stderr\": 0.03160106993449601,\n\
\ \"acc_norm\": 0.5795918367346938,\n \"acc_norm_stderr\": 0.03160106993449601\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7412935323383084,\n\
\ \"acc_stderr\": 0.030965903123573033,\n \"acc_norm\": 0.7412935323383084,\n\
\ \"acc_norm_stderr\": 0.030965903123573033\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.45180722891566266,\n\
\ \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.45180722891566266,\n\
\ \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7426900584795322,\n \"acc_stderr\": 0.03352799844161865,\n\
\ \"acc_norm\": 0.7426900584795322,\n \"acc_norm_stderr\": 0.03352799844161865\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3635250917992656,\n\
\ \"mc1_stderr\": 0.016838862883965827,\n \"mc2\": 0.5357942440986606,\n\
\ \"mc2_stderr\": 0.015916184024373756\n }\n}\n```"
repo_url: https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-HF
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|arc:challenge|25_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hellaswag|10_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:01:51.030763.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:01:51.030763.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T19:01:51.030763.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T19:01:51.030763.parquet'
- config_name: results
data_files:
- split: 2023_07_19T19_01_51.030763
path:
- results_2023-07-19T19:01:51.030763.parquet
- split: latest
path:
- results_2023-07-19T19:01:51.030763.parquet
---
# Dataset Card for Evaluation run of TheBloke/gpt4-x-vicuna-13B-HF
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-HF
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/gpt4-x-vicuna-13B-HF](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-HF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__gpt4-x-vicuna-13B-HF",
"harness_truthfulqa_mc_0",
	split="latest")
```
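The timestamped split names are a mechanical transformation of the run timestamp: dashes and colons become underscores (presumably because those characters are not allowed in split names), while the rest is kept verbatim. A small illustrative helper (not part of any official tooling) makes the mapping explicit:

```python
def timestamp_to_split_name(run_timestamp: str) -> str:
    """Map a run timestamp such as '2023-07-19T19:01:51.030763' to the
    corresponding split name ('2023_07_19T19_01_51.030763'): dashes and
    colons are replaced with underscores; the rest is kept verbatim."""
    return run_timestamp.replace("-", "_").replace(":", "_")


print(timestamp_to_split_name("2023-07-19T19:01:51.030763"))
# 2023_07_19T19_01_51.030763
```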
## Latest results
These are the [latest results from run 2023-07-19T19:01:51.030763](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__gpt4-x-vicuna-13B-HF/blob/main/results_2023-07-19T19%3A01%3A51.030763.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5137597162733054,
"acc_stderr": 0.03484317305077308,
"acc_norm": 0.5174954549900392,
"acc_norm_stderr": 0.03482742951911445,
"mc1": 0.3635250917992656,
"mc1_stderr": 0.016838862883965827,
"mc2": 0.5357942440986606,
"mc2_stderr": 0.015916184024373756
},
"harness|arc:challenge|25": {
"acc": 0.5110921501706485,
"acc_stderr": 0.01460779491401305,
"acc_norm": 0.5341296928327645,
"acc_norm_stderr": 0.014577311315231104
},
"harness|hellaswag|10": {
"acc": 0.6038637721569409,
"acc_stderr": 0.004880937933163287,
"acc_norm": 0.8012348137821151,
"acc_norm_stderr": 0.003982553164086259
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.45925925925925926,
"acc_stderr": 0.04304979692464243,
"acc_norm": 0.45925925925925926,
"acc_norm_stderr": 0.04304979692464243
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.506578947368421,
"acc_stderr": 0.040685900502249704,
"acc_norm": 0.506578947368421,
"acc_norm_stderr": 0.040685900502249704
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4867924528301887,
"acc_stderr": 0.030762134874500482,
"acc_norm": 0.4867924528301887,
"acc_norm_stderr": 0.030762134874500482
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5486111111111112,
"acc_stderr": 0.04161402398403279,
"acc_norm": 0.5486111111111112,
"acc_norm_stderr": 0.04161402398403279
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4046242774566474,
"acc_stderr": 0.03742461193887249,
"acc_norm": 0.4046242774566474,
"acc_norm_stderr": 0.03742461193887249
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.20588235294117646,
"acc_stderr": 0.04023382273617747,
"acc_norm": 0.20588235294117646,
"acc_norm_stderr": 0.04023382273617747
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3617021276595745,
"acc_stderr": 0.03141082197596241,
"acc_norm": 0.3617021276595745,
"acc_norm_stderr": 0.03141082197596241
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2894736842105263,
"acc_stderr": 0.04266339443159394,
"acc_norm": 0.2894736842105263,
"acc_norm_stderr": 0.04266339443159394
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.47586206896551725,
"acc_stderr": 0.041618085035015295,
"acc_norm": 0.47586206896551725,
"acc_norm_stderr": 0.041618085035015295
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.28835978835978837,
"acc_stderr": 0.023330654054535896,
"acc_norm": 0.28835978835978837,
"acc_norm_stderr": 0.023330654054535896
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5741935483870968,
"acc_stderr": 0.028129112709165894,
"acc_norm": 0.5741935483870968,
"acc_norm_stderr": 0.028129112709165894
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3891625615763547,
"acc_stderr": 0.03430462416103873,
"acc_norm": 0.3891625615763547,
"acc_norm_stderr": 0.03430462416103873
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6424242424242425,
"acc_stderr": 0.037425970438065864,
"acc_norm": 0.6424242424242425,
"acc_norm_stderr": 0.037425970438065864
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.034273086529999344,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.034273086529999344
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7046632124352331,
"acc_stderr": 0.03292296639155141,
"acc_norm": 0.7046632124352331,
"acc_norm_stderr": 0.03292296639155141
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.43846153846153846,
"acc_stderr": 0.02515826601686857,
"acc_norm": 0.43846153846153846,
"acc_norm_stderr": 0.02515826601686857
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.025928876132766135,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.025928876132766135
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.0322529423239964,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.0322529423239964
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.6770642201834862,
"acc_stderr": 0.02004811592341531,
"acc_norm": 0.6770642201834862,
"acc_norm_stderr": 0.02004811592341531
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.36574074074074076,
"acc_stderr": 0.03284738857647206,
"acc_norm": 0.36574074074074076,
"acc_norm_stderr": 0.03284738857647206
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.03296245110172228,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.03296245110172228
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7046413502109705,
"acc_stderr": 0.02969633871342288,
"acc_norm": 0.7046413502109705,
"acc_norm_stderr": 0.02969633871342288
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5739910313901345,
"acc_stderr": 0.03318833286217281,
"acc_norm": 0.5739910313901345,
"acc_norm_stderr": 0.03318833286217281
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6717557251908397,
"acc_stderr": 0.04118438565806298,
"acc_norm": 0.6717557251908397,
"acc_norm_stderr": 0.04118438565806298
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6776859504132231,
"acc_stderr": 0.042664163633521685,
"acc_norm": 0.6776859504132231,
"acc_norm_stderr": 0.042664163633521685
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.045879047413018105,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.045879047413018105
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6625766871165644,
"acc_stderr": 0.03714908409935574,
"acc_norm": 0.6625766871165644,
"acc_norm_stderr": 0.03714908409935574
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.39285714285714285,
"acc_stderr": 0.04635550135609976,
"acc_norm": 0.39285714285714285,
"acc_norm_stderr": 0.04635550135609976
},
"harness|hendrycksTest-management|5": {
"acc": 0.6796116504854369,
"acc_stderr": 0.04620284082280041,
"acc_norm": 0.6796116504854369,
"acc_norm_stderr": 0.04620284082280041
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7735042735042735,
"acc_stderr": 0.027421007295392912,
"acc_norm": 0.7735042735042735,
"acc_norm_stderr": 0.027421007295392912
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6871008939974457,
"acc_stderr": 0.01658093594030406,
"acc_norm": 0.6871008939974457,
"acc_norm_stderr": 0.01658093594030406
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5375722543352601,
"acc_stderr": 0.026842985519615375,
"acc_norm": 0.5375722543352601,
"acc_norm_stderr": 0.026842985519615375
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.31731843575418994,
"acc_stderr": 0.01556639263005703,
"acc_norm": 0.31731843575418994,
"acc_norm_stderr": 0.01556639263005703
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5424836601307189,
"acc_stderr": 0.028526383452142638,
"acc_norm": 0.5424836601307189,
"acc_norm_stderr": 0.028526383452142638
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5530546623794212,
"acc_stderr": 0.02823776942208535,
"acc_norm": 0.5530546623794212,
"acc_norm_stderr": 0.02823776942208535
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5740740740740741,
"acc_stderr": 0.027513747284379424,
"acc_norm": 0.5740740740740741,
"acc_norm_stderr": 0.027513747284379424
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.38652482269503546,
"acc_stderr": 0.02904919034254346,
"acc_norm": 0.38652482269503546,
"acc_norm_stderr": 0.02904919034254346
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.41264667535853977,
"acc_stderr": 0.012573836633799015,
"acc_norm": 0.41264667535853977,
"acc_norm_stderr": 0.012573836633799015
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.44485294117647056,
"acc_stderr": 0.03018753206032939,
"acc_norm": 0.44485294117647056,
"acc_norm_stderr": 0.03018753206032939
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5196078431372549,
"acc_stderr": 0.020212274976302957,
"acc_norm": 0.5196078431372549,
"acc_norm_stderr": 0.020212274976302957
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5454545454545454,
"acc_stderr": 0.04769300568972743,
"acc_norm": 0.5454545454545454,
"acc_norm_stderr": 0.04769300568972743
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5795918367346938,
"acc_stderr": 0.03160106993449601,
"acc_norm": 0.5795918367346938,
"acc_norm_stderr": 0.03160106993449601
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7412935323383084,
"acc_stderr": 0.030965903123573033,
"acc_norm": 0.7412935323383084,
"acc_norm_stderr": 0.030965903123573033
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.45180722891566266,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.45180722891566266,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7426900584795322,
"acc_stderr": 0.03352799844161865,
"acc_norm": 0.7426900584795322,
"acc_norm_stderr": 0.03352799844161865
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3635250917992656,
"mc1_stderr": 0.016838862883965827,
"mc2": 0.5357942440986606,
"mc2_stderr": 0.015916184024373756
}
}
```
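Since every `harness|hendrycksTest-*` entry shares the same shape, aggregate numbers such as the overall MMLU accuracy can be recomputed directly from a parsed copy of this JSON. A minimal sketch (the `results` dict below is a toy stand-in for the full structure above):

```python
def mmlu_average_acc(results: dict) -> float:
    """Average the 'acc' metric over all hendrycksTest (MMLU) subtasks
    in a results dict shaped like the JSON above."""
    accs = [
        metrics["acc"]
        for task, metrics in results.items()
        if task.startswith("harness|hendrycksTest-")
    ]
    return sum(accs) / len(accs)


# Toy stand-in with two MMLU subtasks and one non-MMLU entry:
results = {
    "harness|hendrycksTest-virology|5": {"acc": 0.25},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.75},
    "harness|truthfulqa:mc|0": {"mc1": 0.36},
}
print(mmlu_average_acc(results))  # 0.5
```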
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
galman33/gal_yair_166000_256x256_fixed | ---
dataset_info:
features:
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: country_code
dtype:
class_label:
names:
'0': ad
'1': ae
'2': al
'3': aq
'4': ar
'5': au
'6': bd
'7': be
'8': bg
'9': bm
'10': bo
'11': br
'12': bt
'13': bw
'14': ca
'15': ch
'16': cl
'17': co
'18': cz
'19': de
'20': dk
'21': ec
'22': ee
'23': es
'24': fi
'25': fr
'26': gb
'27': gh
'28': gl
'29': gr
'30': gt
'31': hk
'32': hr
'33': hu
'34': id
'35': ie
'36': il
'37': is
'38': it
'39': ix
'40': jp
'41': kg
'42': kh
'43': kr
'44': la
'45': lk
'46': ls
'47': lt
'48': lu
'49': lv
'50': me
'51': mg
'52': mk
'53': mn
'54': mo
'55': mt
'56': mx
'57': my
'58': nl
'59': 'no'
'60': nz
'61': pe
'62': ph
'63': pl
'64': pt
'65': ro
'66': rs
'67': ru
'68': se
'69': sg
'70': si
'71': sk
'72': sn
'73': sz
'74': th
'75': tn
'76': tr
'77': tw
'78': ua
'79': ug
'80': us
'81': uy
'82': za
- name: image
dtype: image
splits:
- name: train
num_bytes: 16156275005.0
num_examples: 166000
download_size: 16115168331
dataset_size: 16156275005.0
---
# Dataset Card for "gal_yair_166000_256x256_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/olivia_asobiasobase | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Olivia
This is the dataset of Olivia, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 641 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 300 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 300 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 641 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 641 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 641 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
rwitz2/no_robots_formatted | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 28805395
num_examples: 9500
- name: test
num_bytes: 1545168
num_examples: 500
download_size: 18891461
dataset_size: 30350563
---
# Dataset Card for "no_robots_formatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lilsunx/sih | ---
license: openrail
---
|
open-llm-leaderboard/details_azarafrooz__mistral-v2-7b-selfplay-low-tmp | ---
pretty_name: Evaluation run of azarafrooz/mistral-v2-7b-selfplay-low-tmp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [azarafrooz/mistral-v2-7b-selfplay-low-tmp](https://huggingface.co/azarafrooz/mistral-v2-7b-selfplay-low-tmp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_azarafrooz__mistral-v2-7b-selfplay-low-tmp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-21T11:31:23.339994](https://huggingface.co/datasets/open-llm-leaderboard/details_azarafrooz__mistral-v2-7b-selfplay-low-tmp/blob/main/results_2024-03-21T11-31-23.339994.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6075157900064023,\n\
\ \"acc_stderr\": 0.0331399850758573,\n \"acc_norm\": 0.6121293596581681,\n\
\ \"acc_norm_stderr\": 0.03381162626787054,\n \"mc1\": 0.5287637698898409,\n\
\ \"mc1_stderr\": 0.017474513848525518,\n \"mc2\": 0.6813244751586996,\n\
\ \"mc2_stderr\": 0.015204757863568796\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5861774744027304,\n \"acc_stderr\": 0.014392730009221005,\n\
\ \"acc_norm\": 0.6305460750853242,\n \"acc_norm_stderr\": 0.014104578366491888\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6670981876120294,\n\
\ \"acc_stderr\": 0.004702886273189419,\n \"acc_norm\": 0.849133638717387,\n\
\ \"acc_norm_stderr\": 0.0035718708487317116\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n\
\ \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n\
\ \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \
\ \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.02890159361241178,\n\
\ \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.02890159361241178\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6875,\n\
\ \"acc_stderr\": 0.038760854559127644,\n \"acc_norm\": 0.6875,\n\
\ \"acc_norm_stderr\": 0.038760854559127644\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n\
\ \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \
\ \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5838150289017341,\n\
\ \"acc_stderr\": 0.03758517775404948,\n \"acc_norm\": 0.5838150289017341,\n\
\ \"acc_norm_stderr\": 0.03758517775404948\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n\
\ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467382,\n\
\ \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467382\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n\
\ \"acc_stderr\": 0.04615186962583703,\n \"acc_norm\": 0.40350877192982454,\n\
\ \"acc_norm_stderr\": 0.04615186962583703\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6137931034482759,\n \"acc_stderr\": 0.04057324734419035,\n\
\ \"acc_norm\": 0.6137931034482759,\n \"acc_norm_stderr\": 0.04057324734419035\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3783068783068783,\n \"acc_stderr\": 0.024976954053155254,\n \"\
acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.024976954053155254\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04426266681379909,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.632258064516129,\n\
\ \"acc_stderr\": 0.02743086657997347,\n \"acc_norm\": 0.632258064516129,\n\
\ \"acc_norm_stderr\": 0.02743086657997347\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n\
\ \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\"\
: 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7393939393939394,\n \"acc_stderr\": 0.034277431758165236,\n\
\ \"acc_norm\": 0.7393939393939394,\n \"acc_norm_stderr\": 0.034277431758165236\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198896,\n \"\
acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198896\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.025416343096306443,\n\
\ \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.025416343096306443\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.558974358974359,\n \"acc_stderr\": 0.025174048384000745,\n \
\ \"acc_norm\": 0.558974358974359,\n \"acc_norm_stderr\": 0.025174048384000745\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3037037037037037,\n \"acc_stderr\": 0.028037929969114993,\n \
\ \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.028037929969114993\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n\
\ \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7944954128440367,\n \"acc_stderr\": 0.01732435232501601,\n \"\
acc_norm\": 0.7944954128440367,\n \"acc_norm_stderr\": 0.01732435232501601\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.44907407407407407,\n \"acc_stderr\": 0.03392238405321616,\n \"\
acc_norm\": 0.44907407407407407,\n \"acc_norm_stderr\": 0.03392238405321616\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7647058823529411,\n \"acc_stderr\": 0.029771775228145624,\n \"\
acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.029771775228145624\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7552742616033755,\n \"acc_stderr\": 0.027985699387036423,\n \
\ \"acc_norm\": 0.7552742616033755,\n \"acc_norm_stderr\": 0.027985699387036423\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6188340807174888,\n\
\ \"acc_stderr\": 0.03259625118416827,\n \"acc_norm\": 0.6188340807174888,\n\
\ \"acc_norm_stderr\": 0.03259625118416827\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n\
\ \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.034878251684978906,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.034878251684978906\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.047184714852195886,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.047184714852195886\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n\
\ \"acc_stderr\": 0.022509033937077785,\n \"acc_norm\": 0.8632478632478633,\n\
\ \"acc_norm_stderr\": 0.022509033937077785\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7816091954022989,\n\
\ \"acc_stderr\": 0.01477435831993449,\n \"acc_norm\": 0.7816091954022989,\n\
\ \"acc_norm_stderr\": 0.01477435831993449\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.024818350129436593,\n\
\ \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.024818350129436593\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3139664804469274,\n\
\ \"acc_stderr\": 0.01552192393352364,\n \"acc_norm\": 0.3139664804469274,\n\
\ \"acc_norm_stderr\": 0.01552192393352364\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6862745098039216,\n \"acc_stderr\": 0.026568921015457138,\n\
\ \"acc_norm\": 0.6862745098039216,\n \"acc_norm_stderr\": 0.026568921015457138\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n\
\ \"acc_stderr\": 0.025922371788818777,\n \"acc_norm\": 0.7041800643086816,\n\
\ \"acc_norm_stderr\": 0.025922371788818777\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.02540719779889017,\n\
\ \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.02540719779889017\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.450354609929078,\n \"acc_stderr\": 0.029680105565029036,\n \
\ \"acc_norm\": 0.450354609929078,\n \"acc_norm_stderr\": 0.029680105565029036\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4322033898305085,\n\
\ \"acc_stderr\": 0.012652297777114968,\n \"acc_norm\": 0.4322033898305085,\n\
\ \"acc_norm_stderr\": 0.012652297777114968\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n\
\ \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6356209150326797,\n \"acc_stderr\": 0.019469518221573705,\n \
\ \"acc_norm\": 0.6356209150326797,\n \"acc_norm_stderr\": 0.019469518221573705\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.029043088683304328,\n\
\ \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.029043088683304328\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7263681592039801,\n\
\ \"acc_stderr\": 0.031524391865554016,\n \"acc_norm\": 0.7263681592039801,\n\
\ \"acc_norm_stderr\": 0.031524391865554016\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \
\ \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n\
\ \"acc_stderr\": 0.03892212195333047,\n \"acc_norm\": 0.4939759036144578,\n\
\ \"acc_norm_stderr\": 0.03892212195333047\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5287637698898409,\n\
\ \"mc1_stderr\": 0.017474513848525518,\n \"mc2\": 0.6813244751586996,\n\
\ \"mc2_stderr\": 0.015204757863568796\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.01176414905469834\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3957543593631539,\n \
\ \"acc_stderr\": 0.013469823701048812\n }\n}\n```"
repo_url: https://huggingface.co/azarafrooz/mistral-v2-7b-selfplay-low-tmp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|arc:challenge|25_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|gsm8k|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hellaswag|10_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T11-31-23.339994.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T11-31-23.339994.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- '**/details_harness|winogrande|5_2024-03-21T11-31-23.339994.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-21T11-31-23.339994.parquet'
- config_name: results
data_files:
- split: 2024_03_21T11_31_23.339994
path:
- results_2024-03-21T11-31-23.339994.parquet
- split: latest
path:
- results_2024-03-21T11-31-23.339994.parquet
---
# Dataset Card for Evaluation run of azarafrooz/mistral-v2-7b-selfplay-low-tmp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [azarafrooz/mistral-v2-7b-selfplay-low-tmp](https://huggingface.co/azarafrooz/mistral-v2-7b-selfplay-low-tmp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_azarafrooz__mistral-v2-7b-selfplay-low-tmp",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-21T11:31:23.339994](https://huggingface.co/datasets/open-llm-leaderboard/details_azarafrooz__mistral-v2-7b-selfplay-low-tmp/blob/main/results_2024-03-21T11-31-23.339994.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each task's results in its timestamped split and in the "latest" split):
```python
{
"all": {
"acc": 0.6075157900064023,
"acc_stderr": 0.0331399850758573,
"acc_norm": 0.6121293596581681,
"acc_norm_stderr": 0.03381162626787054,
"mc1": 0.5287637698898409,
"mc1_stderr": 0.017474513848525518,
"mc2": 0.6813244751586996,
"mc2_stderr": 0.015204757863568796
},
"harness|arc:challenge|25": {
"acc": 0.5861774744027304,
"acc_stderr": 0.014392730009221005,
"acc_norm": 0.6305460750853242,
"acc_norm_stderr": 0.014104578366491888
},
"harness|hellaswag|10": {
"acc": 0.6670981876120294,
"acc_stderr": 0.004702886273189419,
"acc_norm": 0.849133638717387,
"acc_norm_stderr": 0.0035718708487317116
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.625,
"acc_stderr": 0.039397364351956274,
"acc_norm": 0.625,
"acc_norm_stderr": 0.039397364351956274
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6716981132075471,
"acc_stderr": 0.02890159361241178,
"acc_norm": 0.6716981132075471,
"acc_norm_stderr": 0.02890159361241178
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6875,
"acc_stderr": 0.038760854559127644,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.038760854559127644
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5838150289017341,
"acc_stderr": 0.03758517775404948,
"acc_norm": 0.5838150289017341,
"acc_norm_stderr": 0.03758517775404948
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.03261936918467382,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.03261936918467382
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.04615186962583703,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.04615186962583703
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6137931034482759,
"acc_stderr": 0.04057324734419035,
"acc_norm": 0.6137931034482759,
"acc_norm_stderr": 0.04057324734419035
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3783068783068783,
"acc_stderr": 0.024976954053155254,
"acc_norm": 0.3783068783068783,
"acc_norm_stderr": 0.024976954053155254
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.632258064516129,
"acc_stderr": 0.02743086657997347,
"acc_norm": 0.632258064516129,
"acc_norm_stderr": 0.02743086657997347
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7393939393939394,
"acc_stderr": 0.034277431758165236,
"acc_norm": 0.7393939393939394,
"acc_norm_stderr": 0.034277431758165236
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.030313710538198896,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.030313710538198896
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.025416343096306443,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.025416343096306443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.558974358974359,
"acc_stderr": 0.025174048384000745,
"acc_norm": 0.558974358974359,
"acc_norm_stderr": 0.025174048384000745
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.028037929969114993,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.028037929969114993
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.030778057422931673,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.030778057422931673
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7944954128440367,
"acc_stderr": 0.01732435232501601,
"acc_norm": 0.7944954128440367,
"acc_norm_stderr": 0.01732435232501601
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.44907407407407407,
"acc_stderr": 0.03392238405321616,
"acc_norm": 0.44907407407407407,
"acc_norm_stderr": 0.03392238405321616
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.029771775228145624,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.029771775228145624
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7552742616033755,
"acc_stderr": 0.027985699387036423,
"acc_norm": 0.7552742616033755,
"acc_norm_stderr": 0.027985699387036423
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6188340807174888,
"acc_stderr": 0.03259625118416827,
"acc_norm": 0.6188340807174888,
"acc_norm_stderr": 0.03259625118416827
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990947,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990947
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.034878251684978906,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.034878251684978906
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.047184714852195886,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.047184714852195886
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077785,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077785
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7816091954022989,
"acc_stderr": 0.01477435831993449,
"acc_norm": 0.7816091954022989,
"acc_norm_stderr": 0.01477435831993449
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.024818350129436593,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.024818350129436593
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3139664804469274,
"acc_stderr": 0.01552192393352364,
"acc_norm": 0.3139664804469274,
"acc_norm_stderr": 0.01552192393352364
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6862745098039216,
"acc_stderr": 0.026568921015457138,
"acc_norm": 0.6862745098039216,
"acc_norm_stderr": 0.026568921015457138
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7041800643086816,
"acc_stderr": 0.025922371788818777,
"acc_norm": 0.7041800643086816,
"acc_norm_stderr": 0.025922371788818777
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.02540719779889017,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.02540719779889017
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.450354609929078,
"acc_stderr": 0.029680105565029036,
"acc_norm": 0.450354609929078,
"acc_norm_stderr": 0.029680105565029036
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4322033898305085,
"acc_stderr": 0.012652297777114968,
"acc_norm": 0.4322033898305085,
"acc_norm_stderr": 0.012652297777114968
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6356209150326797,
"acc_stderr": 0.019469518221573705,
"acc_norm": 0.6356209150326797,
"acc_norm_stderr": 0.019469518221573705
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.710204081632653,
"acc_stderr": 0.029043088683304328,
"acc_norm": 0.710204081632653,
"acc_norm_stderr": 0.029043088683304328
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7263681592039801,
"acc_stderr": 0.031524391865554016,
"acc_norm": 0.7263681592039801,
"acc_norm_stderr": 0.031524391865554016
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333047,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333047
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5287637698898409,
"mc1_stderr": 0.017474513848525518,
"mc2": 0.6813244751586996,
"mc2_stderr": 0.015204757863568796
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.01176414905469834
},
"harness|gsm8k|5": {
"acc": 0.3957543593631539,
"acc_stderr": 0.013469823701048812
}
}
```
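As a small illustration of how these per-task entries can be post-processed, the snippet below computes an unweighted mean accuracy across the MMLU (`hendrycksTest-*`) subtasks from a dict shaped like the JSON above. This is a hypothetical sketch: the `results` dict here is truncated to three subtasks for brevity, so the printed value differs from the full-run averages reported in the "all" block.

```python
# Compute the unweighted mean accuracy across MMLU (hendrycksTest)
# subtasks from a results dict shaped like the JSON excerpt above.
# NOTE: truncated to three subtasks here for illustration only.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.32},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5777777777777777},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.625},
}

# Select only the MMLU subtask entries by their key prefix.
mmlu_accs = [
    v["acc"]
    for k, v in results.items()
    if k.startswith("harness|hendrycksTest-")
]
mmlu_mean = sum(mmlu_accs) / len(mmlu_accs)
print(round(mmlu_mean, 4))  # → 0.5076
```

The same pattern applies to the full results file: filter entries by the `harness|<task>|<n_shot>` key convention and aggregate whichever metric (`acc`, `acc_norm`, `mc2`, ...) the task reports.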
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Seanxh/twitter_dataset_1713196951 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 85739
num_examples: 199
download_size: 34951
dataset_size: 85739
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bruraz/teste | ---
license: openrail
---
|
EgilKarlsen/Thunderbird_BERT_FT | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: '0'
dtype: float32
- name: '1'
dtype: float32
- name: '2'
dtype: float32
- name: '3'
dtype: float32
- name: '4'
dtype: float32
- name: '5'
dtype: float32
- name: '6'
dtype: float32
- name: '7'
dtype: float32
- name: '8'
dtype: float32
- name: '9'
dtype: float32
- name: '10'
dtype: float32
- name: '11'
dtype: float32
- name: '12'
dtype: float32
- name: '13'
dtype: float32
- name: '14'
dtype: float32
- name: '15'
dtype: float32
- name: '16'
dtype: float32
- name: '17'
dtype: float32
- name: '18'
dtype: float32
- name: '19'
dtype: float32
- name: '20'
dtype: float32
- name: '21'
dtype: float32
- name: '22'
dtype: float32
- name: '23'
dtype: float32
- name: '24'
dtype: float32
- name: '25'
dtype: float32
- name: '26'
dtype: float32
- name: '27'
dtype: float32
- name: '28'
dtype: float32
- name: '29'
dtype: float32
- name: '30'
dtype: float32
- name: '31'
dtype: float32
- name: '32'
dtype: float32
- name: '33'
dtype: float32
- name: '34'
dtype: float32
- name: '35'
dtype: float32
- name: '36'
dtype: float32
- name: '37'
dtype: float32
- name: '38'
dtype: float32
- name: '39'
dtype: float32
- name: '40'
dtype: float32
- name: '41'
dtype: float32
- name: '42'
dtype: float32
- name: '43'
dtype: float32
- name: '44'
dtype: float32
- name: '45'
dtype: float32
- name: '46'
dtype: float32
- name: '47'
dtype: float32
- name: '48'
dtype: float32
- name: '49'
dtype: float32
- name: '50'
dtype: float32
- name: '51'
dtype: float32
- name: '52'
dtype: float32
- name: '53'
dtype: float32
- name: '54'
dtype: float32
- name: '55'
dtype: float32
- name: '56'
dtype: float32
- name: '57'
dtype: float32
- name: '58'
dtype: float32
- name: '59'
dtype: float32
- name: '60'
dtype: float32
- name: '61'
dtype: float32
- name: '62'
dtype: float32
- name: '63'
dtype: float32
- name: '64'
dtype: float32
- name: '65'
dtype: float32
- name: '66'
dtype: float32
- name: '67'
dtype: float32
- name: '68'
dtype: float32
- name: '69'
dtype: float32
- name: '70'
dtype: float32
- name: '71'
dtype: float32
- name: '72'
dtype: float32
- name: '73'
dtype: float32
- name: '74'
dtype: float32
- name: '75'
dtype: float32
- name: '76'
dtype: float32
- name: '77'
dtype: float32
- name: '78'
dtype: float32
- name: '79'
dtype: float32
- name: '80'
dtype: float32
- name: '81'
dtype: float32
- name: '82'
dtype: float32
- name: '83'
dtype: float32
- name: '84'
dtype: float32
- name: '85'
dtype: float32
- name: '86'
dtype: float32
- name: '87'
dtype: float32
- name: '88'
dtype: float32
- name: '89'
dtype: float32
- name: '90'
dtype: float32
- name: '91'
dtype: float32
- name: '92'
dtype: float32
- name: '93'
dtype: float32
- name: '94'
dtype: float32
- name: '95'
dtype: float32
- name: '96'
dtype: float32
- name: '97'
dtype: float32
- name: '98'
dtype: float32
- name: '99'
dtype: float32
- name: '100'
dtype: float32
- name: '101'
dtype: float32
- name: '102'
dtype: float32
- name: '103'
dtype: float32
- name: '104'
dtype: float32
- name: '105'
dtype: float32
- name: '106'
dtype: float32
- name: '107'
dtype: float32
- name: '108'
dtype: float32
- name: '109'
dtype: float32
- name: '110'
dtype: float32
- name: '111'
dtype: float32
- name: '112'
dtype: float32
- name: '113'
dtype: float32
- name: '114'
dtype: float32
- name: '115'
dtype: float32
- name: '116'
dtype: float32
- name: '117'
dtype: float32
- name: '118'
dtype: float32
- name: '119'
dtype: float32
- name: '120'
dtype: float32
- name: '121'
dtype: float32
- name: '122'
dtype: float32
- name: '123'
dtype: float32
- name: '124'
dtype: float32
- name: '125'
dtype: float32
- name: '126'
dtype: float32
- name: '127'
dtype: float32
- name: '128'
dtype: float32
- name: '129'
dtype: float32
- name: '130'
dtype: float32
- name: '131'
dtype: float32
- name: '132'
dtype: float32
- name: '133'
dtype: float32
- name: '134'
dtype: float32
- name: '135'
dtype: float32
- name: '136'
dtype: float32
- name: '137'
dtype: float32
- name: '138'
dtype: float32
- name: '139'
dtype: float32
- name: '140'
dtype: float32
- name: '141'
dtype: float32
- name: '142'
dtype: float32
- name: '143'
dtype: float32
- name: '144'
dtype: float32
- name: '145'
dtype: float32
- name: '146'
dtype: float32
- name: '147'
dtype: float32
- name: '148'
dtype: float32
- name: '149'
dtype: float32
- name: '150'
dtype: float32
- name: '151'
dtype: float32
- name: '152'
dtype: float32
- name: '153'
dtype: float32
- name: '154'
dtype: float32
- name: '155'
dtype: float32
- name: '156'
dtype: float32
- name: '157'
dtype: float32
- name: '158'
dtype: float32
- name: '159'
dtype: float32
- name: '160'
dtype: float32
- name: '161'
dtype: float32
- name: '162'
dtype: float32
- name: '163'
dtype: float32
- name: '164'
dtype: float32
- name: '165'
dtype: float32
- name: '166'
dtype: float32
- name: '167'
dtype: float32
- name: '168'
dtype: float32
- name: '169'
dtype: float32
- name: '170'
dtype: float32
- name: '171'
dtype: float32
- name: '172'
dtype: float32
- name: '173'
dtype: float32
- name: '174'
dtype: float32
- name: '175'
dtype: float32
- name: '176'
dtype: float32
- name: '177'
dtype: float32
- name: '178'
dtype: float32
- name: '179'
dtype: float32
- name: '180'
dtype: float32
- name: '181'
dtype: float32
- name: '182'
dtype: float32
- name: '183'
dtype: float32
- name: '184'
dtype: float32
- name: '185'
dtype: float32
- name: '186'
dtype: float32
- name: '187'
dtype: float32
- name: '188'
dtype: float32
- name: '189'
dtype: float32
- name: '190'
dtype: float32
- name: '191'
dtype: float32
- name: '192'
dtype: float32
- name: '193'
dtype: float32
- name: '194'
dtype: float32
- name: '195'
dtype: float32
- name: '196'
dtype: float32
- name: '197'
dtype: float32
- name: '198'
dtype: float32
- name: '199'
dtype: float32
- name: '200'
dtype: float32
- name: '201'
dtype: float32
- name: '202'
dtype: float32
- name: '203'
dtype: float32
- name: '204'
dtype: float32
- name: '205'
dtype: float32
- name: '206'
dtype: float32
- name: '207'
dtype: float32
- name: '208'
dtype: float32
- name: '209'
dtype: float32
- name: '210'
dtype: float32
- name: '211'
dtype: float32
- name: '212'
dtype: float32
- name: '213'
dtype: float32
- name: '214'
dtype: float32
- name: '215'
dtype: float32
- name: '216'
dtype: float32
- name: '217'
dtype: float32
- name: '218'
dtype: float32
- name: '219'
dtype: float32
- name: '220'
dtype: float32
- name: '221'
dtype: float32
- name: '222'
dtype: float32
- name: '223'
dtype: float32
- name: '224'
dtype: float32
- name: '225'
dtype: float32
- name: '226'
dtype: float32
- name: '227'
dtype: float32
- name: '228'
dtype: float32
- name: '229'
dtype: float32
- name: '230'
dtype: float32
- name: '231'
dtype: float32
- name: '232'
dtype: float32
- name: '233'
dtype: float32
- name: '234'
dtype: float32
- name: '235'
dtype: float32
- name: '236'
dtype: float32
- name: '237'
dtype: float32
- name: '238'
dtype: float32
- name: '239'
dtype: float32
- name: '240'
dtype: float32
- name: '241'
dtype: float32
- name: '242'
dtype: float32
- name: '243'
dtype: float32
- name: '244'
dtype: float32
- name: '245'
dtype: float32
- name: '246'
dtype: float32
- name: '247'
dtype: float32
- name: '248'
dtype: float32
- name: '249'
dtype: float32
- name: '250'
dtype: float32
- name: '251'
dtype: float32
- name: '252'
dtype: float32
- name: '253'
dtype: float32
- name: '254'
dtype: float32
- name: '255'
dtype: float32
- name: '256'
dtype: float32
- name: '257'
dtype: float32
- name: '258'
dtype: float32
- name: '259'
dtype: float32
- name: '260'
dtype: float32
- name: '261'
dtype: float32
- name: '262'
dtype: float32
- name: '263'
dtype: float32
- name: '264'
dtype: float32
- name: '265'
dtype: float32
- name: '266'
dtype: float32
- name: '267'
dtype: float32
- name: '268'
dtype: float32
- name: '269'
dtype: float32
- name: '270'
dtype: float32
- name: '271'
dtype: float32
- name: '272'
dtype: float32
- name: '273'
dtype: float32
- name: '274'
dtype: float32
- name: '275'
dtype: float32
- name: '276'
dtype: float32
- name: '277'
dtype: float32
- name: '278'
dtype: float32
- name: '279'
dtype: float32
- name: '280'
dtype: float32
- name: '281'
dtype: float32
- name: '282'
dtype: float32
- name: '283'
dtype: float32
- name: '284'
dtype: float32
- name: '285'
dtype: float32
- name: '286'
dtype: float32
- name: '287'
dtype: float32
- name: '288'
dtype: float32
- name: '289'
dtype: float32
- name: '290'
dtype: float32
- name: '291'
dtype: float32
- name: '292'
dtype: float32
- name: '293'
dtype: float32
- name: '294'
dtype: float32
- name: '295'
dtype: float32
- name: '296'
dtype: float32
- name: '297'
dtype: float32
- name: '298'
dtype: float32
- name: '299'
dtype: float32
- name: '300'
dtype: float32
- name: '301'
dtype: float32
- name: '302'
dtype: float32
- name: '303'
dtype: float32
- name: '304'
dtype: float32
- name: '305'
dtype: float32
- name: '306'
dtype: float32
- name: '307'
dtype: float32
- name: '308'
dtype: float32
- name: '309'
dtype: float32
- name: '310'
dtype: float32
- name: '311'
dtype: float32
- name: '312'
dtype: float32
- name: '313'
dtype: float32
- name: '314'
dtype: float32
- name: '315'
dtype: float32
- name: '316'
dtype: float32
- name: '317'
dtype: float32
- name: '318'
dtype: float32
- name: '319'
dtype: float32
- name: '320'
dtype: float32
- name: '321'
dtype: float32
- name: '322'
dtype: float32
- name: '323'
dtype: float32
- name: '324'
dtype: float32
- name: '325'
dtype: float32
- name: '326'
dtype: float32
- name: '327'
dtype: float32
- name: '328'
dtype: float32
- name: '329'
dtype: float32
- name: '330'
dtype: float32
- name: '331'
dtype: float32
- name: '332'
dtype: float32
- name: '333'
dtype: float32
- name: '334'
dtype: float32
- name: '335'
dtype: float32
- name: '336'
dtype: float32
- name: '337'
dtype: float32
- name: '338'
dtype: float32
- name: '339'
dtype: float32
- name: '340'
dtype: float32
- name: '341'
dtype: float32
- name: '342'
dtype: float32
- name: '343'
dtype: float32
- name: '344'
dtype: float32
- name: '345'
dtype: float32
- name: '346'
dtype: float32
- name: '347'
dtype: float32
- name: '348'
dtype: float32
- name: '349'
dtype: float32
- name: '350'
dtype: float32
- name: '351'
dtype: float32
- name: '352'
dtype: float32
- name: '353'
dtype: float32
- name: '354'
dtype: float32
- name: '355'
dtype: float32
- name: '356'
dtype: float32
- name: '357'
dtype: float32
- name: '358'
dtype: float32
- name: '359'
dtype: float32
- name: '360'
dtype: float32
- name: '361'
dtype: float32
- name: '362'
dtype: float32
- name: '363'
dtype: float32
- name: '364'
dtype: float32
- name: '365'
dtype: float32
- name: '366'
dtype: float32
- name: '367'
dtype: float32
- name: '368'
dtype: float32
- name: '369'
dtype: float32
- name: '370'
dtype: float32
- name: '371'
dtype: float32
- name: '372'
dtype: float32
- name: '373'
dtype: float32
- name: '374'
dtype: float32
- name: '375'
dtype: float32
- name: '376'
dtype: float32
- name: '377'
dtype: float32
- name: '378'
dtype: float32
- name: '379'
dtype: float32
- name: '380'
dtype: float32
- name: '381'
dtype: float32
- name: '382'
dtype: float32
- name: '383'
dtype: float32
- name: '384'
dtype: float32
- name: '385'
dtype: float32
- name: '386'
dtype: float32
- name: '387'
dtype: float32
- name: '388'
dtype: float32
- name: '389'
dtype: float32
- name: '390'
dtype: float32
- name: '391'
dtype: float32
- name: '392'
dtype: float32
- name: '393'
dtype: float32
- name: '394'
dtype: float32
- name: '395'
dtype: float32
- name: '396'
dtype: float32
- name: '397'
dtype: float32
- name: '398'
dtype: float32
- name: '399'
dtype: float32
- name: '400'
dtype: float32
- name: '401'
dtype: float32
- name: '402'
dtype: float32
- name: '403'
dtype: float32
- name: '404'
dtype: float32
- name: '405'
dtype: float32
- name: '406'
dtype: float32
- name: '407'
dtype: float32
- name: '408'
dtype: float32
- name: '409'
dtype: float32
- name: '410'
dtype: float32
- name: '411'
dtype: float32
- name: '412'
dtype: float32
- name: '413'
dtype: float32
- name: '414'
dtype: float32
- name: '415'
dtype: float32
- name: '416'
dtype: float32
- name: '417'
dtype: float32
- name: '418'
dtype: float32
- name: '419'
dtype: float32
- name: '420'
dtype: float32
- name: '421'
dtype: float32
- name: '422'
dtype: float32
- name: '423'
dtype: float32
- name: '424'
dtype: float32
- name: '425'
dtype: float32
- name: '426'
dtype: float32
- name: '427'
dtype: float32
- name: '428'
dtype: float32
- name: '429'
dtype: float32
- name: '430'
dtype: float32
- name: '431'
dtype: float32
- name: '432'
dtype: float32
- name: '433'
dtype: float32
- name: '434'
dtype: float32
- name: '435'
dtype: float32
- name: '436'
dtype: float32
- name: '437'
dtype: float32
- name: '438'
dtype: float32
- name: '439'
dtype: float32
- name: '440'
dtype: float32
- name: '441'
dtype: float32
- name: '442'
dtype: float32
- name: '443'
dtype: float32
- name: '444'
dtype: float32
- name: '445'
dtype: float32
- name: '446'
dtype: float32
- name: '447'
dtype: float32
- name: '448'
dtype: float32
- name: '449'
dtype: float32
- name: '450'
dtype: float32
- name: '451'
dtype: float32
- name: '452'
dtype: float32
- name: '453'
dtype: float32
- name: '454'
dtype: float32
- name: '455'
dtype: float32
- name: '456'
dtype: float32
- name: '457'
dtype: float32
- name: '458'
dtype: float32
- name: '459'
dtype: float32
- name: '460'
dtype: float32
- name: '461'
dtype: float32
- name: '462'
dtype: float32
- name: '463'
dtype: float32
- name: '464'
dtype: float32
- name: '465'
dtype: float32
- name: '466'
dtype: float32
- name: '467'
dtype: float32
- name: '468'
dtype: float32
- name: '469'
dtype: float32
- name: '470'
dtype: float32
- name: '471'
dtype: float32
- name: '472'
dtype: float32
- name: '473'
dtype: float32
- name: '474'
dtype: float32
- name: '475'
dtype: float32
- name: '476'
dtype: float32
- name: '477'
dtype: float32
- name: '478'
dtype: float32
- name: '479'
dtype: float32
- name: '480'
dtype: float32
- name: '481'
dtype: float32
- name: '482'
dtype: float32
- name: '483'
dtype: float32
- name: '484'
dtype: float32
- name: '485'
dtype: float32
- name: '486'
dtype: float32
- name: '487'
dtype: float32
- name: '488'
dtype: float32
- name: '489'
dtype: float32
- name: '490'
dtype: float32
- name: '491'
dtype: float32
- name: '492'
dtype: float32
- name: '493'
dtype: float32
- name: '494'
dtype: float32
- name: '495'
dtype: float32
- name: '496'
dtype: float32
- name: '497'
dtype: float32
- name: '498'
dtype: float32
- name: '499'
dtype: float32
- name: '500'
dtype: float32
- name: '501'
dtype: float32
- name: '502'
dtype: float32
- name: '503'
dtype: float32
- name: '504'
dtype: float32
- name: '505'
dtype: float32
- name: '506'
dtype: float32
- name: '507'
dtype: float32
- name: '508'
dtype: float32
- name: '509'
dtype: float32
- name: '510'
dtype: float32
- name: '511'
dtype: float32
- name: '512'
dtype: float32
- name: '513'
dtype: float32
- name: '514'
dtype: float32
- name: '515'
dtype: float32
- name: '516'
dtype: float32
- name: '517'
dtype: float32
- name: '518'
dtype: float32
- name: '519'
dtype: float32
- name: '520'
dtype: float32
- name: '521'
dtype: float32
- name: '522'
dtype: float32
- name: '523'
dtype: float32
- name: '524'
dtype: float32
- name: '525'
dtype: float32
- name: '526'
dtype: float32
- name: '527'
dtype: float32
- name: '528'
dtype: float32
- name: '529'
dtype: float32
- name: '530'
dtype: float32
- name: '531'
dtype: float32
- name: '532'
dtype: float32
- name: '533'
dtype: float32
- name: '534'
dtype: float32
- name: '535'
dtype: float32
- name: '536'
dtype: float32
- name: '537'
dtype: float32
- name: '538'
dtype: float32
- name: '539'
dtype: float32
- name: '540'
dtype: float32
- name: '541'
dtype: float32
- name: '542'
dtype: float32
- name: '543'
dtype: float32
- name: '544'
dtype: float32
- name: '545'
dtype: float32
- name: '546'
dtype: float32
- name: '547'
dtype: float32
- name: '548'
dtype: float32
- name: '549'
dtype: float32
- name: '550'
dtype: float32
- name: '551'
dtype: float32
- name: '552'
dtype: float32
- name: '553'
dtype: float32
- name: '554'
dtype: float32
- name: '555'
dtype: float32
- name: '556'
dtype: float32
- name: '557'
dtype: float32
- name: '558'
dtype: float32
- name: '559'
dtype: float32
- name: '560'
dtype: float32
- name: '561'
dtype: float32
- name: '562'
dtype: float32
- name: '563'
dtype: float32
- name: '564'
dtype: float32
- name: '565'
dtype: float32
- name: '566'
dtype: float32
- name: '567'
dtype: float32
- name: '568'
dtype: float32
- name: '569'
dtype: float32
- name: '570'
dtype: float32
- name: '571'
dtype: float32
- name: '572'
dtype: float32
- name: '573'
dtype: float32
- name: '574'
dtype: float32
- name: '575'
dtype: float32
- name: '576'
dtype: float32
- name: '577'
dtype: float32
- name: '578'
dtype: float32
- name: '579'
dtype: float32
- name: '580'
dtype: float32
- name: '581'
dtype: float32
- name: '582'
dtype: float32
- name: '583'
dtype: float32
- name: '584'
dtype: float32
- name: '585'
dtype: float32
- name: '586'
dtype: float32
- name: '587'
dtype: float32
- name: '588'
dtype: float32
- name: '589'
dtype: float32
- name: '590'
dtype: float32
- name: '591'
dtype: float32
- name: '592'
dtype: float32
- name: '593'
dtype: float32
- name: '594'
dtype: float32
- name: '595'
dtype: float32
- name: '596'
dtype: float32
- name: '597'
dtype: float32
- name: '598'
dtype: float32
- name: '599'
dtype: float32
- name: '600'
dtype: float32
- name: '601'
dtype: float32
- name: '602'
dtype: float32
- name: '603'
dtype: float32
- name: '604'
dtype: float32
- name: '605'
dtype: float32
- name: '606'
dtype: float32
- name: '607'
dtype: float32
- name: '608'
dtype: float32
- name: '609'
dtype: float32
- name: '610'
dtype: float32
- name: '611'
dtype: float32
- name: '612'
dtype: float32
- name: '613'
dtype: float32
- name: '614'
dtype: float32
- name: '615'
dtype: float32
- name: '616'
dtype: float32
- name: '617'
dtype: float32
- name: '618'
dtype: float32
- name: '619'
dtype: float32
- name: '620'
dtype: float32
- name: '621'
dtype: float32
- name: '622'
dtype: float32
- name: '623'
dtype: float32
- name: '624'
dtype: float32
- name: '625'
dtype: float32
- name: '626'
dtype: float32
- name: '627'
dtype: float32
- name: '628'
dtype: float32
- name: '629'
dtype: float32
- name: '630'
dtype: float32
- name: '631'
dtype: float32
- name: '632'
dtype: float32
- name: '633'
dtype: float32
- name: '634'
dtype: float32
- name: '635'
dtype: float32
- name: '636'
dtype: float32
- name: '637'
dtype: float32
- name: '638'
dtype: float32
- name: '639'
dtype: float32
- name: '640'
dtype: float32
- name: '641'
dtype: float32
- name: '642'
dtype: float32
- name: '643'
dtype: float32
- name: '644'
dtype: float32
- name: '645'
dtype: float32
- name: '646'
dtype: float32
- name: '647'
dtype: float32
- name: '648'
dtype: float32
- name: '649'
dtype: float32
- name: '650'
dtype: float32
- name: '651'
dtype: float32
- name: '652'
dtype: float32
- name: '653'
dtype: float32
- name: '654'
dtype: float32
- name: '655'
dtype: float32
- name: '656'
dtype: float32
- name: '657'
dtype: float32
- name: '658'
dtype: float32
- name: '659'
dtype: float32
- name: '660'
dtype: float32
- name: '661'
dtype: float32
- name: '662'
dtype: float32
- name: '663'
dtype: float32
- name: '664'
dtype: float32
- name: '665'
dtype: float32
- name: '666'
dtype: float32
- name: '667'
dtype: float32
- name: '668'
dtype: float32
- name: '669'
dtype: float32
- name: '670'
dtype: float32
- name: '671'
dtype: float32
- name: '672'
dtype: float32
- name: '673'
dtype: float32
- name: '674'
dtype: float32
- name: '675'
dtype: float32
- name: '676'
dtype: float32
- name: '677'
dtype: float32
- name: '678'
dtype: float32
- name: '679'
dtype: float32
- name: '680'
dtype: float32
- name: '681'
dtype: float32
- name: '682'
dtype: float32
- name: '683'
dtype: float32
- name: '684'
dtype: float32
- name: '685'
dtype: float32
- name: '686'
dtype: float32
- name: '687'
dtype: float32
- name: '688'
dtype: float32
- name: '689'
dtype: float32
- name: '690'
dtype: float32
- name: '691'
dtype: float32
- name: '692'
dtype: float32
- name: '693'
dtype: float32
- name: '694'
dtype: float32
- name: '695'
dtype: float32
- name: '696'
dtype: float32
- name: '697'
dtype: float32
- name: '698'
dtype: float32
- name: '699'
dtype: float32
- name: '700'
dtype: float32
- name: '701'
dtype: float32
- name: '702'
dtype: float32
- name: '703'
dtype: float32
- name: '704'
dtype: float32
- name: '705'
dtype: float32
- name: '706'
dtype: float32
- name: '707'
dtype: float32
- name: '708'
dtype: float32
- name: '709'
dtype: float32
- name: '710'
dtype: float32
- name: '711'
dtype: float32
- name: '712'
dtype: float32
- name: '713'
dtype: float32
- name: '714'
dtype: float32
- name: '715'
dtype: float32
- name: '716'
dtype: float32
- name: '717'
dtype: float32
- name: '718'
dtype: float32
- name: '719'
dtype: float32
- name: '720'
dtype: float32
- name: '721'
dtype: float32
- name: '722'
dtype: float32
- name: '723'
dtype: float32
- name: '724'
dtype: float32
- name: '725'
dtype: float32
- name: '726'
dtype: float32
- name: '727'
dtype: float32
- name: '728'
dtype: float32
- name: '729'
dtype: float32
- name: '730'
dtype: float32
- name: '731'
dtype: float32
- name: '732'
dtype: float32
- name: '733'
dtype: float32
- name: '734'
dtype: float32
- name: '735'
dtype: float32
- name: '736'
dtype: float32
- name: '737'
dtype: float32
- name: '738'
dtype: float32
- name: '739'
dtype: float32
- name: '740'
dtype: float32
- name: '741'
dtype: float32
- name: '742'
dtype: float32
- name: '743'
dtype: float32
- name: '744'
dtype: float32
- name: '745'
dtype: float32
- name: '746'
dtype: float32
- name: '747'
dtype: float32
- name: '748'
dtype: float32
- name: '749'
dtype: float32
- name: '750'
dtype: float32
- name: '751'
dtype: float32
- name: '752'
dtype: float32
- name: '753'
dtype: float32
- name: '754'
dtype: float32
- name: '755'
dtype: float32
- name: '756'
dtype: float32
- name: '757'
dtype: float32
- name: '758'
dtype: float32
- name: '759'
dtype: float32
- name: '760'
dtype: float32
- name: '761'
dtype: float32
- name: '762'
dtype: float32
- name: '763'
dtype: float32
- name: '764'
dtype: float32
- name: '765'
dtype: float32
- name: '766'
dtype: float32
- name: '767'
dtype: float32
- name: label
dtype: string
splits:
- name: train
num_bytes: 115576722
num_examples: 37500
- name: test
num_bytes: 38525585
num_examples: 12500
download_size: 211880915
dataset_size: 154102307
---
# Dataset Card for "Thunderbird_BERT_FT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-MicPie__QA_bias-v2_TEST-MicPie__QA_bias-v2_TEST-e54ae6-1669159075 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- MicPie/QA_bias-v2_TEST
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: MicPie/QA_bias-v2_TEST
dataset_config: MicPie--QA_bias-v2_TEST
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: MicPie/QA_bias-v2_TEST
* Config: MicPie--QA_bias-v2_TEST
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. |
Nexdata/Chinese_Young_Children_Speech_Data_by_Mobile_Phone_and_Microphone | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Chinese_Young_Children_Speech_Data_by_Mobile_Phone_and_Microphone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/76?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The data were recorded by 797 Chinese children aged 3 to 5, 39% of whom were aged 5. The recording content suits children's speech: mainly storybooks, children's songs, and spoken language, with around 120 sentences per speaker. Audio was captured simultaneously by a hi-fi microphone and a mobile phone. The valid data amount to 41.8 hours. Texts are manually transcribed with high accuracy.
For more details, please refer to the link: https://www.nexdata.ai/datasets/76?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Mandarin Chinese
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
datasets-examples/doc-formats-tsv-3 | ---
configs:
- config_name: default
data_files: "data.tsv"
names: ["kind", "sound"]
size_categories:
- n<1K
---
# [doc] formats - tsv - 3
This dataset contains one tsv file at the root:
- [data.tsv](./data.tsv)
```tsv
dog woof
cat meow
pokemon pika
human hello
```
We define the config name in the YAML config, the file's exact location, and the columns' names. As we provide the `names` option but not the `header` one, the first row in the file is treated as a row of values, not a row of column names. The delimiter is set to `"\t"` (tabulation) due to the file's extension. The reference for the options is the [documentation of pandas.read_csv()](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html).
```yaml
---
configs:
- config_name: default
data_files: "data.tsv"
names: ["kind", "sound"]
size_categories:
- n<1K
---
```
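The `names`/`header` behavior described above can be checked directly with pandas. A minimal sketch (assuming pandas is installed; the inline TSV string stands in for `data.tsv`):

```python
import io

import pandas as pd

# Inline stand-in for data.tsv: tab-delimited, no header row.
tsv = "dog\twoof\ncat\tmeow\npokemon\tpika\nhuman\thello\n"

# Passing `names` without `header` means every row, including the
# first, is read as data -- matching the config described above.
df = pd.read_csv(io.StringIO(tsv), sep="\t", names=["kind", "sound"])

print(df.shape)        # all four rows are kept as values
print(list(df.columns))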
|
ColinCcz/combined_non_MH_dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 939124270.6964713
num_examples: 1298192
download_size: 598207611
dataset_size: 939124270.6964713
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
atmallen/quirky_bookrating_bob_hard | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: float64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 96619.75463773188
num_examples: 718
- name: validation
num_bytes: 63544.77
num_examples: 472
- name: test
num_bytes: 60446.35725
num_examples: 447
download_size: 75163
dataset_size: 220610.88188773187
---
# Dataset Card for "quirky_bookrating_bob_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atiranela/SubaruNatsuki | ---
license: openrail
---
|
gaurav-mac/dolly-databricks-mbrt | ---
license: cc-by-sa-3.0
---
|
joey234/mmlu-high_school_macroeconomics-neg-answer | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_answer
dtype: string
splits:
- name: test
num_bytes: 137273
num_examples: 390
download_size: 65743
dataset_size: 137273
---
# Dataset Card for "mmlu-high_school_macroeconomics-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/aloy_genshin | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of aloy/アーロイ/埃洛伊 (Genshin Impact)
This is the dataset of aloy/アーロイ/埃洛伊 (Genshin Impact), containing 261 images and their tags.
The core tags of this character are `long_hair, breasts, braid, freckles, green_eyes, brown_hair, lips, large_breasts, orange_hair, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 261 | 379.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aloy_genshin/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 261 | 331.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aloy_genshin/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 590 | 572.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aloy_genshin/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/aloy_genshin',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, navel, nipples, solo, pussy, body_freckles, completely_nude, looking_at_viewer, sitting, uncensored, blurry_background, female_pubic_hair, jewelry, abs, artist_name, blush, hair_ornament, outdoors, smile, sweat, thighs |
| 1 | 12 |  |  |  |  |  | 1girl, solo, simple_background, brown_eyes, red_hair, white_background, portrait |
| 2 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, nipples, solo, artist_name, completely_nude, navel, necklace, on_back, parted_lips, red_hair, armpits, mosaic_censoring, pillow, pussy, arms_behind_head, arms_up, bed_sheet, on_bed |
| 3 | 7 |  |  |  |  |  | 1girl, erection, futanari, large_penis, looking_at_viewer, uncensored, nipples, nose, solo, spread_legs, veiny_penis, navel, parted_lips, abs, breasts_apart, large_testicles, sitting, blue_eyes, completely_nude, huge_penis, jewelry, muscular_female, precum |
| 4 | 17 |  |  |  |  |  | 1girl, solo, necklace, arrow_(projectile), beads, holding_bow_(weapon), boots, quiver, fur_trim, pants, simple_background, full_body, multiple_braids, tribal, white_background |
| 5 | 7 |  |  |  |  |  | 1girl, from_behind, looking_back, solo, body_freckles, looking_at_viewer, completely_nude, blurry_background, blush, thighs, artist_name, blue_eyes, cowboy_shot, huge_ass, mole_on_ass, outdoors, red_hair, sideboob, sweat |
| 6 | 9 |  |  |  |  |  | 1girl, outdoors, solo, blue_sky, day, looking_at_viewer, bare_shoulders, cloud, red_hair, beach, ocean, thighs, twin_braids, bikini, cowboy_shot, navel, palm_tree, standing, cameltoe, cleavage |
| 7 | 9 |  |  |  |  |  | 1girl, solo, uncensored, nipples, anus, completely_nude, ass, female_masturbation, pussy_juice, spread_legs, blush, body_freckles, fingering, jewelry, looking_at_viewer, simple_background |
| 8 | 10 |  |  |  |  |  | 1girl, nipples, solo, pussy, spread_legs, vaginal_object_insertion, nude, uncensored, open_mouth, sex_machine, barefoot, bondage, red_hair, restrained, toes, clitoris, feet, necklace, sex_toy, soles |
| 9 | 17 |  |  |  |  |  | 1girl, hetero, penis, uncensored, sex, 1boy, pussy, solo_focus, nipples, vaginal, completely_nude, navel, open_mouth, outdoors, spread_legs, blush, looking_at_viewer, ass, body_freckles, testicles, straddling |
| 10 | 10 |  |  |  |  |  | arms_behind_back, bondage, gagged, nipples, 1girl, rope, solo, shibari, nipple_piercing, ball_gag, barefoot, collar, nude, feet, pussy, restrained |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | navel | nipples | solo | pussy | body_freckles | completely_nude | looking_at_viewer | sitting | uncensored | blurry_background | female_pubic_hair | jewelry | abs | artist_name | blush | hair_ornament | outdoors | smile | sweat | thighs | simple_background | brown_eyes | red_hair | white_background | portrait | necklace | on_back | parted_lips | armpits | mosaic_censoring | pillow | arms_behind_head | arms_up | bed_sheet | on_bed | erection | futanari | large_penis | nose | spread_legs | veiny_penis | breasts_apart | large_testicles | blue_eyes | huge_penis | muscular_female | precum | arrow_(projectile) | beads | holding_bow_(weapon) | boots | quiver | fur_trim | pants | full_body | multiple_braids | tribal | from_behind | looking_back | cowboy_shot | huge_ass | mole_on_ass | sideboob | blue_sky | day | bare_shoulders | cloud | beach | ocean | twin_braids | bikini | palm_tree | standing | cameltoe | cleavage | anus | ass | female_masturbation | pussy_juice | fingering | vaginal_object_insertion | nude | open_mouth | sex_machine | barefoot | bondage | restrained | toes | clitoris | feet | sex_toy | soles | hetero | penis | sex | 1boy | solo_focus | vaginal | testicles | straddling | arms_behind_back | gagged | rope | shibari | nipple_piercing | ball_gag | collar |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------|:----------|:-------|:--------|:----------------|:------------------|:--------------------|:----------|:-------------|:--------------------|:--------------------|:----------|:------|:--------------|:--------|:----------------|:-----------|:--------|:--------|:---------|:--------------------|:-------------|:-----------|:-------------------|:-----------|:-----------|:----------|:--------------|:----------|:-------------------|:---------|:-------------------|:----------|:------------|:---------|:-----------|:-----------|:--------------|:-------|:--------------|:--------------|:----------------|:------------------|:------------|:-------------|:------------------|:---------|:---------------------|:--------|:-----------------------|:--------|:---------|:-----------|:--------|:------------|:------------------|:---------|:--------------|:---------------|:--------------|:-----------|:--------------|:-----------|:-----------|:------|:-----------------|:--------|:--------|:--------|:--------------|:---------|:------------|:-----------|:-----------|:-----------|:-------|:------|:----------------------|:--------------|:------------|:---------------------------|:-------|:-------------|:--------------|:-----------|:----------|:-------------|:-------|:-----------|:-------|:----------|:--------|:---------|:--------|:------|:-------|:-------------|:----------|:------------|:-------------|:-------------------|:---------|:-------|:----------|:------------------|:-----------|:---------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | | X | X | | | | | | | X | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | | | X | X | X | X | | | X | X | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 17 |  |  |  |  |  | X | | | X | | | | | | | | | | | | | | | | | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | | X | | X | X | X | | | X | | | | X | X | | X | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 9 |  |  |  |  |  | X | X | | X | | | | X | | | | | | | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | X | X | | X | X | X | | X | | | X | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 10 |  |  |  |  |  | X | | X | X | X | | | | | X | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 9 | 17 |  |  |  |  |  | X | X | X | | X | X | X | X | | X | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | |
| 10 | 10 |  |  |  |  |  | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | X | | | X | | | | | | | | | | | X | X | X | X | X | X | X |
|
ChaiML/20240108_chai_prize_reward_model_data_season_v | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: labels
dtype: int64
- name: season
dtype: string
splits:
- name: train
num_bytes: 66684838
num_examples: 33867
download_size: 36785187
dataset_size: 66684838
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "20240108_chai_prize_reward_model_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
theblackcat102/oasst-red-team | ---
language:
- en
- de
- fr
- ru
- zh
- ja
- it
- pt
- th
- nl
- ro
- pl
- hu
- hr
---
Work in progress
Red team datasets for training and testing reward models for Open Assistant |
autoevaluate/autoeval-staging-eval-project-banking77-34727576-11425522 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- banking77
eval_info:
task: multi_class_classification
model: nickprock/distilbert-base-uncased-banking77-classification
metrics: []
dataset_name: banking77
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: nickprock/distilbert-base-uncased-banking77-classification
* Dataset: banking77
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model. |
aisuko/vqa | ---
license: apache-2.0
---
# Overview
The original code is from https://huggingface.co/datasets/Graphcore/vqa/tree/main
Adapted by: Aisuko
# How to use it
```python
from datasets import load_dataset
dataset = load_dataset("aisuko/vqa", split="validation[:200]")
dataset
```
```
Dataset({
features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'],
num_rows: 200
})
```
## Remove the label column
```python
dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type'])
```
## Check the image
```python
from PIL import Image
image = Image.open(dataset[0]['image_id'])
image
```
|
clarin-knext/scifact-pl | ---
language:
- pl
pretty_name: BEIR-PL benchmark Scifact-PL
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl
|
sudarsa/tts_hindi | ---
license: apache-2.0
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 21241940.0
num_examples: 10
download_size: 15708375
dataset_size: 21241940.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
llm-aes/gpt-3.5_SummEval_gpt2-vs-others_analyze_rate | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: worker_id
dtype: string
- name: human_label
dtype: int64
- name: llm_label
dtype: int64
- name: generator_1
dtype: string
- name: generator_2
dtype: string
- name: premise
dtype: string
splits:
- name: train
num_bytes: 3292945
num_examples: 1500
download_size: 288733
dataset_size: 3292945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/lmind_hotpot_train300_eval100_v1_doc_qa | ---
configs:
- config_name: default
data_files:
- split: train_qa
path: data/train_qa-*
- split: train_recite_qa
path: data/train_recite_qa-*
- split: eval_qa
path: data/eval_qa-*
- split: eval_recite_qa
path: data/eval_recite_qa-*
- split: all_docs
path: data/all_docs-*
- split: all_docs_eval
path: data/all_docs_eval-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
splits:
- name: train_qa
num_bytes: 51441
num_examples: 300
- name: train_recite_qa
num_bytes: 312070
num_examples: 300
- name: eval_qa
num_bytes: 16148
num_examples: 100
- name: eval_recite_qa
num_bytes: 104950
num_examples: 100
- name: all_docs
num_bytes: 361191
num_examples: 797
- name: all_docs_eval
num_bytes: 361140
num_examples: 797
- name: train
num_bytes: 412632
num_examples: 1097
- name: validation
num_bytes: 16148
num_examples: 100
download_size: 813503
dataset_size: 1635720
---
# Dataset Card for "lmind_hotpot_train300_eval100_v1_doc_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
poorfish/fishdataset | ---
size_categories:
- 1K<n<10K
task_categories:
- text-classification
--- |
megantron/aesthetic_labeled | ---
dataset_info:
features:
- name: image
dtype: image
- name: 'Unnamed: 0'
dtype: int64
- name: label
dtype: int64
splits:
- name: test
num_bytes: 3101095.0
num_examples: 8
download_size: 1553003
dataset_size: 3101095.0
---
# Dataset Card for "aesthetic_labeled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sakshamrzt/medical_qa | ---
license: cc0-1.0
task_categories:
- table-question-answering
language:
- en
size_categories:
- 1K<n<10K
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_examples: 2048
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_examples: 2048
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_examples: 2048
configs:
- config_name: default
data_files:
- split: test
path: default.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
---
# Dataset Card for Dataset Name
## Dataset Details
The MedQuAD dataset normalised for use with mteb. The dataset contains questions and answers related to medical conditions, treatments, and protocols.
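The three configs follow the usual retrieval layout: `queries` and `corpus` hold texts keyed by `_id`, while the default config links them with relevance scores. A minimal sketch of how the pieces fit together, using made-up example rows (the ids and texts below are hypothetical, not taken from the dataset):

```python
# Hypothetical rows mirroring the schema of the three configs:
# queries (_id, text), corpus (_id, title, text), and the default
# split (query-id, corpus-id, score).
queries = {"q1": "What are the symptoms of diabetes?"}
corpus = {"d1": {"title": "Diabetes", "text": "Common symptoms include thirst and fatigue."}}
qrels = [{"query-id": "q1", "corpus-id": "d1", "score": 1.0}]

# Resolve each relevance judgement to its query/document texts.
pairs = [
    (queries[r["query-id"]], corpus[r["corpus-id"]]["text"], r["score"])
    for r in qrels
]
```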
### Dataset Description
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
liuyanchen1015/MULTI_VALUE_qqp_relativizer_where | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 363726
num_examples: 1820
- name: test
num_bytes: 3454497
num_examples: 17679
- name: train
num_bytes: 3186941
num_examples: 15990
download_size: 4159938
dataset_size: 7005164
---
# Dataset Card for "MULTI_VALUE_qqp_relativizer_where"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_mathematics-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 40583
num_examples: 100
download_size: 25747
dataset_size: 40583
---
# Dataset Card for "mmlu-college_mathematics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/sharegpt4v-nowebimages |
tasksource/lexcomp-nc-attributes | ---
license: apache-2.0
language:
- en
---
https://github.com/vered1986/lexcomp/tree/master
```
@article{shwartz-dagan-2019-still,
title = "Still a Pain in the Neck: Evaluating Text Representations on Lexical Composition",
author = "Shwartz, Vered and
Dagan, Ido",
journal = "Transactions of the Association for Computational Linguistics",
volume = "7",
year = "2019",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q19-1027",
doi = "10.1162/tacl_a_00277",
pages = "403--419",
abstract = "Building meaningful phrase representations is challenging because phrase meanings are not simply the sum of their constituent meanings. Lexical composition can shift the meanings of the constituent words and introduce implicit information. We tested a broad range of textual representations for their capacity to address these issues. We found that, as expected, contextualized word representations perform better than static word embeddings, more so on detecting meaning shift than in recovering implicit information, in which their performance is still far from that of humans. Our evaluation suite, consisting of six tasks related to lexical composition effects, can serve future research aiming to improve representations.",
}
``` |
CyberHarem/ratura_lapisrelights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Ratura (Lapis Re:LiGHTs)
This is the dataset of Ratura (Lapis Re:LiGHTs), containing 90 images and their tags.
The core tags of this character are `blonde_hair, long_hair, hair_ornament, x_hair_ornament, hair_between_eyes, blue_eyes, purple_eyes, bangs, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 90 | 57.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ratura_lapisrelights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 90 | 47.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ratura_lapisrelights/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 183 | 87.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ratura_lapisrelights/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 90 | 57.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ratura_lapisrelights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 183 | 103.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ratura_lapisrelights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ratura_lapisrelights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, solo, black_gloves, fingerless_gloves, closed_mouth, capelet, upper_body, outdoors, blush, medium_breasts, tree |
| 1 | 9 |  |  |  |  |  | blush, 2girls, school_uniform, solo_focus, collarbone, closed_mouth, short_sleeves, hairclip, outdoors, pink_hair, smile |
| 2 | 17 |  |  |  |  |  | 1girl, solo, closed_mouth, blush, school_uniform, smile, anime_coloring, blurry_background, collarbone, hairclip, indoors, portrait, shirt, low_twintails, looking_at_viewer, upper_body |
| 3 | 5 |  |  |  |  |  | 1girl, closed_mouth, hat, sailor_collar, smile, solo, white_headwear, sleeveless_dress, standing, white_dress, looking_at_viewer, sailor_dress, collarbone, full_body, short_dress, striped, twintails |
| 4 | 6 |  |  |  |  |  | 1girl, indoors, short_sleeves, solo, closed_mouth, collarbone, frills, sitting, skirt, smile, ascot, puffy_sleeves |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | black_gloves | fingerless_gloves | closed_mouth | capelet | upper_body | outdoors | blush | medium_breasts | tree | 2girls | school_uniform | solo_focus | collarbone | short_sleeves | hairclip | pink_hair | smile | anime_coloring | blurry_background | indoors | portrait | shirt | low_twintails | looking_at_viewer | hat | sailor_collar | white_headwear | sleeveless_dress | standing | white_dress | sailor_dress | full_body | short_dress | striped | twintails | frills | sitting | skirt | ascot | puffy_sleeves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------------|:--------------------|:---------------|:----------|:-------------|:-----------|:--------|:-----------------|:-------|:---------|:-----------------|:-------------|:-------------|:----------------|:-----------|:------------|:--------|:-----------------|:--------------------|:----------|:-----------|:--------|:----------------|:--------------------|:------|:----------------|:-----------------|:-------------------|:-----------|:--------------|:---------------|:------------|:--------------|:----------|:------------|:---------|:----------|:--------|:--------|:----------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | | | | | X | | | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 17 |  |  |  |  |  | X | X | | | X | | X | | X | | | | X | | X | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | | | X | | | | | | | | | | X | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | | | X | | | | | | | | | | X | X | | | X | | | X | | | | | | | | | | | | | | | | X | X | X | X | X |
|
stefan-it/co-funer | ---
license: mit
task_categories:
- token-classification
language:
- de
---
# CO-Fun: A German Dataset on Company Outsourcing in Fund Prospectuses for Named Entity Recognition and Relation Extraction
This unofficial dataset repository provides a CoNLL-like version of the CO-Fun **NER** dataset, which was proposed in the CO-Fun paper (https://arxiv.org/abs/2403.15322):
> The process of cyber mapping gives insights in relationships among financial entities and service providers. Centered around the outsourcing practices of companies within fund prospectuses in Germany, we introduce a dataset specifically designed for named entity recognition and relation extraction tasks. The labeling process on 948 sentences was carried out by three experts which yields to 5,969 annotations for four entity types (Outsourcing, Company, Location and Software) and 4,102 relation annotations (Outsourcing-Company, Company-Location). State-of-the-art deep learning models were trained to recognize entities and extract relations showing first promising results.
## Preprocessing
The notebook [Export-To-CoNLL.ipynb](Export-To-CoNLL.ipynb) performs the necessary steps to create a CoNLL-like version of the CO-Fun dataset that can easily be used for fine-tuning NER models.
Additionally, the [FlairDatasetTest.ipynb](FlairDatasetTest.ipynb) notebook loads the dataset with the Flair dataset loader and checks whether the number of parsed sentences is correct and identical to the number of sentences reported in the official CO-Fun paper.
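For illustration, a CoNLL-like file of this kind can be read with a few lines of plain Python. The two-column token/tag layout and the example sentence below are assumptions; see Export-To-CoNLL.ipynb for the exact format used:

```python
# Sketch of a CoNLL-style reader: one "token tag" pair per line,
# blank lines separating sentences. Column layout is assumed here.
def read_conll(lines):
    sentences, current = [], []
    for line in lines:
        line = line.strip()
        if not line:
            # Blank line ends the current sentence.
            if current:
                sentences.append(current)
                current = []
        else:
            token, tag = line.split()[:2]
            current.append((token, tag))
    if current:
        sentences.append(current)
    return sentences

# Hypothetical two-line example in the assumed format.
example = [
    "Die O",
    "Verwahrstelle B-Auslagerung",
    "",
]
print(read_conll(example))
```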
## Named Entities
The CO-Fun dataset provides annotations for the following Named Entities:
* `Auslagerung` (engl. outsourcing)
* `Unternehmen` (engl. company)
* `Ort` (engl. location)
* `Software`
# Example: Load Dataset with Flair library
The notebook [FlairDatasetExample.ipynb](FlairDatasetExample.ipynb) shows how to load the dataset with the awesome [Flair library](https://github.com/flairNLP/flair).
# Changelog
* 25.03.2024: Initial version of the preprocessed CO-Fun NER dataset is released.
# License
The original CO-Fun dataset is released under the MIT license. Thus, this preprocessed version is also licensed under MIT. |
nateraw/quick-captioning-dataset-test | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 345244.0
num_examples: 4
download_size: 0
dataset_size: 345244.0
---
# Dataset Card for "quick-captioning-dataset-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BramVanroy/ultra_feedback_dutch_cleaned | ---
language:
- nl
dataset_info:
- config_name: default
features:
- name: prompt
dtype: string
- name: GEITje-7B-ultra
dtype: string
- name: gpt-4-turbo
dtype: string
- name: rating_conciseness_GEITje-7B-ultra
dtype: int64
- name: rating_conciseness_gpt-4-turbo
dtype: int64
- name: rating_dutchness_GEITje-7B-ultra
dtype: int64
- name: rating_dutchness_gpt-4-turbo
dtype: int64
- name: rating_helpfulness_GEITje-7B-ultra
dtype: int64
- name: rating_helpfulness_gpt-4-turbo
dtype: int64
- name: rating_avg_GEITje-7B-ultra
dtype: float64
- name: rating_avg_gpt-4-turbo
dtype: float64
splits:
- name: train
num_bytes: 238549993.0
num_examples: 50820
download_size: 136381277
dataset_size: 238549993.0
- config_name: dpo_all
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 276826879.25
num_examples: 48279
- name: test_prefs
num_bytes: 14569835.75
num_examples: 2541
download_size: 165576369
dataset_size: 291396715.0
- config_name: dpo_hq
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 55192382.49245088
num_examples: 9186
- name: test_prefs
num_bytes: 2908024.507549121
num_examples: 484
download_size: 33267119
dataset_size: 58100407.0
- config_name: sft_gpt4_all
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_sft
num_bytes: 145093644.4
num_examples: 48279
- name: test_sft
num_bytes: 7636507.6
num_examples: 2541
download_size: 87206558
dataset_size: 152730152.0
- config_name: sft_gpt4_hq
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_sft
num_bytes: 61513259.16137732
num_examples: 19726
- name: test_sft
num_bytes: 3240001.8386226823
num_examples: 1039
download_size: 37187813
dataset_size: 64753261.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: dpo_all
data_files:
- split: train_prefs
path: dpo_all/train_prefs-*
- split: test_prefs
path: dpo_all/test_prefs-*
- config_name: dpo_hq
data_files:
- split: train_prefs
path: dpo_hq/train_prefs-*
- split: test_prefs
path: dpo_hq/test_prefs-*
- config_name: sft_gpt4_all
data_files:
- split: train_sft
path: sft_gpt4_all/train_sft-*
- split: test_sft
path: sft_gpt4_all/test_sft-*
- config_name: sft_gpt4_hq
data_files:
- split: train_sft
path: sft_gpt4_hq/train_sft-*
- split: test_sft
path: sft_gpt4_hq/test_sft-*
---
# Ultra Feedback Dutch Cleaned
This is a cleaned version of [BramVanroy/ultra_feedback_dutch](https://huggingface.co/datasets/BramVanroy/ultra_feedback_dutch), based on the [cleaning](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned) done by Argilla on the original Ultra Feedback dataset.
After cleaning, I also generated replies with other models (like TowerInstruct and Mistral), but the results were too poor (in Dutch) to include, so we only kept the GEITje Ultra and gpt-4-turbo generations. For both of these models we then had gpt-4-1106-preview rate different aspects of the responses: Dutch-ness, Helpfulness, and Conciseness (see "Prompts" below).
The motivation for this dataset was heavily community-inspired. Most thanks go out to [David Berenstein](https://huggingface.co/davidberenstein1957) and [Edwin Rijgersberg](https://huggingface.co/Rijgersberg)!
## Usage
The default dataset contains all the original information (after cleaning). For actual usage, you should use one of the subsets. All subsets have a test split of 5%.
```python
from datasets import load_dataset
ds = load_dataset("BramVanroy/ultra_feedback_dutch_cleaned", "sft_gpt4_hq")
```
- `sft_gpt4_all` (50.8k): for instruction tuning; only the GPT-4 generations are kept. No further filtering.
- `sft_gpt4_hq` (20.8k): for instruction tuning; only high-quality GPT-4 generations are kept. That means: an average score of at least 4.5 and no individual score below 4.0.
- `dpo_all` (50.8k): for preference tuning; no further filtering. The model with the highest average score is chosen as `chosen`, the other as `rejected`. In case of a tie, GPT-4 wins.
- `dpo_hq` (9.67k): for preference tuning. Only contains data where the average score of both models is at least 4.0 and no individual score is below 3.5. Furthermore, the absolute difference between the two models' average scores must be between 0.25 and 2.0. The model with the highest average score is chosen as `chosen`, the other as `rejected`. In case of a tie, GPT-4 wins.
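The `hq` thresholds above can be sketched as simple predicates. This is a hedged reading of the description, not the actual script used to build the configs; `ratings` holds the three per-criterion scores of one model on one sample:

```python
# Sketch of the sft_gpt4_hq criterion: average >= 4.5, no score below 4.0.
def is_sft_hq(ratings):
    avg = sum(ratings) / len(ratings)
    return avg >= 4.5 and min(ratings) >= 4.0

# Sketch of the dpo_hq criterion: both averages >= 4.0, no score below 3.5,
# and an absolute average difference between 0.25 and 2.0.
def is_dpo_hq(ratings_a, ratings_b):
    avg_a = sum(ratings_a) / len(ratings_a)
    avg_b = sum(ratings_b) / len(ratings_b)
    return (
        min(avg_a, avg_b) >= 4.0
        and min(ratings_a + ratings_b) >= 3.5
        and 0.25 <= abs(avg_a - avg_b) <= 2.0
    )

print(is_sft_hq([5, 5, 4]))             # True
print(is_dpo_hq([5, 4, 4], [4, 4, 4]))  # True
```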
## Preprocessing
First, the low-quality/contaminated samples [as removed in the English cleaned version](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned) were also removed here.
Second, the data was deduplicated on all three text columns individually (model 1, model 2, prompt).
Lastly, more specific filters were applied:
- samples that were not identified as Dutch by fastText were removed
- samples with non-Latin characters are removed (very strict filtering, removes any translation tasks with non-Latin languages)
- samples with occurrences of "AI-assistent" or "AI-taalmodel" (and other derivations) are removed because these are often responses in the sense of "As an AI model, I cannot ...", which is not too useful
- samples with mentions of ChatGPT, GPT 3/4, OpenAI or ShareGPT are removed
- samples with mentions of the typical "knowledge cutoff" are removed
- samples with apologies such as "spijt me" are removed, as we are more interested in factual information and content-filled responses
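Purely for illustration, the phrase-based filters could look like the sketch below. The actual filtering was done with separate tooling (`interactive-filter-dutch`), so these patterns are guesses rather than the exact ones used, and the fastText language check is omitted since it needs a downloaded model:

```python
import re

# Illustrative patterns for the removal criteria described above.
BANNED = re.compile(
    r"AI-assistent|AI-taalmodel|ChatGPT|GPT-?[34]|OpenAI|ShareGPT|spijt me",
    re.IGNORECASE,
)
# Reject anything outside ASCII, Latin supplements/extensions, and
# general punctuation (a strict non-Latin filter).
NON_LATIN = re.compile(r"[^\x00-\x7F\u00C0-\u024F\u2000-\u206F]")

def keep_sample(text):
    return not BANNED.search(text) and not NON_LATIN.search(text)

print(keep_sample("Als AI-taalmodel kan ik dat niet."))  # False
print(keep_sample("Dit is een prima antwoord."))         # True
```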
## Prompts
These were originally made by [David Berenstein](https://huggingface.co/davidberenstein1957) at [Argilla](https://huggingface.co/argilla). I modified those slightly and used my own querying library.
### System prompt
> Je bent een automatische annotator die de kwaliteit van de tekst van een AI-model beoordeelt aan de hand van gegeven criteria. De tekst van het AI-model is een reactie op een gegeven instructie en moet die instructie dus goed beantwoorden of volgen.
### User prompt
For every model we query GPT4 multiple times, once for each criterion. We investigated three criteria: Dutch-ness (how good is the model's Dutch output), Helpfulness (how relevant is the model's reply), and Conciseness (how to-the-point is the model).
Below you find the template and criteria. `criterion_options` is a list of the options for a given criterion, each formatted with `opt_template`.
```python
template = """Het volgende is een instructie geschreven door een mens (`Instructie:`), en een reactie op de instructie geschreven door een AI-model (`Reactie:`). Beoordeel de kwaliteit van de reactie van het AI-model, rekening houdend met de gegeven opties (`Opties:`).
Instructie:
{prompt}
---
Reactie:
{response}
---
Criteria: {criterion_question}
Opties:
{criterion_options}
---
Je antwoord moet in het volgende formaat zijn:
<rating>[{{min_score}}-{{max_score}}]</rating>
bijvoorbeeld:
<rating>3</rating>
---
Beoordeel nu alsjeblieft de `Reactie:` met een rating op basis van de `Opties:`. Geef geen extra uitleg."""
opt_template = """\
- {score}: {beschrijving}\
"""
criteria = {
    "dutchness": {
        "criterion_question": "Is de reactie in vlot en grammaticaal correct Nederlands geschreven? Negeer code-fragmenten in je analyse en richt je enkel op de doorlopende tekst. Leenwoorden uit andere talen mogen gebruikt worden als dat gebruikelijk is in het domein (bv. bij software). Een hogere score duidt op beter Nederlands taalgebruik.",
        "criterion_options": {
            1: "De reactie is onleesbaar, bevat veel grammaticale fouten, of is in slecht Nederlands geschreven.",
            2: "De reactie is moeilijk te begrijpen of bevat veel grammaticale fouten.",
            3: "De reactie is begrijpelijk maar bevat enkele grammaticale fouten.",
            4: "De reactie is goed geschreven en bevat weinig grammaticale fouten.",
            5: "De reactie is uitstekend geschreven, vlot leesbaar en bevat geen grammaticale fouten.",
        },
    },
    "helpfulness": {
        "criterion_question": "Is de reactie relevant en behulpzaam? Beantwoordt het model de instructie goed? Een hogere score duidt op een relevantere en behulpzamere reactie.",
        "criterion_options": {
            1: "De reactie is helemaal niet relevant of heeft aanzienlijke afwijkingen.",
            2: "De reactie is slechts enigszins relevant maar is niet concreet.",
            3: "De reactie is min of meer relevant en geeft een relevant antwoord.",
            4: "De reactie is grotendeels relevant en lijkt zeer nuttig.",
            5: "De reactie biedt briljante ideeën die de taak nauwkeurig aanpakken.",
        },
    },
    "conciseness": {
        "criterion_question": "Is de reactie beknopt en ter zake, zonder onnodige herhaling of uitweiding? Een hogere score duidt op een beknoptere, duidelijkere reactie.",
        "criterion_options": {
            1: "De reactie bevat overmatige herhaling of onnodige uitweiding.",
            2: "De reactie is nogal omslachtig.",
            3: "De reactie is redelijk beknopt met minimaal onnodige inhoud.",
            4: "De reactie is beknopt en ter zake, met minimaal onnodige inhoud.",
            5: "De reactie is uitzonderlijk beknopt en verstrekt informatie efficiënt.",
        },
    },
}
```
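For illustration only, the `template` and `opt_template` above could be rendered into a single judge prompt along the lines of the sketch below. The helper functions are hypothetical (not part of the original pipeline); the minimal templates stand in for the full Dutch ones above.

```python
# Hypothetical rendering helpers, not part of the original script.

opt_template = "- {score}: {beschrijving}"

def render_options(criterion_options: dict) -> str:
    """Format the score/description pairs as the `Opties:` block."""
    return "\n".join(
        opt_template.format(score=score, beschrijving=desc)
        for score, desc in sorted(criterion_options.items())
    )

def render_prompt(template: str, prompt: str, response: str, criterion: dict) -> str:
    """Fill the rating template for one (prompt, response) pair and criterion."""
    return template.format(
        prompt=prompt,
        response=response,
        criterion_question=criterion["criterion_question"],
        criterion_options=render_options(criterion["criterion_options"]),
    )
```

Note that in the full template above, the doubled braces around `min_score`/`max_score` survive this first `.format` call as single braces, presumably for a later substitution of the score range.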
## Rating segmentation script
Note that data filtering and deduplication were done separately, based on [`interactive-filter-dutch`](https://github.com/BramVanroy/dutch-instruction-datasets). The following script simply creates the configs.
```python
from typing import Literal
from datasets import load_dataset
ds = load_dataset("BramVanroy/ultra_feedback_dutch_cleaned", split="train")
model_cols = ["GEITje-7B-ultra", "gpt-4-turbo"]
model_ratings_no_avg_cols = {m: [c for c in ds.column_names if m in c and "rating" in c and "avg" not in c] for m in model_cols}
model_ratings_avg_cols = {m: f"rating_avg_{m}" for m in model_cols}
print("original dataset", ds.shape)
def filter_score_single(sample, model_name: str, rating_type: Literal["any", "all", "avg"], threshold: float = 3.5):
    if rating_type == "any":
        return any(sample[r] >= threshold for r in model_ratings_no_avg_cols[model_name])
    elif rating_type == "all":
        return all(sample[r] >= threshold for r in model_ratings_no_avg_cols[model_name])
    elif rating_type == "avg":
        return sample[model_ratings_avg_cols[model_name]] >= threshold
    else:
        raise ValueError(f"Invalid rating_type: {rating_type}")

def as_messages(sample, model_name: str):
    messages = [
        {"role": "user", "content": sample["prompt"]},
        {"role": "assistant", "content": sample[model_name]},
    ]
    return {"messages": messages}

def as_chosen_reject(sample):
    model_chosen = "GEITje-7B-ultra" if sample["rating_avg_GEITje-7B-ultra"] > sample["rating_avg_gpt-4-turbo"] else "gpt-4-turbo"
    model_rejected = "GEITje-7B-ultra" if model_chosen == "gpt-4-turbo" else "gpt-4-turbo"
    chosen = [
        {"role": "user", "content": sample["prompt"]},
        {"role": "assistant", "content": sample[model_chosen]},
    ]
    rejected = [
        {"role": "user", "content": sample["prompt"]},
        {"role": "assistant", "content": sample[model_rejected]},
    ]
    return {"chosen": chosen, "rejected": rejected}

def diff_filter(sample, min_diff: float, max_diff: float):
    rating1 = sample[model_ratings_avg_cols["gpt-4-turbo"]]
    rating2 = sample[model_ratings_avg_cols["GEITje-7B-ultra"]]
    diff = abs(rating1 - rating2)
    return min_diff <= diff <= max_diff
# FOR SFT: ALL
# ds_all_sft = ds.map(lambda x: as_messages(x, "gpt-4-turbo"), num_proc=64)
# ds_all_sft = ds_all_sft.train_test_split(test_size=0.05, seed=42)
# ds_all_sft["train_sft"] = ds_all_sft["train"]
# ds_all_sft["test_sft"] = ds_all_sft["test"]
# del ds_all_sft["train"]
# del ds_all_sft["test"]
# ds_all_sft = ds_all_sft.select_columns(["prompt", "messages"])
# ds_all_sft.push_to_hub("BramVanroy/ultra_feedback_dutch_cleaned", config_name="sft_gpt4_all")
# FOR SFT: High quality GPT-4 generations
ds_gpt4_hq = ds.filter(lambda x: filter_score_single(x, "gpt-4-turbo", "avg", 4.5), num_proc=64)
ds_gpt4_hq = ds_gpt4_hq.filter(lambda x: filter_score_single(x, "gpt-4-turbo", "all", 4.0), num_proc=64)
ds_gpt4_hq = ds_gpt4_hq.map(lambda x: as_messages(x, "gpt-4-turbo"), num_proc=64)
ds_gpt4_hq = ds_gpt4_hq.select_columns(["prompt", "messages"])
ds_gpt4_hq = ds_gpt4_hq.train_test_split(test_size=0.05, seed=42)
ds_gpt4_hq["train_sft"] = ds_gpt4_hq["train"]
ds_gpt4_hq["test_sft"] = ds_gpt4_hq["test"]
del ds_gpt4_hq["train"]
del ds_gpt4_hq["test"]
ds_gpt4_hq.push_to_hub("BramVanroy/ultra_feedback_dutch_cleaned", config_name="sft_gpt4_hq")
print("gpt4_hq", ds_gpt4_hq.shape)
# FOR DPO: ALL - highest avg model is picked
ds_all_dpo = ds.map(as_chosen_reject, num_proc=64)
ds_all_dpo = ds_all_dpo.select_columns(["prompt", "chosen", "rejected"])
ds_all_dpo = ds_all_dpo.train_test_split(test_size=0.05, seed=42)
ds_all_dpo["train_prefs"] = ds_all_dpo["train"]
ds_all_dpo["test_prefs"] = ds_all_dpo["test"]
del ds_all_dpo["train"]
del ds_all_dpo["test"]
ds_all_dpo.push_to_hub("BramVanroy/ultra_feedback_dutch_cleaned", config_name="dpo_all")
# FOR DPO: High quality - highest avg model is picked
# + Min. avg score of 4.0, min. all scores of 3.5. Min diff. of 0.25, max diff. of 2.
ds_dpo_hq = ds.filter(lambda x: filter_score_single(x, "gpt-4-turbo", "avg", 4.0), num_proc=64)
ds_dpo_hq = ds_dpo_hq.filter(lambda x: filter_score_single(x, "gpt-4-turbo", "all", 3.5), num_proc=64)
ds_dpo_hq = ds_dpo_hq.filter(lambda x: filter_score_single(x, "GEITje-7B-ultra", "avg", 4.0), num_proc=64)
ds_dpo_hq = ds_dpo_hq.filter(lambda x: filter_score_single(x, "GEITje-7B-ultra", "all", 3.5), num_proc=64)
ds_dpo_hq = ds_dpo_hq.filter(lambda x: diff_filter(x, 0.25, 2), num_proc=64)
ds_dpo_hq = ds_dpo_hq.map(as_chosen_reject, num_proc=64)
ds_dpo_hq = ds_dpo_hq.select_columns(["prompt", "chosen", "rejected"])
ds_dpo_hq = ds_dpo_hq.train_test_split(test_size=0.05, seed=42)
ds_dpo_hq["train_prefs"] = ds_dpo_hq["train"]
ds_dpo_hq["test_prefs"] = ds_dpo_hq["test"]
del ds_dpo_hq["train"]
del ds_dpo_hq["test"]
ds_dpo_hq.push_to_hub("BramVanroy/ultra_feedback_dutch_cleaned", config_name="dpo_hq")
# Geitje avg score higher than gpt 4 avg score
# ds_geitje_higher = ds.filter(lambda x: x[model_ratings_avg_cols["GEITje-7B-ultra"]] > x[model_ratings_avg_cols["gpt-4-turbo"]], num_proc=64)
# print(ds_geitje_higher.shape)
```
|
saibo/bookcorpus_compact_1024_shard1_of_10_meta | ---
dataset_info:
features:
- name: text
dtype: string
- name: concept_with_offset
dtype: string
- name: cid_arrangement
sequence: int32
- name: schema_lengths
sequence: int64
- name: topic_entity_mask
sequence: int64
- name: text_lengths
sequence: int64
splits:
- name: train
num_bytes: 7450626244
num_examples: 61605
download_size: 1631069561
dataset_size: 7450626244
---
# Dataset Card for "bookcorpus_compact_1024_shard1_of_10_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pk1762006/Realty2 | ---
license: mit
---
|
deu05232/multiwoz_v23_2 | ---
dataset_info:
features:
- name: intent
sequence: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5836889
num_examples: 54176
- name: validation
num_bytes: 777785
num_examples: 7084
- name: test
num_bytes: 772136
num_examples: 7056
download_size: 2518039
dataset_size: 7386810
---
# Dataset Card for "multiwoz_v23_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/muskV2 | ---
language:
- en
tags:
- musk
- tabular_classification
- binary_classification
- multiclass_classification
pretty_name: Musk
size_categories:
- 100<n<1K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- musk
---
# Musk
The [Musk dataset](https://archive.ics.uci.edu/ml/datasets/Musk) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
The dataset describes molecule conformations; the task is to predict whether a molecule is a musk.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------|
| musk | Binary classification | Is the molecule a musk?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/muskV2")["train"]
```
|
yan1984/pegasus-samsum | ---
license: mit
---
|
rai-sandeep/whitepaper-data | ---
dataset_info:
features:
- name: task
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 340930
num_examples: 22
download_size: 179210
dataset_size: 340930
---
# Dataset Card for "whitepaper-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yeniceriSGK/Falcon1BTestingDataSet | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 19657
num_examples: 10
download_size: 20918
dataset_size: 19657
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BevenRozario/job_desc_5k | ---
dataset_info:
features:
- name: Instruction
dtype: string
- name: Response
dtype: string
splits:
- name: train_dataset
num_bytes: 8140016.7
num_examples: 4500
- name: eval_dataset
num_bytes: 904446.3
num_examples: 500
download_size: 2283111
dataset_size: 9044463.0
configs:
- config_name: default
data_files:
- split: train_dataset
path: data/train_dataset-*
- split: eval_dataset
path: data/eval_dataset-*
---
|
DylanonWic/common_voice_10_1_th_augmented_pitch | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: input_ids
sequence: int32
- name: input_values
sequence: float32
splits:
- name: train
num_bytes: 7093139791
num_examples: 28696
- name: test
num_bytes: 3163850075.5886087
num_examples: 10123
- name: validation
num_bytes: 2976158781.6036987
num_examples: 10009
download_size: 12714099625
dataset_size: 13233148648.192307
---
# Dataset Card for "common_voice_10_1_th_augmented_pitch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vvtq/control_val_10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: noised
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 15015921.0
num_examples: 11
download_size: 15018492
dataset_size: 15015921.0
---
# Dataset Card for "control_val_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MAdAiLab/lex_glue_scotus | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
'5': '6'
'6': '7'
'7': '8'
'8': '9'
'9': '10'
'10': '11'
'11': '12'
'12': '13'
splits:
- name: train
num_bytes: 178959316
num_examples: 5000
- name: test
num_bytes: 76213279
num_examples: 1400
- name: validation
num_bytes: 75600243
num_examples: 1400
download_size: 173411381
dataset_size: 330772838
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
eswardivi/1_MSA_PHASE | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: Name
dtype: string
- name: Label
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 384226713.0
num_examples: 116
download_size: 382442220
dataset_size: 384226713.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
phongmt184172/python_data_27k | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 39244063.17425801
num_examples: 19056
- name: test
num_bytes: 8410618.912870996
num_examples: 4084
- name: val
num_bytes: 8410618.912870996
num_examples: 4084
download_size: 23588770
dataset_size: 56065301.0
---
# Dataset Card for "python_data_27k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cburger/md_cleaned | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ' Allergy / Immunology'
'1': ' Autopsy'
'2': ' Bariatrics'
'3': ' Cardiovascular / Pulmonary'
'4': ' Chiropractic'
'5': ' Consult - History and Phy.'
'6': ' Cosmetic / Plastic Surgery'
'7': ' Dentistry'
'8': ' Dermatology'
'9': ' Diets and Nutritions'
'10': ' Discharge Summary'
'11': ' ENT - Otolaryngology'
'12': ' Emergency Room Reports'
'13': ' Endocrinology'
'14': ' Gastroenterology'
'15': ' General Medicine'
'16': ' Hematology - Oncology'
'17': ' Hospice - Palliative Care'
'18': ' IME-QME-Work Comp etc.'
'19': ' Lab Medicine - Pathology'
'20': ' Letters'
'21': ' Nephrology'
'22': ' Neurology'
'23': ' Neurosurgery'
'24': ' Obstetrics / Gynecology'
'25': ' Office Notes'
'26': ' Ophthalmology'
'27': ' Orthopedic'
'28': ' Pain Management'
'29': ' Pediatrics - Neonatal'
'30': ' Physical Medicine - Rehab'
'31': ' Podiatry'
'32': ' Psychiatry / Psychology'
'33': ' Radiology'
'34': ' Rheumatology'
'35': ' SOAP / Chart / Progress Notes'
'36': ' Sleep Medicine'
'37': ' Speech - Language'
'38': ' Surgery'
'39': ' Urology'
splits:
- name: train
num_bytes: 15217210
num_examples: 4948
download_size: 7196712
dataset_size: 15217210
---
# Dataset Card for "md_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/rbrt_eval_sur_full_lrg | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 58030544
num_examples: 22480
download_size: 16743699
dataset_size: 58030544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_eval_sur_full_lrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AI4EPS/quakeflow_nc | ---
license: mit
---
# Quakeflow_NC
## Introduction
This dataset is part of the data (1970-2020) from [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below; you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/).
Cite the NCEDC and PhaseNet:
Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.
Acknowledge the NCEDC:
Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
```
Group: / len:16227
|- Group: /nc71111584 len:2
| |-* begin_time = 2020-01-02T07:01:19.620
| |-* depth_km = 3.69
| |-* end_time = 2020-01-02T07:03:19.620
| |-* event_id = nc71111584
| |-* event_time = 2020-01-02T07:01:48.240
| |-* event_time_index = 2862
| |-* latitude = 37.6545
| |-* longitude = -118.8798
| |-* magnitude = -0.15
| |-* magnitude_type = D
| |-* num_stations = 2
| |- Dataset: /nc71111584/NC.MCB..HH (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
| | | |-* distance_km = 1.9
| | | |-* dt_s = 0.01
| | | |-* elevation_m = 2391.0
| | | |-* emergence_angle = 159.0
| | | |-* event_id = ['nc71111584' 'nc71111584']
| | | |-* latitude = 37.6444
| | | |-* location =
| | | |-* longitude = -118.8968
| | | |-* network = NC
| | | |-* phase_index = [3000 3101]
| | | |-* phase_polarity = ['U' 'N']
| | | |-* phase_remark = ['IP' 'ES']
| | | |-* phase_score = [1 2]
| | | |-* phase_time = ['2020-01-02T07:01:49.620' '2020-01-02T07:01:50.630']
| | | |-* phase_type = ['P' 'S']
| | | |-* snr = [2.82143 3.055604 1.8412642]
| | | |-* station = MCB
| | | |-* unit = 1e-6m/s
| |- Dataset: /nc71111584/NC.MCB..HN (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
......
```
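As a sketch of how this hierarchy can be traversed, the helper below walks the event/trace tree shown above. It is hypothetical (not part of the official loader) and is written against the duck-typed mapping interface (`.items()`) that `h5py` groups expose, so it works on an open `h5py.File` or on any nested mapping of `event_id -> {trace_name -> dataset}`.

```python
# Hypothetical traversal sketch, not part of the official loading script.

def iter_traces(root):
    """Yield (event_id, trace_name, dataset) for every waveform entry."""
    for event_id, event in root.items():
        for trace_name, dataset in event.items():
            yield event_id, trace_name, dataset

# With a real file this would be used roughly as:
# import h5py
# with h5py.File("waveform.h5", "r") as f:
#     for event_id, trace_name, ds in iter_traces(f):
#         print(event_id, trace_name, ds.shape)  # e.g. (3, 12000)
```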
## How to use
### Requirements
- datasets
- h5py
- fsspec
- torch (for PyTorch)
### Usage
Import the necessary packages:
```python
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader
from datasets import load_dataset
```
We have 6 configurations for the dataset:
- "station"
- "event"
- "station_train"
- "event_train"
- "station_test"
- "event_test"
"station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations without a suffix cover the full dataset, while the configurations with the suffix "_train" or "_test" only contain the corresponding split. The train split contains data from 1970 to 2019, while the test split contains data from 2020.
The sample of `station` is a dictionary with the following keys:
- `data`: the waveform with shape `(3, nt)`, the default time length is 8192
- `phase_pick`: the probability of the phase pick with shape `(3, nt)`, the first dimension is noise, P and S
- `event_location`: the event location with shape `(4,)`, including latitude, longitude, depth and time
- `station_location`: the station location with shape `(3,)`, including latitude, longitude and depth
The sample of `event` is a dictionary with the following keys:
- `data`: the waveform with shape `(n_station, 3, nt)`, the default time length is 8192
- `phase_pick`: the probability of the phase pick with shape `(n_station, 3, nt)`, the first dimension is noise, P and S
- `event_center`: the probability of the event time with shape `(n_station, feature_nt)`, default feature time length is 512
- `event_location`: the space-time coordinates of the event with shape `(n_station, 4, feature_nt)`
- `event_location_mask`: the probability mask of the event time with shape `(n_station, feature_nt)`
- `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude and depth
The default configuration is `station_test`. You can specify the configuration with the `name` argument. For example:
```python
# load dataset
# ATTENTION: streaming (IterableDataset) is hard to support because of how HDF5 files are read,
# so we recommend loading the dataset directly and converting it to an iterable dataset afterwards.
# The dataset is very large, so the first load can take a while.
# load "station_test" with the test split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test")
# or
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# to load "event" with train split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event", split="train")
```
#### Usage for `station`
You can then convert the dataset into a PyTorch-compatible iterable dataset and view the first sample:
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# for PyTorch DataLoader, we need to divide the dataset into several shards
num_workers=4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
# formatting examples as torch tensors is not implemented for iterable datasets yet,
# so we add the conversion manually with `map`;
# if you use the regular (non-iterable) dataset, just use
# quakeflow_nc.with_format("torch")
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
if not isinstance(quakeflow_nc, torch.utils.data.IterableDataset):
    raise TypeError("quakeflow_nc is not an IterableDataset")

# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break

dataloader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers)

for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
```
#### Usage for `event`
You can then convert the dataset into a PyTorch-compatible iterable dataset and view the first sample (don't forget to reorder the keys):
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test", name="event_test")
# for PyTorch DataLoader, we need to divide the dataset into several shards
num_workers=4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
if not isinstance(quakeflow_nc, torch.utils.data.IterableDataset):
    raise TypeError("quakeflow_nc is not an IterableDataset")

# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break

dataloader = DataLoader(quakeflow_nc, batch_size=1, num_workers=num_workers)

for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
``` |
slushily/autotrain-data-hannah-jpg-test | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: hannah-jpg-test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project hannah-jpg-test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x256 RGB PIL image>",
"target": 0
},
{
"image": "<256x256 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['hannah'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7 |
| valid | 7 |
|
gryffindor-ISWS/1500_dbp_abs_withoutIMG | ---
license: gpl-3.0
language:
- en
tags:
- art
size_categories:
- 1K<n<10K
--- |
adityarra07/czech_train_data | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 669027003.0330192
num_examples: 12613
- name: test
num_bytes: 26521327.322326932
num_examples: 500
download_size: 658874865
dataset_size: 695548330.3553461
---
# Dataset Card for "czech_train_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |