| datasetId | card |
|---|---|
felgryn/mini-platypus | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245921
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pankajemplay/mistral-intent-data | ---
dataset_info:
features:
- name: User Query
dtype: string
- name: Intent
dtype: string
- name: id type
dtype: string
- name: id value
dtype: string
- name: id slot filled
dtype: bool
- name: Task
dtype: string
- name: task slot filled
dtype: bool
- name: Bot Response
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 853957
num_examples: 1171
download_size: 188944
dataset_size: 853957
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mistral-intent-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mitsuki-Sakamoto/alpaca_farm-deberta-re-pref-64-fil_self_160m_bo16_2_mix_50_kl_0.1_prm_70m_thr_0.0_seed_1_tp_0.5 | ---
dataset_info:
config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: index
dtype: int64
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: filtered_epoch
dtype: int64
- name: gen_reward
dtype: float64
- name: gen_response
dtype: string
splits:
- name: epoch_0
num_bytes: 43627296
num_examples: 18928
- name: epoch_1
num_bytes: 44147982
num_examples: 18928
- name: epoch_2
num_bytes: 44213397
num_examples: 18928
- name: epoch_3
num_bytes: 44256973
num_examples: 18928
- name: epoch_4
num_bytes: 44271592
num_examples: 18928
- name: epoch_5
num_bytes: 44273341
num_examples: 18928
- name: epoch_6
num_bytes: 44262994
num_examples: 18928
- name: epoch_7
num_bytes: 44253349
num_examples: 18928
- name: epoch_8
num_bytes: 44250328
num_examples: 18928
- name: epoch_9
num_bytes: 44244920
num_examples: 18928
- name: epoch_10
num_bytes: 44245188
num_examples: 18928
- name: epoch_11
num_bytes: 44245798
num_examples: 18928
- name: epoch_12
num_bytes: 44244347
num_examples: 18928
- name: epoch_13
num_bytes: 44244531
num_examples: 18928
- name: epoch_14
num_bytes: 44244291
num_examples: 18928
- name: epoch_15
num_bytes: 44243364
num_examples: 18928
- name: epoch_16
num_bytes: 44245334
num_examples: 18928
- name: epoch_17
num_bytes: 44244087
num_examples: 18928
- name: epoch_18
num_bytes: 44245204
num_examples: 18928
- name: epoch_19
num_bytes: 44244918
num_examples: 18928
- name: epoch_20
num_bytes: 44243496
num_examples: 18928
- name: epoch_21
num_bytes: 44245922
num_examples: 18928
- name: epoch_22
num_bytes: 44244974
num_examples: 18928
- name: epoch_23
num_bytes: 44245847
num_examples: 18928
- name: epoch_24
num_bytes: 44245653
num_examples: 18928
- name: epoch_25
num_bytes: 44245656
num_examples: 18928
- name: epoch_26
num_bytes: 44245912
num_examples: 18928
- name: epoch_27
num_bytes: 44246318
num_examples: 18928
- name: epoch_28
num_bytes: 44246995
num_examples: 18928
- name: epoch_29
num_bytes: 44247062
num_examples: 18928
download_size: 684059925
dataset_size: 1326707069
configs:
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: epoch_0
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_0-*
- split: epoch_1
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_1-*
- split: epoch_2
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_2-*
- split: epoch_3
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_3-*
- split: epoch_4
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_4-*
- split: epoch_5
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_5-*
- split: epoch_6
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_6-*
- split: epoch_7
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_7-*
- split: epoch_8
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_8-*
- split: epoch_9
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_9-*
- split: epoch_10
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_10-*
- split: epoch_11
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_11-*
- split: epoch_12
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_12-*
- split: epoch_13
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_13-*
- split: epoch_14
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_14-*
- split: epoch_15
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_15-*
- split: epoch_16
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_16-*
- split: epoch_17
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_17-*
- split: epoch_18
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_18-*
- split: epoch_19
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_19-*
- split: epoch_20
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_20-*
- split: epoch_21
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_21-*
- split: epoch_22
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_22-*
- split: epoch_23
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_23-*
- split: epoch_24
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_24-*
- split: epoch_25
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_25-*
- split: epoch_26
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_26-*
- split: epoch_27
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_27-*
- split: epoch_28
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_28-*
- split: epoch_29
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_29-*
---
|
foilfoilfoil/GGB-Discord-Data-top-6 | ---
license: other
---
|
sc3069/zx | ---
dataset_info:
features:
- name: input
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 10329536
num_examples: 350
download_size: 1991265
dataset_size: 10329536
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "zx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DigitalUmuganda/NMT_Rwandan-Gazette_parallel_data_en_kin | ---
license: cc
task_categories:
- translation
language:
- rw
- en
tags:
- kinyarwanda
- english
- machine-translation
- low-ressourced languages
pretty_name: 'NMT Rwanda Gazette parallel data '
size_categories:
- 100K<n<1M
---
## Dataset Details
### Dataset Description
This is a curated parallel dataset from the Official Gazette of the Republic of Rwanda, built by extracting corresponding English and Kinyarwanda text. French will be added to the mix in the future.
- **Curated by:** Digital Umuganda
- **Language(s) (NLP):** Kinyarwanda and English
- **License:** cc-by-4.0
### Dataset Sources
The dataset's original content was retrieved from the Rwandan Ministry of Justice [website](https://www.minijust.gov.rw/official-gazette).
## Uses
The dataset is mainly intended for machine translation; however, it can also be used for other NLP tasks such as text generation and NER.
|
AntoineBlanot/alpaca-llama2-chat | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 46095859
num_examples: 52002
download_size: 0
dataset_size: 46095859
---
# Dataset Card for "alpaca-llama2-chat"
This dataset is the [alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset formatted for [llama2-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
The default system prompt, as well as the special tokens, have all been added to make the dataset ready to train on.
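A minimal sketch of the kind of formatting described, assuming Meta's published llama2-chat template; the exact system prompt used to build this dataset is an assumption:

```python
# Hypothetical sketch of the llama2-chat formatting. The exact template and
# system prompt used to build this dataset are assumptions based on Meta's
# published llama2-chat format.
DEFAULT_SYSTEM = "You are a helpful, respectful and honest assistant."


def to_llama2_chat(instruction: str, response: str) -> str:
    # <s>[INST] <<SYS>> system <</SYS>> user [/INST] answer </s>
    return (
        f"<s>[INST] <<SYS>>\n{DEFAULT_SYSTEM}\n<</SYS>>\n\n"
        f"{instruction} [/INST] {response} </s>"
    )


formatted = to_llama2_chat("Name three colors.", "Red, green, and blue.")
```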
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reformatco/sd1_5-regularization-images | ---
license: mit
---
A collection of regularization / class instance datasets for the [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse) VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename.
There is currently a bug where HuggingFace is incorrectly reporting that the datasets are pickled. They are not pickled; they are simple ZIP files containing the images.
This dataset is based on the conventions set up by [ProGamerGov](https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images) and their very useful regularization images.
Currently this repository contains the following datasets (each named after the prompt used to generate it):
* "**interior design**": 2354 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. |
dog/fuego-20230222-154818-7e82ca | ---
tags:
- fuego
fuego:
id: 20230222-154818-7e82ca
status: done
script: run.py
requirements_file: requirements.txt
space_id: dog/fuego-runner
space_hardware: cpu-basic
---
|
davanstrien/raw-tldr-dataset-sft | ---
dataset_info:
features:
- name: datasetId
dtype: string
- name: author
dtype: string
- name: last_modified
dtype: timestamp[us, tz=UTC]
- name: downloads
dtype: int64
- name: likes
dtype: int64
- name: tags
sequence: string
- name: task_categories
sequence: string
- name: createdAt
dtype: timestamp[us, tz=UTC]
- name: card
dtype: string
- name: parsed_card
dtype: string
- name: length
dtype: int64
- name: input
dtype: string
- name: generation_model
sequence: string
- name: generation_prompt
sequence: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
splits:
- name: train
num_bytes: 83205399
num_examples: 3132
download_size: 35242193
dataset_size: 83205399
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shredder-31/Min_Sum_SummarizationData | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 28776440
num_examples: 1500
- name: dev
num_bytes: 5861659
num_examples: 300
- name: test
num_bytes: 3887278
num_examples: 200
download_size: 17684981
dataset_size: 38525377
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
|
liuyanchen1015/MULTI_VALUE_wnli_reduplicate_interrogative | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 127
num_examples: 1
- name: train
num_bytes: 1774
num_examples: 10
download_size: 5923
dataset_size: 1901
---
# Dataset Card for "MULTI_VALUE_wnli_reduplicate_interrogative"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Francesco/peixos-fish | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': peixos
'1': peix
'2': taca
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: peixos-fish
tags:
- rf100
---
# Dataset Card for peixos-fish
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/peixos-fish
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
peixos-fish
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
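Since `bbox` follows the COCO `[x_min, y_min, width, height]` convention linked above, a small helper (the function name is ours, for illustration) converts a box to corner coordinates:

```python
from typing import List


def coco_to_corners(bbox: List[float]) -> List[float]:
    # COCO boxes are [x_min, y_min, width, height]; return the
    # [x_min, y_min, x_max, y_max] corner form instead.
    x, y, w, h = bbox
    return [x, y, x + w, y + h]


# First box from the sample instance above.
corners = coco_to_corners([302.0, 109.0, 73.0, 52.0])  # [302.0, 109.0, 375.0, 161.0]
```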
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/peixos-fish
### Citation Information
```
@misc{ peixos-fish,
title = { peixos fish Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/peixos-fish } },
url = { https://universe.roboflow.com/object-detection/peixos-fish },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
jlbaker361/multiplication_whole | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1255311.0
num_examples: 29376
- name: test
num_bytes: 139479.0
num_examples: 3264
download_size: 896516
dataset_size: 1394790.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "multiplication_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Enno-Ai/fr-instructs | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 5904510661
num_examples: 11794112
download_size: 1623654660
dataset_size: 5904510661
license: cc-by-2.5
task_categories:
- text2text-generation
- table-question-answering
language:
- fr
size_categories:
- 10M<n<100M
---
# A collection of 12 million French-only instructions deduplicated from various sources
Sources:
- clips/mqa-fr-faq
- multilingual-wikihow-qa-16k
- MBZUAI/Bactrian-X
- argilla/databricks-dolly-15k-curated-multilingual
- innermost47/alpaca-fr
- etalab-ia/piaf |
strombergnlp/twitter_pos_vcb | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- part-of-speech
paperswithcode_id: twitter-pos-vcb
pretty_name: Twitter PoS VCB
---
# Dataset Card for "twitter-pos-vcb"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://gate.ac.uk/wiki/twitter-postagger.html](https://gate.ac.uk/wiki/twitter-postagger.html)
- **Repository:** [https://github.com/GateNLP/gateplugin-Twitter](https://github.com/GateNLP/gateplugin-Twitter)
- **Paper:** [https://aclanthology.org/R13-1026.pdf](https://aclanthology.org/R13-1026.pdf)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 4.51 MiB
- **Size of the generated dataset:** 26.88 MB
- **Total amount of disk used:** 31.39 MB
### Dataset Summary
Part-of-speech tagging is a basic NLP task. However, Twitter text
is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
This dataset is the vote-constrained bootstrapped data generated to support state-of-the-art results.
The data comprises about 1.5 million English tweets annotated for part of speech using Ritter's extension of the PTB tagset.
The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged
jointly using the CMU ARK tagger and Ritter's T-POS tagger. A tweet is added to the dataset
only when both taggers' outputs are completely compatible over the whole tweet.
This data is recommended for use as training data **only**, not as evaluation data.
For more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf
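The vote-constraint described above, keeping a tweet only when the two taggers agree on every token, can be sketched as follows (a minimal illustration; the function and variable names are ours, not from the release):

```python
from typing import List, Optional


def vote_constrained(tags_ark: List[str], tags_tpos: List[str]) -> Optional[List[str]]:
    # Keep the tweet's tagging only when the two taggers' outputs are
    # completely compatible over the whole tweet; otherwise discard it.
    if tags_ark == tags_tpos:
        return tags_ark
    return None


# Agreeing taggers: the tweet is kept.
kept = vote_constrained(["RB", "JJ"], ["RB", "JJ"])
# Disagreeing taggers: the tweet is discarded.
dropped = vote_constrained(["RB", "JJ"], ["RB", "VB"])
```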
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English, non-region-specific. `bcp47:en`
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### twitter_pos_vcb
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
```
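As a minimal illustration of the field layout (hypothetical tokens with string-form tags for readability; real rows carry integer label indices):

```python
# Hypothetical example row, not drawn from the dataset.
row = {
    "id": "0001",
    "tokens": ["so", "happy", "to", "be", "home"],
    "pos_tags": ["RB", "JJ", "TO", "VB", "NN"],
}

# Pair each token with its tag for inspection.
pairs = list(zip(row["tokens"], row["pos_tags"]))
```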
### Data Splits
| name |tokens|sentences|
|---------|----:|---------:|
|twitter-pos-vcb|1 543 126| 159 492|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Creative Commons Attribution 4.0 (CC-BY)
### Citation Information
```
@inproceedings{derczynski2013twitter,
title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},
pages={198--206},
year={2013}
}
```
### Contributions
Author uploaded ([@leondz](https://github.com/leondz)) |
Technoculture/riddle_sense | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 829501
num_examples: 3510
- name: validation
num_bytes: 239903
num_examples: 1021
- name: test
num_bytes: 249470
num_examples: 1184
download_size: 651507
dataset_size: 1318874
task_categories:
- question-answering
language:
- en
tags:
- reasoning
pretty_name: Riddle Sen
size_categories:
- 1K<n<10K
---
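The `instruction`/`input`/`output` features above follow the Alpaca schema. A sketch of the standard Alpaca prompt template such records are typically rendered into (assumed; the exact template used for this conversion is not documented in the card):

```python
# Standard Alpaca template (assumed, not confirmed by this card).
def alpaca_prompt(instruction: str, input_text: str, output: str) -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            f"### Response:\n{output}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{output}"
    )
```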
The [riddle_sense](https://huggingface.co/datasets/riddle_sense) dataset formatted into an Alpaca-style dataset for instruction-tuning LLMs for reasoning capabilities. |
polinaeterna/yet_another_test | ---
dataset_info:
features:
- name: x
dtype: int64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 1600000
num_examples: 100000
- name: test
num_bytes: 112000
num_examples: 7000
download_size: 1192989
dataset_size: 1712000
builder_config:
data_dir: data
---
# Dataset Card for "yet_another_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_fblgit__UNA-SimpleSmaug-34b-v1beta | ---
pretty_name: Evaluation run of fblgit/UNA-SimpleSmaug-34b-v1beta
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [fblgit/UNA-SimpleSmaug-34b-v1beta](https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fblgit__UNA-SimpleSmaug-34b-v1beta\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-09T14:36:13.989348](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNA-SimpleSmaug-34b-v1beta/blob/main/results_2024-02-09T14-36-13.989348.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7649553475572979,\n\
\ \"acc_stderr\": 0.02829491282350785,\n \"acc_norm\": 0.7681713551647662,\n\
\ \"acc_norm_stderr\": 0.028841138819719683,\n \"mc1\": 0.5299877600979193,\n\
\ \"mc1_stderr\": 0.017471992091697534,\n \"mc2\": 0.7016557407771556,\n\
\ \"mc2_stderr\": 0.014224339474805845\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7192832764505119,\n \"acc_stderr\": 0.013131238126975583,\n\
\ \"acc_norm\": 0.7457337883959044,\n \"acc_norm_stderr\": 0.012724999945157736\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6709818761202948,\n\
\ \"acc_stderr\": 0.004688963175758129,\n \"acc_norm\": 0.8673571001792472,\n\
\ \"acc_norm_stderr\": 0.003384951803213472\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.03785714465066653,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.03785714465066653\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.881578947368421,\n \"acc_stderr\": 0.02629399585547494,\n\
\ \"acc_norm\": 0.881578947368421,\n \"acc_norm_stderr\": 0.02629399585547494\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.78,\n\
\ \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \
\ \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.024618298195866514,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.024618298195866514\n \
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9027777777777778,\n\
\ \"acc_stderr\": 0.024774516250440182,\n \"acc_norm\": 0.9027777777777778,\n\
\ \"acc_norm_stderr\": 0.024774516250440182\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.59,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\": 0.59,\n\
\ \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7225433526011561,\n\
\ \"acc_stderr\": 0.034140140070440354,\n \"acc_norm\": 0.7225433526011561,\n\
\ \"acc_norm_stderr\": 0.034140140070440354\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5392156862745098,\n \"acc_stderr\": 0.04959859966384181,\n\
\ \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.04959859966384181\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.774468085106383,\n \"acc_stderr\": 0.027321078417387533,\n\
\ \"acc_norm\": 0.774468085106383,\n \"acc_norm_stderr\": 0.027321078417387533\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5964912280701754,\n\
\ \"acc_stderr\": 0.04615186962583707,\n \"acc_norm\": 0.5964912280701754,\n\
\ \"acc_norm_stderr\": 0.04615186962583707\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.7586206896551724,\n \"acc_stderr\": 0.03565998174135302,\n\
\ \"acc_norm\": 0.7586206896551724,\n \"acc_norm_stderr\": 0.03565998174135302\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.7380952380952381,\n \"acc_stderr\": 0.02264421261552521,\n \"\
acc_norm\": 0.7380952380952381,\n \"acc_norm_stderr\": 0.02264421261552521\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5476190476190477,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.5476190476190477,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9064516129032258,\n\
\ \"acc_stderr\": 0.016565754668270982,\n \"acc_norm\": 0.9064516129032258,\n\
\ \"acc_norm_stderr\": 0.016565754668270982\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6847290640394089,\n \"acc_stderr\": 0.03269080871970186,\n\
\ \"acc_norm\": 0.6847290640394089,\n \"acc_norm_stderr\": 0.03269080871970186\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\"\
: 0.77,\n \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8666666666666667,\n \"acc_stderr\": 0.026544435312706467,\n\
\ \"acc_norm\": 0.8666666666666667,\n \"acc_norm_stderr\": 0.026544435312706467\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9242424242424242,\n \"acc_stderr\": 0.018852670234993093,\n \"\
acc_norm\": 0.9242424242424242,\n \"acc_norm_stderr\": 0.018852670234993093\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9792746113989638,\n \"acc_stderr\": 0.010281417011909025,\n\
\ \"acc_norm\": 0.9792746113989638,\n \"acc_norm_stderr\": 0.010281417011909025\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8153846153846154,\n \"acc_stderr\": 0.019671632413100295,\n\
\ \"acc_norm\": 0.8153846153846154,\n \"acc_norm_stderr\": 0.019671632413100295\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.46296296296296297,\n \"acc_stderr\": 0.030401786406101507,\n \
\ \"acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.030401786406101507\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.023005459446673936,\n\
\ \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.023005459446673936\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.5165562913907285,\n \"acc_stderr\": 0.04080244185628972,\n \"\
acc_norm\": 0.5165562913907285,\n \"acc_norm_stderr\": 0.04080244185628972\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9247706422018349,\n \"acc_stderr\": 0.011308662537571727,\n \"\
acc_norm\": 0.9247706422018349,\n \"acc_norm_stderr\": 0.011308662537571727\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6574074074074074,\n \"acc_stderr\": 0.032365852526021574,\n \"\
acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.032365852526021574\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9264705882352942,\n \"acc_stderr\": 0.018318855850089678,\n \"\
acc_norm\": 0.9264705882352942,\n \"acc_norm_stderr\": 0.018318855850089678\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9113924050632911,\n \"acc_stderr\": 0.018498315206865384,\n \
\ \"acc_norm\": 0.9113924050632911,\n \"acc_norm_stderr\": 0.018498315206865384\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.820627802690583,\n\
\ \"acc_stderr\": 0.0257498195691928,\n \"acc_norm\": 0.820627802690583,\n\
\ \"acc_norm_stderr\": 0.0257498195691928\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8625954198473282,\n \"acc_stderr\": 0.030194823996804475,\n\
\ \"acc_norm\": 0.8625954198473282,\n \"acc_norm_stderr\": 0.030194823996804475\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035216,\n \"\
acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035216\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8981481481481481,\n\
\ \"acc_stderr\": 0.02923927267563275,\n \"acc_norm\": 0.8981481481481481,\n\
\ \"acc_norm_stderr\": 0.02923927267563275\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8711656441717791,\n \"acc_stderr\": 0.026321383198783674,\n\
\ \"acc_norm\": 0.8711656441717791,\n \"acc_norm_stderr\": 0.026321383198783674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5625,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.5625,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.033932957297610096,\n\
\ \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.033932957297610096\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9444444444444444,\n\
\ \"acc_stderr\": 0.01500631280644693,\n \"acc_norm\": 0.9444444444444444,\n\
\ \"acc_norm_stderr\": 0.01500631280644693\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9169859514687101,\n\
\ \"acc_stderr\": 0.009866287394639541,\n \"acc_norm\": 0.9169859514687101,\n\
\ \"acc_norm_stderr\": 0.009866287394639541\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8265895953757225,\n \"acc_stderr\": 0.02038322955113502,\n\
\ \"acc_norm\": 0.8265895953757225,\n \"acc_norm_stderr\": 0.02038322955113502\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7921787709497207,\n\
\ \"acc_stderr\": 0.01357024832508134,\n \"acc_norm\": 0.7921787709497207,\n\
\ \"acc_norm_stderr\": 0.01357024832508134\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8594771241830066,\n \"acc_stderr\": 0.019899435463539946,\n\
\ \"acc_norm\": 0.8594771241830066,\n \"acc_norm_stderr\": 0.019899435463539946\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8038585209003215,\n\
\ \"acc_stderr\": 0.022552447780478033,\n \"acc_norm\": 0.8038585209003215,\n\
\ \"acc_norm_stderr\": 0.022552447780478033\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8703703703703703,\n \"acc_stderr\": 0.018689725721062072,\n\
\ \"acc_norm\": 0.8703703703703703,\n \"acc_norm_stderr\": 0.018689725721062072\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6347517730496454,\n \"acc_stderr\": 0.02872386385328127,\n \
\ \"acc_norm\": 0.6347517730496454,\n \"acc_norm_stderr\": 0.02872386385328127\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5951760104302477,\n\
\ \"acc_stderr\": 0.012536743830953986,\n \"acc_norm\": 0.5951760104302477,\n\
\ \"acc_norm_stderr\": 0.012536743830953986\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.023157468308559345,\n\
\ \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.023157468308559345\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8186274509803921,\n \"acc_stderr\": 0.015588643495370463,\n \
\ \"acc_norm\": 0.8186274509803921,\n \"acc_norm_stderr\": 0.015588643495370463\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8489795918367347,\n \"acc_stderr\": 0.022923004094736847,\n\
\ \"acc_norm\": 0.8489795918367347,\n \"acc_norm_stderr\": 0.022923004094736847\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9104477611940298,\n\
\ \"acc_stderr\": 0.02019067053502792,\n \"acc_norm\": 0.9104477611940298,\n\
\ \"acc_norm_stderr\": 0.02019067053502792\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.02876234912646613,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.02876234912646613\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5783132530120482,\n\
\ \"acc_stderr\": 0.03844453181770917,\n \"acc_norm\": 0.5783132530120482,\n\
\ \"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8596491228070176,\n \"acc_stderr\": 0.026640582539133196,\n\
\ \"acc_norm\": 0.8596491228070176,\n \"acc_norm_stderr\": 0.026640582539133196\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5299877600979193,\n\
\ \"mc1_stderr\": 0.017471992091697534,\n \"mc2\": 0.7016557407771556,\n\
\ \"mc2_stderr\": 0.014224339474805845\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8382004735595896,\n \"acc_stderr\": 0.010350128010292404\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7247915087187263,\n \
\ \"acc_stderr\": 0.012302114305862656\n }\n}\n```"
repo_url: https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|arc:challenge|25_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|gsm8k|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hellaswag|10_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T14-36-13.989348.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T14-36-13.989348.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- '**/details_harness|winogrande|5_2024-02-09T14-36-13.989348.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-09T14-36-13.989348.parquet'
- config_name: results
data_files:
- split: 2024_02_09T14_36_13.989348
path:
- results_2024-02-09T14-36-13.989348.parquet
- split: latest
path:
- results_2024-02-09T14-36-13.989348.parquet
---
# Dataset Card for Evaluation run of fblgit/UNA-SimpleSmaug-34b-v1beta
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [fblgit/UNA-SimpleSmaug-34b-v1beta](https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fblgit__UNA-SimpleSmaug-34b-v1beta",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-09T14:36:13.989348](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNA-SimpleSmaug-34b-v1beta/blob/main/results_2024-02-09T14-36-13.989348.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7649553475572979,
"acc_stderr": 0.02829491282350785,
"acc_norm": 0.7681713551647662,
"acc_norm_stderr": 0.028841138819719683,
"mc1": 0.5299877600979193,
"mc1_stderr": 0.017471992091697534,
"mc2": 0.7016557407771556,
"mc2_stderr": 0.014224339474805845
},
"harness|arc:challenge|25": {
"acc": 0.7192832764505119,
"acc_stderr": 0.013131238126975583,
"acc_norm": 0.7457337883959044,
"acc_norm_stderr": 0.012724999945157736
},
"harness|hellaswag|10": {
"acc": 0.6709818761202948,
"acc_stderr": 0.004688963175758129,
"acc_norm": 0.8673571001792472,
"acc_norm_stderr": 0.003384951803213472
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.03785714465066653,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.03785714465066653
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.881578947368421,
"acc_stderr": 0.02629399585547494,
"acc_norm": 0.881578947368421,
"acc_norm_stderr": 0.02629399585547494
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932261,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932261
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8,
"acc_stderr": 0.024618298195866514,
"acc_norm": 0.8,
"acc_norm_stderr": 0.024618298195866514
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9027777777777778,
"acc_stderr": 0.024774516250440182,
"acc_norm": 0.9027777777777778,
"acc_norm_stderr": 0.024774516250440182
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.034140140070440354,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.034140140070440354
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5392156862745098,
"acc_stderr": 0.04959859966384181,
"acc_norm": 0.5392156862745098,
"acc_norm_stderr": 0.04959859966384181
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.774468085106383,
"acc_stderr": 0.027321078417387533,
"acc_norm": 0.774468085106383,
"acc_norm_stderr": 0.027321078417387533
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5964912280701754,
"acc_stderr": 0.04615186962583707,
"acc_norm": 0.5964912280701754,
"acc_norm_stderr": 0.04615186962583707
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7586206896551724,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.7586206896551724,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.7380952380952381,
"acc_stderr": 0.02264421261552521,
"acc_norm": 0.7380952380952381,
"acc_norm_stderr": 0.02264421261552521
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5476190476190477,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.5476190476190477,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9064516129032258,
"acc_stderr": 0.016565754668270982,
"acc_norm": 0.9064516129032258,
"acc_norm_stderr": 0.016565754668270982
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6847290640394089,
"acc_stderr": 0.03269080871970186,
"acc_norm": 0.6847290640394089,
"acc_norm_stderr": 0.03269080871970186
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165044,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165044
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8666666666666667,
"acc_stderr": 0.026544435312706467,
"acc_norm": 0.8666666666666667,
"acc_norm_stderr": 0.026544435312706467
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9242424242424242,
"acc_stderr": 0.018852670234993093,
"acc_norm": 0.9242424242424242,
"acc_norm_stderr": 0.018852670234993093
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9792746113989638,
"acc_stderr": 0.010281417011909025,
"acc_norm": 0.9792746113989638,
"acc_norm_stderr": 0.010281417011909025
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8153846153846154,
"acc_stderr": 0.019671632413100295,
"acc_norm": 0.8153846153846154,
"acc_norm_stderr": 0.019671632413100295
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.030401786406101507,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.030401786406101507
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.023005459446673936,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.023005459446673936
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5165562913907285,
"acc_stderr": 0.04080244185628972,
"acc_norm": 0.5165562913907285,
"acc_norm_stderr": 0.04080244185628972
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9247706422018349,
"acc_stderr": 0.011308662537571727,
"acc_norm": 0.9247706422018349,
"acc_norm_stderr": 0.011308662537571727
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.032365852526021574,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.032365852526021574
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9264705882352942,
"acc_stderr": 0.018318855850089678,
"acc_norm": 0.9264705882352942,
"acc_norm_stderr": 0.018318855850089678
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9113924050632911,
"acc_stderr": 0.018498315206865384,
"acc_norm": 0.9113924050632911,
"acc_norm_stderr": 0.018498315206865384
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.820627802690583,
"acc_stderr": 0.0257498195691928,
"acc_norm": 0.820627802690583,
"acc_norm_stderr": 0.0257498195691928
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8625954198473282,
"acc_stderr": 0.030194823996804475,
"acc_norm": 0.8625954198473282,
"acc_norm_stderr": 0.030194823996804475
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.030083098716035216,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.030083098716035216
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8981481481481481,
"acc_stderr": 0.02923927267563275,
"acc_norm": 0.8981481481481481,
"acc_norm_stderr": 0.02923927267563275
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8711656441717791,
"acc_stderr": 0.026321383198783674,
"acc_norm": 0.8711656441717791,
"acc_norm_stderr": 0.026321383198783674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5625,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.8640776699029126,
"acc_stderr": 0.033932957297610096,
"acc_norm": 0.8640776699029126,
"acc_norm_stderr": 0.033932957297610096
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9444444444444444,
"acc_stderr": 0.01500631280644693,
"acc_norm": 0.9444444444444444,
"acc_norm_stderr": 0.01500631280644693
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9169859514687101,
"acc_stderr": 0.009866287394639541,
"acc_norm": 0.9169859514687101,
"acc_norm_stderr": 0.009866287394639541
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8265895953757225,
"acc_stderr": 0.02038322955113502,
"acc_norm": 0.8265895953757225,
"acc_norm_stderr": 0.02038322955113502
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7921787709497207,
"acc_stderr": 0.01357024832508134,
"acc_norm": 0.7921787709497207,
"acc_norm_stderr": 0.01357024832508134
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8594771241830066,
"acc_stderr": 0.019899435463539946,
"acc_norm": 0.8594771241830066,
"acc_norm_stderr": 0.019899435463539946
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8038585209003215,
"acc_stderr": 0.022552447780478033,
"acc_norm": 0.8038585209003215,
"acc_norm_stderr": 0.022552447780478033
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8703703703703703,
"acc_stderr": 0.018689725721062072,
"acc_norm": 0.8703703703703703,
"acc_norm_stderr": 0.018689725721062072
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6347517730496454,
"acc_stderr": 0.02872386385328127,
"acc_norm": 0.6347517730496454,
"acc_norm_stderr": 0.02872386385328127
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5951760104302477,
"acc_stderr": 0.012536743830953986,
"acc_norm": 0.5951760104302477,
"acc_norm_stderr": 0.012536743830953986
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.023157468308559345,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.023157468308559345
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8186274509803921,
"acc_stderr": 0.015588643495370463,
"acc_norm": 0.8186274509803921,
"acc_norm_stderr": 0.015588643495370463
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8489795918367347,
"acc_stderr": 0.022923004094736847,
"acc_norm": 0.8489795918367347,
"acc_norm_stderr": 0.022923004094736847
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.9104477611940298,
"acc_stderr": 0.02019067053502792,
"acc_norm": 0.9104477611940298,
"acc_norm_stderr": 0.02019067053502792
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.02876234912646613,
"acc_norm": 0.91,
"acc_norm_stderr": 0.02876234912646613
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5783132530120482,
"acc_stderr": 0.03844453181770917,
"acc_norm": 0.5783132530120482,
"acc_norm_stderr": 0.03844453181770917
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8596491228070176,
"acc_stderr": 0.026640582539133196,
"acc_norm": 0.8596491228070176,
"acc_norm_stderr": 0.026640582539133196
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5299877600979193,
"mc1_stderr": 0.017471992091697534,
"mc2": 0.7016557407771556,
"mc2_stderr": 0.014224339474805845
},
"harness|winogrande|5": {
"acc": 0.8382004735595896,
"acc_stderr": 0.010350128010292404
},
"harness|gsm8k|5": {
"acc": 0.7247915087187263,
"acc_stderr": 0.012302114305862656
}
}
```
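For orientation, the top-level `"all"` block is consistent with a plain mean of the per-task accuracies. The sketch below illustrates that aggregation on three task entries copied from the JSON above; it is not the leaderboard's own code, and averaging only three tasks will of course not reproduce the full `"all"` value:

```python
# Illustrative sketch: an aggregate accuracy like the "all" block can be
# computed as a plain mean over per-task scores. Only three tasks from the
# JSON above are reproduced here.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.49},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.7407407407407407},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.881578947368421},
}

accs = [scores["acc"] for scores in results.values()]
mean_acc = sum(accs) / len(accs)
print(f"{mean_acc:.4f}")
```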
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
antonixe/river_source | ---
task_categories:
- question-answering
tags:
- art
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
cmani/celeb-identities | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Dua_Lipa
'1': Emma_Watson
'2': Kamal_Hassan
'3': Kim_Kardashian
'4': Morgan_Freeman
'5': Rajanikanth
'6': Robert_Downey_Jr
'7': Salma_Hayek
'8': Tom_Cruise
splits:
- name: train
num_bytes: 2063036.0
num_examples: 33
download_size: 2061066
dataset_size: 2063036.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WforGodot/addtrain345 | ---
license: openrail
---
|
louisbrulenaudet/code-famille-aide-sociale | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code de la famille et de l'aide sociale
source_datasets:
- original
pretty_name: Code de la famille et de l'aide sociale
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code de la famille et de l'aide sociale, non-instruct (2024-04-15)
This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for legal practice.
Fine-tuning is the process of adapting a pre-trained model to perform specific tasks or cater to particular domains. It involves adjusting the model's parameters through a further round of training on task-specific or domain-specific data. While conventional fine-tuning strategies involve supervised learning with labeled data, instruction-based fine-tuning introduces a more structured and interpretable approach.
Instruction-based fine-tuning leverages the power of human-provided instructions to guide the model's behavior. These instructions can be in the form of text prompts, prompts with explicit task descriptions, or a combination of both. This approach allows for a more controlled and context-aware interaction with the LLM, making it adaptable to a multitude of specialized tasks.
Instruction-based fine-tuning significantly enhances the performance of LLMs in the following ways:
- Task-Specific Adaptation: LLMs, when fine-tuned with specific instructions, exhibit remarkable adaptability to diverse tasks. They can switch seamlessly between translation, summarization, and question-answering, guided by the provided instructions.
- Reduced Ambiguity: Traditional LLMs might generate ambiguous or contextually inappropriate responses. Instruction-based fine-tuning allows for a clearer and more context-aware generation, reducing the likelihood of nonsensical outputs.
- Efficient Knowledge Transfer: Instructions can encapsulate domain-specific knowledge, enabling LLMs to benefit from expert guidance. This knowledge transfer is particularly valuable in fields like tax practice, law, medicine, and more.
- Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
- Adaptive Behavior: LLMs, post instruction-based fine-tuning, exhibit adaptive behavior that is responsive to both explicit task descriptions and implicit cues within the provided text.
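As a concrete illustration, a record of this kind (instruction, input, output) can be folded into a single training prompt. The `format_prompt` helper and the Alpaca-style template below are illustrative conventions, not tooling shipped with this dataset:

```python
def format_prompt(record: dict) -> str:
    """Assemble a training prompt from one record.

    The Alpaca-style template used here is an illustrative choice,
    not something prescribed by this dataset.
    """
    if record.get("input"):
        return (
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            f"### Response:\n{record['output']}"
        )
    return (
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['output']}"
    )


record = {
    "instruction": "Compose l'intégralité de l'article sous forme écrite.",
    "input": "Code de la famille et de l'aide sociale, art. 1",
    "output": "Texte de l'article...",
}
print(format_prompt(record))
```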
## Concurrent reading of the LegalKit
To use all the legal data published on LegalKit, you can use this code snippet:
```python
# -*- coding: utf-8 -*-
import concurrent.futures
import logging
import os

import datasets
from tqdm.notebook import tqdm
def dataset_loader(
name:str,
streaming:bool=True
) -> datasets.Dataset:
"""
Helper function to load a single dataset in parallel.
Parameters
----------
name : str
Name of the dataset to be loaded.
streaming : bool, optional
Determines if datasets are streamed. Default is True.
Returns
-------
dataset : datasets.Dataset
Loaded dataset object.
Raises
------
Exception
If an error occurs during dataset loading.
"""
try:
return datasets.load_dataset(
name,
split="train",
streaming=streaming
)
except Exception as exc:
logging.error(f"Error loading dataset {name}: {exc}")
return None
def load_datasets(
req:list,
streaming:bool=True
) -> list:
"""
Downloads datasets specified in a list and creates a list of loaded datasets.
Parameters
----------
req : list
A list containing the names of datasets to be downloaded.
streaming : bool, optional
Determines if datasets are streamed. Default is True.
Returns
-------
datasets_list : list
A list containing loaded datasets as per the requested names provided in 'req'.
Raises
------
Exception
If an error occurs during dataset loading or processing.
Examples
--------
>>> datasets = load_datasets(["dataset1", "dataset2"], streaming=False)
"""
datasets_list = []
with concurrent.futures.ThreadPoolExecutor() as executor:
future_to_dataset = {executor.submit(dataset_loader, name): name for name in req}
for future in tqdm(concurrent.futures.as_completed(future_to_dataset), total=len(req)):
name = future_to_dataset[future]
try:
dataset = future.result()
if dataset:
datasets_list.append(dataset)
except Exception as exc:
logging.error(f"Error processing dataset {name}: {exc}")
return datasets_list
req = [
"louisbrulenaudet/code-artisanat",
"louisbrulenaudet/code-action-sociale-familles",
# ...
]
datasets_list = load_datasets(
req=req,
streaming=True
)
dataset = datasets.concatenate_datasets(
datasets_list
)
```
## Dataset generation
This JSON file is a list of dictionaries, each containing the following fields:
- `instruction`: `string`, presenting the instruction linked to the element.
- `input`: `string`, signifying the input details for the element.
- `output`: `string`, indicating the output information for the element.
- `start`: `string`, the date of entry into force of the article.
- `expiration`: `string`, the date of expiration of the article.
- `num`: `string`, the id of the article.
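A record can be filtered by its validity window using the `start` and `expiration` fields. The `in_force` helper below is ours, and both the ISO `YYYY-MM-DD` format and the `"2999-01-01"` open-ended expiration value are assumptions about the string encoding, made for the sake of the sketch:

```python
from datetime import date


def in_force(record: dict, on: date) -> bool:
    """Illustrative helper (not shipped with the dataset): check whether an
    article is in force on a given date, using `start` and `expiration`.
    Date strings are assumed to be ISO `YYYY-MM-DD`; an open-ended
    expiration is assumed to be encoded as "2999-01-01".
    """
    start = date.fromisoformat(record["start"])
    expiration = date.fromisoformat(record["expiration"])
    return start <= on < expiration


article = {"num": "L1", "start": "2024-04-15", "expiration": "2999-01-01"}
print(in_force(article, date(2024, 6, 1)))
```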
We used the following list of instructions for generating the dataset:
```python
instructions = [
"Compose l'intégralité de l'article sous forme écrite.",
"Écris la totalité du contenu de l'article.",
"Formule la totalité du texte présent dans l'article.",
"Produis l'intégralité de l'article en écriture.",
"Développe l'article dans son ensemble par écrit.",
"Génère l'ensemble du texte contenu dans l'article.",
"Formule le contenu intégral de l'article en entier.",
"Rédige la totalité du texte de l'article en entier.",
"Compose l'intégralité du contenu textuel de l'article.",
"Rédige l'ensemble du texte qui constitue l'article.",
"Formule l'article entier dans son contenu écrit.",
"Composez l'intégralité de l'article sous forme écrite.",
"Écrivez la totalité du contenu de l'article.",
"Formulez la totalité du texte présent dans l'article.",
"Développez l'article dans son ensemble par écrit.",
"Générez l'ensemble du texte contenu dans l'article.",
"Formulez le contenu intégral de l'article en entier.",
"Rédigez la totalité du texte de l'article en entier.",
"Composez l'intégralité du contenu textuel de l'article.",
"Écrivez l'article dans son intégralité en termes de texte.",
"Rédigez l'ensemble du texte qui constitue l'article.",
"Formulez l'article entier dans son contenu écrit.",
"Composer l'intégralité de l'article sous forme écrite.",
"Écrire la totalité du contenu de l'article.",
"Formuler la totalité du texte présent dans l'article.",
"Produire l'intégralité de l'article en écriture.",
"Développer l'article dans son ensemble par écrit.",
"Générer l'ensemble du texte contenu dans l'article.",
"Formuler le contenu intégral de l'article en entier.",
"Rédiger la totalité du texte de l'article en entier.",
"Composer l'intégralité du contenu textuel de l'article.",
"Rédiger l'ensemble du texte qui constitue l'article.",
"Formuler l'article entier dans son contenu écrit.",
"Quelles sont les dispositions de l'article ?",
"Quelles dispositions sont incluses dans l'article ?",
"Quelles sont les dispositions énoncées dans l'article ?",
"Quel est le texte intégral de l'article ?",
"Quelle est la lettre de l'article ?"
]
```
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
distil-whisper/tedlium-long-form | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: string
splits:
- name: validation
num_bytes: 180166870.0
num_examples: 8
- name: test
num_bytes: 285107770.0
num_examples: 11
download_size: 284926490
dataset_size: 465274640.0
---
# Dataset Card for "tedlium-long-form"
To create the dataset:
```python
import os
import numpy as np
from datasets import load_dataset, DatasetDict, Dataset, Audio
import soundfile as sf
from tqdm import tqdm
tedlium = load_dataset("LIUM/tedlium", "release3")
merged_dataset = DatasetDict()
validation_speaker_ids = [
"Al_Gore",
"Barry_Schwartz",
"Blaise_Agueray_Arcas",
"Brian_Cox",
"Craig_Venter",
"David_Merrill",
"Elizabeth_Gilbert",
"Wade_Davis",
]
validation_dataset_merged = {speaker_id: {"audio": [], "text": ""} for speaker_id in validation_speaker_ids}
test_speaker_ids = [
"AimeeMullins",
"BillGates",
"DanBarber",
"DanBarber_2010_S103",
"DanielKahneman",
"EricMead_2009P_EricMead",
"GaryFlake",
"JamesCameron",
"JaneMcGonigal",
"MichaelSpecter",
"RobertGupta",
]
test_dataset_merged = {speaker_id: {"audio": [], "text": ""} for speaker_id in test_speaker_ids}
for split, dataset in zip(["validation", "test"], [validation_dataset_merged, test_dataset_merged]):
    sampling_rate = tedlium[split].features["audio"].sampling_rate
    for sample in tqdm(tedlium[split]):
        if sample["speaker_id"] in dataset:
            dataset[sample["speaker_id"]]["audio"].extend(sample["audio"]["array"])
            dataset[sample["speaker_id"]]["text"] += " " + sample["text"]
    audio_paths = []
    os.makedirs(split, exist_ok=True)
    for speaker in dataset:
        path = os.path.join(split, f"{speaker}-merged.wav")
        audio_paths.append(path)
        sf.write(path, np.asarray(dataset[speaker]["audio"]), samplerate=sampling_rate)
    merged_dataset[split] = Dataset.from_dict({"audio": audio_paths}).cast_column("audio", Audio())
    # remove spaced apostrophes (e.g. it 's -> it's)
    merged_dataset[split] = merged_dataset[split].add_column("text", [dataset[speaker]["text"].replace(" '", "'") for speaker in dataset])
    merged_dataset[split] = merged_dataset[split].add_column("speaker_id", list(dataset.keys()))
``` |
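A toy, self-contained sketch of the per-speaker merging and apostrophe cleanup above (no audio I/O and no `datasets` dependency; the sample values are made up):

```python
# Minimal illustration of the merging logic used above, on fake samples.
samples = [
    {"speaker_id": "A", "audio": {"array": [0.1, 0.2]}, "text": "hello"},
    {"speaker_id": "A", "audio": {"array": [0.3]}, "text": "it 's fine"},
    {"speaker_id": "B", "audio": {"array": [0.4]}, "text": "world"},
]

# One accumulator per speaker, as in the real script.
merged = {speaker: {"audio": [], "text": ""} for speaker in ("A", "B")}
for sample in samples:
    merged[sample["speaker_id"]]["audio"].extend(sample["audio"]["array"])
    merged[sample["speaker_id"]]["text"] += " " + sample["text"]

# Remove spaced apostrophes (e.g. "it 's" -> "it's").
texts = [merged[speaker]["text"].replace(" '", "'") for speaker in merged]
```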
Partha117/apache_bugs_with_content | ---
dataset_info:
features:
- name: issue_id
dtype: int64
- name: title
dtype: string
- name: body
dtype: string
- name: status
dtype: string
- name: after_fix_sha
dtype: string
- name: project_name
dtype: string
- name: repo_url
dtype: string
- name: repo_name
dtype: string
- name: language
dtype: string
- name: issue_url
dtype: 'null'
- name: before_fix_sha
dtype: 'null'
- name: pull_url
dtype: 'null'
- name: commit_datetime
dtype: timestamp[us, tz=UTC]
- name: report_datetime
dtype: timestamp[us, tz=UTC]
- name: updated_file
dtype: string
- name: file_content
dtype: string
splits:
- name: train
num_bytes: 767059150
num_examples: 86060
download_size: 200457526
dataset_size: 767059150
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sayakpaul/pokemon-blip-original-version | ---
license: cc-by-nc-sa-4.0
---
Dataset homepage: https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
The purpose of hosting the archive is to play with the original files. The archive was generated using [this Colab Notebook](https://colab.research.google.com/gist/sayakpaul/98f9ff3bd258a5c1107898422447b581/scratchpad.ipynb). |
AgentWaller/dutch-oasst1-qlora-format | ---
license: artistic-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 11250127
num_examples: 9843
- name: validation
num_bytes: 583463
num_examples: 517
download_size: 6602619
dataset_size: 11833590
---
|
Mxode/Chinese-Classics-Partial | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- classics
size_categories:
- 100K<n<1M
---
Over 200 plain **.txt files** related to Chinese classics, found by chance and lightly cleaned: some noise and blank lines were removed.
A sample is shown below:
```
古训《增广贤文》
昔时贤文,诲汝谆谆,集韵增文,多见多闻。
观今宜鉴古,无古不成今。
知己知彼,将心比心。
酒逢知己饮,诗向会人吟。
相识满天下,知心能几人。
相逢好似初相识,到老终无怨恨心。
近水知鱼性,近山识鸟音。
易涨易退山溪水,易反易覆小人心。
运去金成铁,时来铁似金,读书须用意,一字值千金。
``` |
CATIE-AQ/newsquadfr_fr_prompt_context_generation_with_answer | ---
language:
- fr
license: cc-by-nc-sa-4.0
size_categories:
- 100K<n<1M
task_categories:
- text-generation
tags:
- DFP
- french prompts
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
source_datasets:
- newsquadfr
---
# newsquadfr_fr_prompt_context_generation_with_answer
## Summary
**newsquadfr_fr_prompt_context_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **101,040** rows that can be used for a context-generation (with answer) task.
The original data (without prompts) comes from the dataset [newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing each prompt in three forms: the infinitive, the informal tutoiement, and the formal vouvoiement.
```
'Étant donné la réponse "'+ answer+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écris un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", génère un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", crée un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecris un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecrivez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédiger un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédige un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédigez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Génère un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Crée un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
```
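As a minimal sketch of how the input column is built (the function name and example answer below are illustrative assumptions, not the actual build script), each template simply concatenates the answer string:

```python
def make_inputs(answer):
    """Apply two of the 24 templates above to one answer string."""
    templates = [
        'Étant donné la réponse "{}", écrire un texte explicatif.\nTexte : ',
        'Rédiger un texte comme contexte de la réponse "{}" \nTexte : ',
    ]
    return [t.format(answer) for t in templates]

inputs = make_inputs("Paris")
```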
# Splits
- `train` with 79,200 samples
- `valid` with 21,800 samples
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/newsquadfr_fr_prompt_context_generation_with_answer")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/lincoln/newsquadfr
## This Dataset
```
@misc{centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
    author    = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
    title     = { DFP (Revision 1d24c09) },
    year      = 2023,
    url       = { https://huggingface.co/datasets/CATIE-AQ/DFP },
    doi       = { 10.57967/hf/1200 },
    publisher = { Hugging Face }
}
```
## License
CC BY-NC-SA 4.0 |
bigbio/n2c2_2018_track1 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2018 Selection Criteria
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for n2c2 2018 Selection Criteria
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TXTCLASS
Track 1 of the 2018 National NLP Clinical Challenges shared tasks focused
on identifying which patients in a corpus of longitudinal medical records
meet and do not meet identified selection criteria.
This shared task aimed to determine whether NLP systems could be trained to identify if patients met or did not meet
a set of selection criteria taken from real clinical trials. The selected criteria required measurement detection (
“Any HbA1c value between 6.5 and 9.5%”), inference (“Use of aspirin to prevent myocardial infarction”),
temporal reasoning (“Diagnosis of ketoacidosis in the past year”), and expert judgment to assess (“Major
diabetes-related complication”). For the corpus, we used the dataset of American English, longitudinal clinical
narratives from the 2014 i2b2/UTHealth shared task 4.
The final selected 13 selection criteria are as follows:
1. DRUG-ABUSE: Drug abuse, current or past
2. ALCOHOL-ABUSE: Current alcohol use over weekly recommended limits
3. ENGLISH: Patient must speak English
4. MAKES-DECISIONS: Patient must make their own medical decisions
5. ABDOMINAL: History of intra-abdominal surgery, small or large intestine
resection, or small bowel obstruction.
6. MAJOR-DIABETES: Major diabetes-related complication. For the purposes of
this annotation, we define “major complication” (as opposed to “minor complication”)
as any of the following that are a result of (or strongly correlated with) uncontrolled diabetes:
a. Amputation
b. Kidney damage
c. Skin conditions
d. Retinopathy
e. nephropathy
f. neuropathy
7. ADVANCED-CAD: Advanced cardiovascular disease (CAD).
For the purposes of this annotation, we define “advanced” as having 2 or more of the following:
a. Taking 2 or more medications to treat CAD
b. History of myocardial infarction (MI)
c. Currently experiencing angina
d. Ischemia, past or present
8. MI-6MOS: MI in the past 6 months
9. KETO-1YR: Diagnosis of ketoacidosis in the past year
10. DIETSUPP-2MOS: Taken a dietary supplement (excluding vitamin D) in the past 2 months
11. ASP-FOR-MI: Use of aspirin to prevent MI
12. HBA1C: Any hemoglobin A1c (HbA1c) value between 6.5% and 9.5%
13. CREATININE: Serum creatinine > upper limit of normal
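For orientation, each patient record receives one met / not met decision per criterion, so a record's labels can be pictured as a simple map (a sketch only; the variable names below are assumptions, not the corpus's annotation schema):

```python
# The 13 selection criteria, one "met" / "not met" label each per patient.
CRITERIA = [
    "DRUG-ABUSE", "ALCOHOL-ABUSE", "ENGLISH", "MAKES-DECISIONS",
    "ABDOMINAL", "MAJOR-DIABETES", "ADVANCED-CAD", "MI-6MOS",
    "KETO-1YR", "DIETSUPP-2MOS", "ASP-FOR-MI", "HBA1C", "CREATININE",
]

# Hypothetical label map for one patient record.
patient_labels = {criterion: "not met" for criterion in CRITERIA}
patient_labels["ENGLISH"] = "met"
```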
The training set consists of 202 patient records with document-level annotations and 10 records with textual spans indicating the annotators' evidence for their annotations, while the test set contains 86 records.
Note:
* The inter-annotator average agreement is 84.9%.
* The whereabouts of the 10 records with textual evidence spans are unknown.
However, the author ran a simple script-based validation to check whether any of the tags contain any text
in the training set, and they do not, which confirms that at least the train and test sets do not
have any evidence tagged alongside the corresponding tags.
## Citation Information
```
@article{DBLP:journals/jamia/StubbsFSHU19,
author = {
Amber Stubbs and
Michele Filannino and
Ergin Soysal and
Samuel Henry and
Ozlem Uzuner
},
title = {Cohort selection for clinical trials: n2c2 2018 shared task track 1},
journal = {J. Am. Medical Informatics Assoc.},
volume = {26},
number = {11},
pages = {1163--1171},
year = {2019},
url = {https://doi.org/10.1093/jamia/ocz163},
doi = {10.1093/jamia/ocz163},
timestamp = {Mon, 15 Jun 2020 16:56:11 +0200},
biburl = {https://dblp.org/rec/journals/jamia/StubbsFSHU19.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
SkyWR/Wagner | ---
license: openrail
---
|
gimmaru/SetFit-sst5 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_text
dtype: string
splits:
- name: test
num_bytes: 128571
num_examples: 1000
download_size: 0
dataset_size: 128571
---
# Dataset Card for "SetFit-sst5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Note: This dataset was utilized for the evaluation of probability-based prompt selection techniques in the paper '[Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis](https://arxiv.org/abs/2305.14877)'. It differs from the actual benchmark dataset. |
CerebralAI/ActionRoutes | ---
dataset_info:
features:
- name: routes
sequence: string
- name: input
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1243126
num_examples: 5020
download_size: 474290
dataset_size: 1243126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713195079 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 17617
num_examples: 51
download_size: 17290
dataset_size: 17617
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713195079"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
allganize/fpb-ko-formatted | ---
dataset_info:
features:
- name: question
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 162381
num_examples: 944
download_size: 88251
dataset_size: 162381
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ | ---
pretty_name: Evaluation run of TheBloke/neural-chat-7B-v3-2-GPTQ
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheBloke/neural-chat-7B-v3-2-GPTQ](https://huggingface.co/TheBloke/neural-chat-7B-v3-2-GPTQ)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-11T00:12:21.907526](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ/blob/main/results_2023-12-11T00-12-21.907526.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6058481456466821,\n\
\ \"acc_stderr\": 0.03323160720607251,\n \"acc_norm\": 0.6077924426433228,\n\
\ \"acc_norm_stderr\": 0.033909992378155715,\n \"mc1\": 0.4541003671970624,\n\
\ \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.5979099902582387,\n\
\ \"mc2_stderr\": 0.01509977856693472\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6296928327645052,\n \"acc_stderr\": 0.01411129875167495,\n\
\ \"acc_norm\": 0.659556313993174,\n \"acc_norm_stderr\": 0.013847460518892978\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6360286795459071,\n\
\ \"acc_stderr\": 0.004801572028920794,\n \"acc_norm\": 0.8324039036048596,\n\
\ \"acc_norm_stderr\": 0.003727438786513393\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5407407407407407,\n\
\ \"acc_stderr\": 0.04304979692464241,\n \"acc_norm\": 0.5407407407407407,\n\
\ \"acc_norm_stderr\": 0.04304979692464241\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926605,\n\
\ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926605\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.660377358490566,\n \"acc_stderr\": 0.02914690474779833,\n\
\ \"acc_norm\": 0.660377358490566,\n \"acc_norm_stderr\": 0.02914690474779833\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n\
\ \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n\
\ \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n\
\ \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5722543352601156,\n\
\ \"acc_stderr\": 0.03772446857518027,\n \"acc_norm\": 0.5722543352601156,\n\
\ \"acc_norm_stderr\": 0.03772446857518027\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.49019607843137253,\n \"acc_stderr\": 0.04974229460422817,\n\
\ \"acc_norm\": 0.49019607843137253,\n \"acc_norm_stderr\": 0.04974229460422817\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n\
\ \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5148936170212766,\n \"acc_stderr\": 0.03267151848924777,\n\
\ \"acc_norm\": 0.5148936170212766,\n \"acc_norm_stderr\": 0.03267151848924777\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n\
\ \"acc_stderr\": 0.04657047260594964,\n \"acc_norm\": 0.4298245614035088,\n\
\ \"acc_norm_stderr\": 0.04657047260594964\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n\
\ \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3544973544973545,\n \"acc_stderr\": 0.024636830602842,\n \"acc_norm\"\
: 0.3544973544973545,\n \"acc_norm_stderr\": 0.024636830602842\n },\n\
\ \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n\
\ \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n\
\ \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7419354838709677,\n\
\ \"acc_stderr\": 0.02489246917246283,\n \"acc_norm\": 0.7419354838709677,\n\
\ \"acc_norm_stderr\": 0.02489246917246283\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.03514528562175007,\n\
\ \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.03514528562175007\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n\
\ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7626262626262627,\n \"acc_stderr\": 0.0303137105381989,\n \"acc_norm\"\
: 0.7626262626262627,\n \"acc_norm_stderr\": 0.0303137105381989\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\
\ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5948717948717949,\n \"acc_stderr\": 0.024890471769938145,\n\
\ \"acc_norm\": 0.5948717948717949,\n \"acc_norm_stderr\": 0.024890471769938145\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2962962962962963,\n \"acc_stderr\": 0.027840811495871923,\n \
\ \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.027840811495871923\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135356,\n\
\ \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135356\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7963302752293578,\n \"acc_stderr\": 0.017266742087630783,\n \"\
acc_norm\": 0.7963302752293578,\n \"acc_norm_stderr\": 0.017266742087630783\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4861111111111111,\n \"acc_stderr\": 0.03408655867977749,\n \"\
acc_norm\": 0.4861111111111111,\n \"acc_norm_stderr\": 0.03408655867977749\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.75,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.02675082699467617,\n\
\ \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.02675082699467617\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6564885496183206,\n \"acc_stderr\": 0.041649760719448786,\n\
\ \"acc_norm\": 0.6564885496183206,\n \"acc_norm_stderr\": 0.041649760719448786\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7037037037037037,\n\
\ \"acc_stderr\": 0.04414343666854934,\n \"acc_norm\": 0.7037037037037037,\n\
\ \"acc_norm_stderr\": 0.04414343666854934\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6319018404907976,\n \"acc_stderr\": 0.03789213935838396,\n\
\ \"acc_norm\": 0.6319018404907976,\n \"acc_norm_stderr\": 0.03789213935838396\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8205128205128205,\n\
\ \"acc_stderr\": 0.02514093595033544,\n \"acc_norm\": 0.8205128205128205,\n\
\ \"acc_norm_stderr\": 0.02514093595033544\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7841634738186463,\n\
\ \"acc_stderr\": 0.014711684386139953,\n \"acc_norm\": 0.7841634738186463,\n\
\ \"acc_norm_stderr\": 0.014711684386139953\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.02590663263101613,\n\
\ \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.02590663263101613\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39106145251396646,\n\
\ \"acc_stderr\": 0.016320763763808383,\n \"acc_norm\": 0.39106145251396646,\n\
\ \"acc_norm_stderr\": 0.016320763763808383\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.026643278474508755,\n\
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.026643278474508755\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n\
\ \"acc_stderr\": 0.026311858071854155,\n \"acc_norm\": 0.6881028938906752,\n\
\ \"acc_norm_stderr\": 0.026311858071854155\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.02610567386140983,\n\
\ \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.02610567386140983\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.42907801418439717,\n \"acc_stderr\": 0.029525914302558555,\n \
\ \"acc_norm\": 0.42907801418439717,\n \"acc_norm_stderr\": 0.029525914302558555\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4165580182529335,\n\
\ \"acc_stderr\": 0.012591153245057392,\n \"acc_norm\": 0.4165580182529335,\n\
\ \"acc_norm_stderr\": 0.012591153245057392\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5625,\n \"acc_stderr\": 0.030134614954403924,\n \
\ \"acc_norm\": 0.5625,\n \"acc_norm_stderr\": 0.030134614954403924\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6045751633986928,\n \"acc_stderr\": 0.01978046595477751,\n \
\ \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.01978046595477751\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n\
\ \"acc_stderr\": 0.046534298079135075,\n \"acc_norm\": 0.6181818181818182,\n\
\ \"acc_norm_stderr\": 0.046534298079135075\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.029162738410249772,\n\
\ \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.029162738410249772\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7860696517412935,\n\
\ \"acc_stderr\": 0.02899690969332891,\n \"acc_norm\": 0.7860696517412935,\n\
\ \"acc_norm_stderr\": 0.02899690969332891\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n\
\ \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n\
\ \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.03218093795602357,\n\
\ \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.03218093795602357\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4541003671970624,\n\
\ \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.5979099902582387,\n\
\ \"mc2_stderr\": 0.01509977856693472\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7947908445146015,\n \"acc_stderr\": 0.01135031570746206\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5284306292645944,\n \
\ \"acc_stderr\": 0.013750202076584419\n }\n}\n```"
repo_url: https://huggingface.co/TheBloke/neural-chat-7B-v3-2-GPTQ
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|arc:challenge|25_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|gsm8k|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hellaswag|10_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-11T00-12-21.907526.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-11T00-12-21.907526.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- '**/details_harness|winogrande|5_2023-12-11T00-12-21.907526.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-11T00-12-21.907526.parquet'
- config_name: results
data_files:
- split: 2023_12_11T00_12_21.907526
path:
- results_2023-12-11T00-12-21.907526.parquet
- split: latest
path:
- results_2023-12-11T00-12-21.907526.parquet
---
# Dataset Card for Evaluation run of TheBloke/neural-chat-7B-v3-2-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/neural-chat-7B-v3-2-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/neural-chat-7B-v3-2-GPTQ](https://huggingface.co/TheBloke/neural-chat-7B-v3-2-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ",
	"harness_winogrande_5",
	split="latest")
```
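The config names listed in the YAML header above follow a simple pattern: the `harness_` prefix, the task identifier with `-` and `:` replaced by underscores, and the few-shot count. The helper below (hypothetical, for illustration only — not part of any official tooling) sketches that mapping:

```python
def task_to_config_name(task: str, num_fewshot: int) -> str:
    """Map a harness task id like 'hendrycksTest-abstract_algebra' to the
    config name used in this repo, e.g. 'harness_hendrycksTest_abstract_algebra_5'.

    Illustrative helper based on the naming pattern visible in this card's YAML.
    """
    # Replace the separators used in harness task ids with underscores.
    sanitized = task.replace("-", "_").replace(":", "_")
    return f"harness_{sanitized}_{num_fewshot}"

print(task_to_config_name("hendrycksTest-abstract_algebra", 5))
# harness_hendrycksTest_abstract_algebra_5
print(task_to_config_name("truthfulqa:mc", 0))
# harness_truthfulqa_mc_0
```

The resulting string can be passed as the second argument of `load_dataset` as shown above.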
## Latest results
These are the [latest results from run 2023-12-11T00:12:21.907526](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ/blob/main/results_2023-12-11T00-12-21.907526.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6058481456466821,
"acc_stderr": 0.03323160720607251,
"acc_norm": 0.6077924426433228,
"acc_norm_stderr": 0.033909992378155715,
"mc1": 0.4541003671970624,
"mc1_stderr": 0.017429593091323522,
"mc2": 0.5979099902582387,
"mc2_stderr": 0.01509977856693472
},
"harness|arc:challenge|25": {
"acc": 0.6296928327645052,
"acc_stderr": 0.01411129875167495,
"acc_norm": 0.659556313993174,
"acc_norm_stderr": 0.013847460518892978
},
"harness|hellaswag|10": {
"acc": 0.6360286795459071,
"acc_stderr": 0.004801572028920794,
"acc_norm": 0.8324039036048596,
"acc_norm_stderr": 0.003727438786513393
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5407407407407407,
"acc_stderr": 0.04304979692464241,
"acc_norm": 0.5407407407407407,
"acc_norm_stderr": 0.04304979692464241
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.03823428969926605,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.03823428969926605
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.660377358490566,
"acc_stderr": 0.02914690474779833,
"acc_norm": 0.660377358490566,
"acc_norm_stderr": 0.02914690474779833
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5722543352601156,
"acc_stderr": 0.03772446857518027,
"acc_norm": 0.5722543352601156,
"acc_norm_stderr": 0.03772446857518027
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.49019607843137253,
"acc_stderr": 0.04974229460422817,
"acc_norm": 0.49019607843137253,
"acc_norm_stderr": 0.04974229460422817
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5148936170212766,
"acc_stderr": 0.03267151848924777,
"acc_norm": 0.5148936170212766,
"acc_norm_stderr": 0.03267151848924777
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.04657047260594964,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.04657047260594964
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3544973544973545,
"acc_stderr": 0.024636830602842,
"acc_norm": 0.3544973544973545,
"acc_norm_stderr": 0.024636830602842
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.02489246917246283,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.02489246917246283
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.03514528562175007,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.03514528562175007
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.0303137105381989,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.0303137105381989
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5948717948717949,
"acc_stderr": 0.024890471769938145,
"acc_norm": 0.5948717948717949,
"acc_norm_stderr": 0.024890471769938145
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2962962962962963,
"acc_stderr": 0.027840811495871923,
"acc_norm": 0.2962962962962963,
"acc_norm_stderr": 0.027840811495871923
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135356,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135356
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7963302752293578,
"acc_stderr": 0.017266742087630783,
"acc_norm": 0.7963302752293578,
"acc_norm_stderr": 0.017266742087630783
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4861111111111111,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.4861111111111111,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.75,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.02675082699467617,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.02675082699467617
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6564885496183206,
"acc_stderr": 0.041649760719448786,
"acc_norm": 0.6564885496183206,
"acc_norm_stderr": 0.041649760719448786
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.04414343666854934,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.04414343666854934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6319018404907976,
"acc_stderr": 0.03789213935838396,
"acc_norm": 0.6319018404907976,
"acc_norm_stderr": 0.03789213935838396
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8205128205128205,
"acc_stderr": 0.02514093595033544,
"acc_norm": 0.8205128205128205,
"acc_norm_stderr": 0.02514093595033544
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7841634738186463,
"acc_stderr": 0.014711684386139953,
"acc_norm": 0.7841634738186463,
"acc_norm_stderr": 0.014711684386139953
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.02590663263101613,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.02590663263101613
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39106145251396646,
"acc_stderr": 0.016320763763808383,
"acc_norm": 0.39106145251396646,
"acc_norm_stderr": 0.016320763763808383
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.026643278474508755,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.026643278474508755
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6881028938906752,
"acc_stderr": 0.026311858071854155,
"acc_norm": 0.6881028938906752,
"acc_norm_stderr": 0.026311858071854155
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.02610567386140983,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.02610567386140983
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.42907801418439717,
"acc_stderr": 0.029525914302558555,
"acc_norm": 0.42907801418439717,
"acc_norm_stderr": 0.029525914302558555
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4165580182529335,
"acc_stderr": 0.012591153245057392,
"acc_norm": 0.4165580182529335,
"acc_norm_stderr": 0.012591153245057392
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5625,
"acc_stderr": 0.030134614954403924,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.030134614954403924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.01978046595477751,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.01978046595477751
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.046534298079135075,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.029162738410249772,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.029162738410249772
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7860696517412935,
"acc_stderr": 0.02899690969332891,
"acc_norm": 0.7860696517412935,
"acc_norm_stderr": 0.02899690969332891
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.03218093795602357,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.03218093795602357
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4541003671970624,
"mc1_stderr": 0.017429593091323522,
"mc2": 0.5979099902582387,
"mc2_stderr": 0.01509977856693472
},
"harness|winogrande|5": {
"acc": 0.7947908445146015,
"acc_stderr": 0.01135031570746206
},
"harness|gsm8k|5": {
"acc": 0.5284306292645944,
"acc_stderr": 0.013750202076584419
}
}
```
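The per-task results above can be aggregated into a single score. A minimal sketch (not part of the official leaderboard pipeline; the `results` dict below is a small illustrative subset of the full JSON above):

```python
# Macro-average the MMLU ("hendrycksTest") accuracies, ignoring other tasks.
results = {
    "harness|hendrycksTest-college_biology|5": {"acc": 0.7152777777777778},
    "harness|hendrycksTest-college_chemistry|5": {"acc": 0.4},
    "harness|gsm8k|5": {"acc": 0.5284306292645944},
}

mmlu_accs = [
    v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest")
]
macro_avg = sum(mmlu_accs) / len(mmlu_accs)
print(f"MMLU macro-average over {len(mmlu_accs)} tasks: {macro_avg:.4f}")
```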
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
joey234/mmlu-college_physics-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 27795
num_examples: 102
download_size: 16560
dataset_size: 27795
---
# Dataset Card for "mmlu-college_physics-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
UMCU/MedQA_Dutch_translated_with_MariaNMT | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8270752
num_examples: 9856
download_size: 4467728
dataset_size: 8270752
---
# Dataset Card for "MedQA_Dutch_translated_with_MariaNMT"
Translation of the **English** version of [MedQA](https://huggingface.co/datasets/bigbio/med_qa)
to **Dutch** using a [Marian NMT model](https://marian-nmt.github.io/), trained by [Helsinki NLP](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
Note, for reference: Marian NMT is based on [BART](https://huggingface.co/docs/transformers/model_doc/bart), described [here](https://arxiv.org/abs/1910.13461).
Note:
We do **not** have the full sample count of the original MedQA, because some documents exceeded the model's maximum window size.
In an updated version we will use a stride to translate complete documents.
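A hedged sketch (not part of the card) of how a record with the `instruction` / `input` / `output` schema above might be assembled into a single training prompt; the record below is invented for illustration, real rows come from `load_dataset("UMCU/MedQA_Dutch_translated_with_MariaNMT", split="train")`:

```python
# Invented example record following the card's schema.
record = {
    "instruction": "Beantwoord de meerkeuzevraag.",
    "input": "Welke zenuw loopt door de carpale tunnel?",
    "output": "De nervus medianus.",
}

def to_prompt(rec: dict) -> str:
    """Concatenate instruction and input; `output` stays the target."""
    return f"{rec['instruction']}\n\n{rec['input']}"

print(to_prompt(record))
```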
# Attribution
If you use this dataset, please use the following to credit the creators of MedQA:
```citation
@article{jin2021disease,
title={What disease does this patient have? a large-scale open domain question answering dataset from medical exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={Applied Sciences},
volume={11},
number={14},
pages={6421},
year={2021},
publisher={MDPI}
}
```
Please also credit the creators of the OPUS-MT models:
```
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
and
```
@misc {van_es_2023,
author = { {Bram van Es} },
title = { MedQA_Dutch_translated_with_MariaNMT (Revision 7e88c9e) },
year = 2023,
url = { https://huggingface.co/datasets/UMCU/MedQA_Dutch_translated_with_MariaNMT },
doi = { 10.57967/hf/1355 },
publisher = { Hugging Face }
}
```
# License
For both the Marian NMT model and the original [Helsinki NLP](https://twitter.com/HelsinkiNLP) [Opus MT model](https://huggingface.co/Helsinki-NLP)
we did **not** find a license. We also did not find a license for the MedQA corpus. For these reasons we use a permissive [CC BY](https://wellcome.org/grant-funding/guidance/open-access-guidance/creative-commons-attribution-licence-cc)
license. If this was in error, please let us know and we will add the appropriate licensing promptly.
|
supersaiyan2019/main6 | ---
license: openrail
---
|
ibranze/araproje_hellaswag_en_conf_llama_nearestscore_true_y | ---
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 149738.0
num_examples: 250
download_size: 81116
dataset_size: 149738.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_hellaswag_en_conf_llama_nearestscore_true_y"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thorirhrafn/rmh_subset_medium2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 775259169
num_examples: 282160
- name: test
num_bytes: 4398683
num_examples: 2000
- name: valid
num_bytes: 4543850
num_examples: 2000
download_size: 480237633
dataset_size: 784201702
---
# Dataset Card for "rmh_subset_medium2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906071 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/roberta-base-squad2-distilled
metrics: ['bertscore']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
arize-ai/fashion_mnist_label_drift | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|imdb
task_categories:
- image-classification
task_ids:
- multi-class-classification
pretty_name: sentiment-classification-reviews-with-drift
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training and validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
trolllemon/dogs | ---
language:
- en
license: mit
task_categories:
- image-classification
pretty_name: Dogs
dataset_info:
features:
- name: image
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 4610
num_examples: 60
- name: test
num_bytes: 1064
num_examples: 14
download_size: 3572
dataset_size: 5674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
joey234/mmlu-computer_security-original-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 10384
num_examples: 17
download_size: 9420
dataset_size: 10384
---
# Dataset Card for "mmlu-computer_security-original-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
suolyer/pile_europarl | ---
license: apache-2.0
---
|
janani4office2/connl_rlhf | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: entities
dtype: string
splits:
- name: train
num_bytes: 1818001
num_examples: 14041
- name: validation
num_bytes: 450958
num_examples: 3250
- name: test
num_bytes: 419551
num_examples: 3453
download_size: 1557421
dataset_size: 2688510
---
|
bridgeconn/snow-mountain | ---
pretty_name: Snow Mountain
language:
- hi
- bgc
- kfs
- dgo
- bhd
- gbk
- xnr
- kfx
- mjl
- kfo
- bfz
annotations_creators:
- 'null': null
language_creators:
- 'null': null
multilinguality:
- multilingual
source_datasets:
- Snow Mountain
task_categories:
- automatic-speech-recognition
- text-to-speech
task_ids: []
configs:
- hi
- bgc
dataset_info:
- config_name: hi
features:
- name: Unnamed
dtype: int64
- name: sentence
dtype: string
- name: path
dtype: string
splits:
- name: train_500
num_examples: 400
- name: val_500
num_examples: 100
- name: train_1000
num_examples: 800
- name: val_1000
num_examples: 200
- name: test_common
num_examples: 500
dataset_size: 71.41 hrs
- config_name: bgc
features:
- name: Unnamed
dtype: int64
- name: sentence
dtype: string
- name: path
dtype: string
splits:
- name: train_500
num_examples: 400
- name: val_500
num_examples: 100
- name: train_1000
num_examples: 800
- name: val_1000
num_examples: 200
- name: test_common
num_examples: 500
dataset_size: 27.41 hrs
license: cc-by-sa-4.0
---
# Snow Mountain
## Dataset Description
- **Paper: https://arxiv.org/abs/2206.01205**
- **Point of Contact: Joel Mathew**
### Dataset Summary
The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of The Bible (both Old Testament (OT) and New Testament (NT)) in 11 Indian languages. The recordings were done in a studio setting by native speakers. Each language has a single speaker in the dataset. Most of these languages are geographically concentrated in the Northern part of India around the state of Himachal Pradesh. Being related to Hindi, they all use the Devanagari script for transcription.
We have used this dataset for experiments in ASR tasks, but it could also be used for other applications in the speech domain, such as speaker recognition, language identification, or as an unlabelled corpus for pre-training.
### Supported Tasks and Leaderboards
Automatic speech recognition, speech-to-text, speaker recognition, language identification
### Languages
Hindi, Haryanvi, Bilaspuri, Dogri, Bhadrawahi, Gaddi, Kangri, Kulvi, Mandeali, Kulvi Outer Seraji, Pahari Mahasui, Malayalam, Kannada, Tamil, Telugu
## Dataset Structure
```
data
|- cleaned
|- lang1
|- book1_verse_audios.tar.gz
|- book2_verse_audios.tar.gz
...
...
|- all_verses.tar.gz
|- short_verses.tar.gz
|- lang2
...
...
|- experiments
|- lang1
|- train_500.csv
|- val_500.csv
|- test_common.csv
...
...
|- lang2
...
...
|- raw
|- lang1
|- chapter1_audio.mp3
|- chapter2_audio.mp3
...
...
|- text
|- book1.csv
|- book1.usfm
...
...
|- lang2
...
...
```
### Data Instances
A data point comprises the path to the audio file, called `path`, and its transcription, called `sentence`.
```
{'sentence': 'क्यूँके तू अपणी बात्तां कै कारण बेकसूर अर अपणी बात्तां ए कै कारण कसूरवार ठहराया जावैगा',
'audio': {'path': 'data/cleaned/haryanvi/MAT/MAT_012_037.wav',
'array': array([0., 0., 0., ..., 0., 0., 0.]),
'sampling_rate': 16000},
'path': 'data/cleaned/haryanvi/MAT/MAT_012_037.wav'}
```
### Data Fields
`path`: The path to the audio file
`audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
`sentence`: The transcription of the audio file.
### Data Splits
We create splits of the cleaned data for training and analysing the performance of ASR models. The splits are available in the `experiments` directory; the file names indicate the experiment and the split category. Additionally, two CSV files are included in the data splits: `all_verses` and `short_verses`. Various data splits were generated from these two main CSVs. `short_verses.csv` contains audios of length < 10 s and their corresponding transcriptions; `all_verses.csv` contains complete cleaned verses, including both long and short audios. Due to their large size (>10 MB), we keep these CSVs compressed in the `tar.gz` format in the `cleaned` folder.
## Dataset Loading
The `raw` folder has chapter-wise audios in .mp3 format. For experiments, we may need audios in .wav format, so verse-wise audio files are kept in the `cleaned` folder in .wav format. This results in a much larger size, which contributes to a longer loading time into memory. Here is the approximate time needed for loading the dataset:
- Hindi (OT books): ~20 minutes
- Hindi minority languages (NT books): ~9 minutes
- Dravidian languages (OT+NT books): ~30 minutes
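Once loaded, the decoded `audio` field described above can be inspected directly. An illustrative sketch (the `row` dict below mimics a single decoded example; real rows come from `load_dataset("bridgeconn/snow-mountain", "hi", split="train_500")`):

```python
# Compute a clip's duration from the decoded audio array and sampling rate.
row = {
    "audio": {"array": [0.0] * 32000, "sampling_rate": 16000},
    "sentence": "...",
}
duration_s = len(row["audio"]["array"]) / row["audio"]["sampling_rate"]
print(f"{duration_s:.1f} s")  # 2.0 s
```

The same computation can be mapped over a full split, e.g. to filter out clips longer than 10 s as done for `short_verses.csv`.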
## Details
Please refer to the paper for more details on the creation and the rationale for the splits we created in the dataset.
### Licensing Information
The data is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)
### Citation Information
Please cite this work if you make use of it:
```
@inproceedings{Raju2022SnowMD,
title={Snow Mountain: Dataset of Audio Recordings of The Bible in Low Resource Languages},
author={Kavitha Raju and V. Anjaly and R. Allen Lish and Joel Mathew},
year={2022}
}
``` |
dylanalloy/fin-gpt-selftalk_500k | ---
license: cc-by-nc-4.0
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/e8fbcee9 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1336
dataset_size: 188
---
# Dataset Card for "e8fbcee9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HeshamHaroon/arabic-quotes | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- crowdsourced
language:
- ar
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
---
# Arabic Quotes Dataset (arabic_Q)
The "Arabic Quotes" dataset contains a collection of Arabic quotes along with their corresponding authors and tags. The dataset is scraped from the website "arabic-quotes.com" and provides a diverse range of quotes from various authors.
## Dataset Details
- **Version**: 1.0.0
- **Total Quotes**: 3778
- **Languages**: Arabic
- **Source**: arabic-quotes.com
## Dataset Structure
The dataset is provided in the JSONL (JSON Lines) format, where each line represents a separate JSON object. The JSON objects have the following fields:
- `quote`: The Arabic quote text.
- `author`: The author of the quote.
- `tags`: A list of tags associated with the quote, providing additional context or themes.
## Dataset Examples
Here are a few examples of the quotes in the dataset:
```json
{
"quote": "اذا لم يكن لديك هدف ، فاجعل هدفك الاول ايجاد واحد .",
"author": "وليام شكسبير",
"tags": ["تنمية الذات", "تحفيز"]
}
{
"quote": "قيمة الحياة ليست في مدى طولها ، بل في مدى قيمتها",
"author": "وليام شكسبير",
"tags": ["الحياة", "القيمة"]
}
{
"quote": "التحدث عن الامور العميقة ليس سهلاً كما يبدو",
"author": "جبران خليل جبران",
"tags": ["التواصل", "العمق"]
}
```
## Dataset Usage
The "Arabic Quotes" dataset can be used for various purposes, including:
- Natural Language Processing (NLP) tasks in Arabic text analysis.
- Text generation and language modeling.
- Quote recommendation systems.
- Inspirational content generation.
- text-classification
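Since the dataset is distributed as JSONL, each line can be parsed independently. A minimal sketch using only the standard library (the line below is one of the example records from this card):

```python
import json

# Parse one JSONL record and index it by author.
line = '{"quote": "اذا لم يكن لديك هدف ، فاجعل هدفك الاول ايجاد واحد .", "author": "وليام شكسبير", "tags": ["تنمية الذات", "تحفيز"]}'
record = json.loads(line)

by_author: dict[str, list[str]] = {}
by_author.setdefault(record["author"], []).append(record["quote"])
print(len(by_author[record["author"]]))  # 1
```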
## Acknowledgements
We would like to thank the website "arabic-quotes.com" for providing the valuable collection of Arabic quotes used in this dataset.
## License
The dataset is provided under the [bigscience-bloom-rail-1.0 License](https://huggingface.co/spaces/bigscience/license), which permits non-commercial use and sharing under certain conditions.
|
FastFit/massive_de_60 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 704100
num_examples: 11514
- name: validation
num_bytes: 123376
num_examples: 2033
- name: test
num_bytes: 181452
num_examples: 2974
download_size: 428903
dataset_size: 1008928
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
autoevaluate/autoeval-eval-piaf-plain_text-42b979-39890145062 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- piaf
eval_info:
task: extractive_question_answering
model: etalab-ia/camembert-base-squadFR-fquad-piaf
metrics: ['accuracy']
dataset_name: piaf
dataset_config: plain_text
dataset_split: train
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: etalab-ia/camembert-base-squadFR-fquad-piaf
* Dataset: piaf
* Config: plain_text
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@malou.berthe@gmail.com](https://huggingface.co/malou.berthe@gmail.com) for evaluating this model. |
Tensoic/saraswati-stem | ---
license: openrail
---
|
CyberHarem/kuwayama_chiyuki_theidolmstershinycolors | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kuwayama_chiyuki/桑山千雪 (THE iDOLM@STER: SHINY COLORS)
This is the dataset of kuwayama_chiyuki/桑山千雪 (THE iDOLM@STER: SHINY COLORS), containing 500 images and their tags.
The core tags of this character are `brown_hair, long_hair, breasts, bangs, brown_eyes, ahoge, large_breasts, braid, single_braid`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 853.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuwayama_chiyuki_theidolmstershinycolors/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 422.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuwayama_chiyuki_theidolmstershinycolors/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1288 | 962.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuwayama_chiyuki_theidolmstershinycolors/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 722.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuwayama_chiyuki_theidolmstershinycolors/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1288 | 1.45 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kuwayama_chiyuki_theidolmstershinycolors/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kuwayama_chiyuki_theidolmstershinycolors',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | 1girl, cheerleader, holding_pom_poms, ponytail, solo, blush, crop_top, looking_at_viewer, midriff, miniskirt, navel, pleated_skirt, open_mouth, cleavage, bike_shorts_under_skirt, collarbone, short_sleeves, white_skirt, simple_background, sweat, white_footwear, yellow_belt, :d, black_choker, blue_shirt, boots, confetti, ribbon, white_background |
| 1 | 5 |  |  |  |  |  | 1girl, blush, bracelet, cleavage, collarbone, looking_at_viewer, navel, smile, solo, thighs, bare_shoulders, hair_ornament, sitting, crown_braid, earrings, ponytail, white_bikini, cup, drinking_straw, halterneck, holding, open_mouth, outdoors, white_background |
| 2 | 10 |  |  |  |  |  | 1girl, blush, cleavage, floral_print, looking_at_viewer, solo, necklace, pink_one-piece_swimsuit, smile, earrings, armpits, arms_up, casual_one-piece_swimsuit, collarbone, thighs, cowboy_shot, hairclip, open_mouth, wet |
| 3 | 6 |  |  |  |  |  | 1girl, blush, looking_at_viewer, smile, solo, white_background, simple_background, skirt, sleeveless_shirt, bare_shoulders, black_shirt |
| 4 | 23 |  |  |  |  |  | 1girl, short_sleeves, white_shirt, blush, hair_over_shoulder, solo, hair_bow, smile, black_bow, braided_ponytail, looking_at_viewer, long_braid, white_background, open_mouth, simple_background, collared_shirt, red_skirt |
| 5 | 7 |  |  |  |  |  | 1girl, bare_shoulders, frills, looking_at_viewer, solo, blush, braided_ponytail, white_background, simple_background, white_dress, wrist_cuffs, hairband, long_braid, medium_breasts, open_mouth, :d, hair_bow, hair_ribbon, halterneck, holding, microphone, short_sleeves, thighhighs |
| 6 | 19 |  |  |  |  |  | 1boy, 1girl, blush, hetero, solo_focus, nipples, paizuri, smile, sweat, collarbone, nude, breasts_squeezed_together, looking_at_viewer, open_mouth, penis, huge_breasts, braided_ponytail, pov, censored, hair_over_shoulder, long_braid, breast_grab |
| 7 | 12 |  |  |  |  |  | 1boy, 1girl, blush, hetero, sex, collarbone, nipples, vaginal, completely_nude, navel, solo_focus, cowgirl_position, girl_on_top, spread_legs, sweat, looking_at_viewer, open_mouth, penis, pussy, female_pubic_hair, thighs, mosaic_censoring, smile |
| 8 | 7 |  |  |  |  |  | 1girl, hair_flower, solo, wedding_dress, white_dress, bare_shoulders, blush, bridal_veil, detached_sleeves, looking_at_viewer, white_gloves, bride, earrings, holding_bouquet, see-through_sleeves, sleeveless_dress, petals, smile, blurry |
| 9 | 11 |  |  |  |  |  | 1girl, bare_shoulders, elbow_gloves, looking_at_viewer, sleeveless_shirt, black_gloves, solo, blush, cleavage, peaked_cap, white_shirt, black_necktie, hair_over_shoulder, long_braid, smile, collarbone, necktie_between_breasts, collared_shirt, earrings, holding, black_headwear, riding_crop, skirt, shorts, sidelocks, white_background |
| 10 | 8 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, cleavage, navel, underwear_only, bow, collarbone, sweat, thighs, armpits, black_bra, black_panties, braided_ponytail, hair_over_shoulder, indoors, lingerie, smile, arms_up, lace-trimmed_bra, on_back, parted_lips, stomach |
| 11 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, solo, obi, wide_sleeves, blush, hair_ribbon, leaf, autumn_leaves, holding, light_smile, outdoors, shawl, single_hair_bun, striped_kimono, upper_body |
| 12 | 6 |  |  |  |  |  | 1girl, long_sleeves, solo, blush, looking_at_viewer, maid_apron, maid_headdress, twin_braids, bow, enmaided, glasses, hair_over_shoulder, smile, white_apron, black_dress, closed_mouth, frills, indoors, round_eyewear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cheerleader | holding_pom_poms | ponytail | solo | blush | crop_top | looking_at_viewer | midriff | miniskirt | navel | pleated_skirt | open_mouth | cleavage | bike_shorts_under_skirt | collarbone | short_sleeves | white_skirt | simple_background | sweat | white_footwear | yellow_belt | :d | black_choker | blue_shirt | boots | confetti | ribbon | white_background | bracelet | smile | thighs | bare_shoulders | hair_ornament | sitting | crown_braid | earrings | white_bikini | cup | drinking_straw | halterneck | holding | outdoors | floral_print | necklace | pink_one-piece_swimsuit | armpits | arms_up | casual_one-piece_swimsuit | cowboy_shot | hairclip | wet | skirt | sleeveless_shirt | black_shirt | white_shirt | hair_over_shoulder | hair_bow | black_bow | braided_ponytail | long_braid | collared_shirt | red_skirt | frills | white_dress | wrist_cuffs | hairband | medium_breasts | hair_ribbon | microphone | thighhighs | 1boy | hetero | solo_focus | nipples | paizuri | nude | breasts_squeezed_together | penis | huge_breasts | pov | censored | breast_grab | sex | vaginal | completely_nude | cowgirl_position | girl_on_top | spread_legs | pussy | female_pubic_hair | mosaic_censoring | hair_flower | wedding_dress | bridal_veil | detached_sleeves | white_gloves | bride | holding_bouquet | see-through_sleeves | sleeveless_dress | petals | blurry | elbow_gloves | black_gloves | peaked_cap | black_necktie | necktie_between_breasts | black_headwear | riding_crop | shorts | sidelocks | underwear_only | bow | black_bra | black_panties | indoors | lingerie | lace-trimmed_bra | on_back | parted_lips | stomach | obi | wide_sleeves | leaf | autumn_leaves | light_smile | shawl | single_hair_bun | striped_kimono | upper_body | long_sleeves | maid_apron | maid_headdress | twin_braids | enmaided | glasses | white_apron | black_dress | closed_mouth | round_eyewear |
|----:|----------:|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | | X | X | X | | X | | | X | | X | X | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | | | | X | X | | X | | | | | X | X | | X | | | | | | | | | | | | | | | X | X | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | | X | X | | X | | | | | | | | | | | X | | | | | | | | | | X | | X | | X | | | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 23 |  |  |  |  |  | X | | | | X | X | | X | | | | | X | | | | X | | X | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | | | X | X | | X | | | | | X | | | | X | | X | | | | X | | | | | | X | | | | X | | | | | | | | X | X | | | | | | | | | | | | | | | | X | | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 19 |  |  |  |  |  | X | | | | | X | | X | | | | | X | | | X | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 12 |  |  |  |  |  | X | | | | | X | | X | | | X | | X | | | X | | | | X | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 7 |  |  |  |  |  | X | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 11 |  |  |  |  |  | X | | | | X | X | | X | | | | | | X | | X | | | | | | | | | | | | | X | | X | | X | | | | X | | | | | X | | | | | | | | | | | X | X | | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 8 |  |  |  |  |  | X | | | | X | X | | X | | | X | | | X | | X | | | | X | | | | | | | | | | | X | X | | | | | | | | | | | | | | | X | X | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 11 | 6 |  |  |  |  |  | X | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 12 | 6 |  |  |  |  |  | X | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
jinaai/miracl | ---
license: apache-2.0
---
## MIRACL Dataset
This dataset is a version of the original [MIRACL dataset](https://huggingface.co/datasets/miracl/miracl),
reformatted into the format expected for MTEB reranking tasks. |
Heba30018/test | ---
dataset_info:
features:
- name: image
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 271721.0
num_examples: 6469
download_size: 89923
dataset_size: 271721.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_allknowingroger__Limmy-phi2-slerp | ---
pretty_name: Evaluation run of allknowingroger/Limmy-phi2-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [allknowingroger/Limmy-phi2-slerp](https://huggingface.co/allknowingroger/Limmy-phi2-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_allknowingroger__Limmy-phi2-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-11T04:46:02.169522](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__Limmy-phi2-slerp/blob/main/results_2024-04-11T04-46-02.169522.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5821651772411324,\n\
\ \"acc_stderr\": 0.03372798315458136,\n \"acc_norm\": 0.5823208166209095,\n\
\ \"acc_norm_stderr\": 0.03442419149083994,\n \"mc1\": 0.35495716034271724,\n\
\ \"mc1_stderr\": 0.016750862381375898,\n \"mc2\": 0.5059941161139769,\n\
\ \"mc2_stderr\": 0.015447236581056423\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.591296928327645,\n \"acc_stderr\": 0.014365750345427,\n\
\ \"acc_norm\": 0.621160409556314,\n \"acc_norm_stderr\": 0.014175915490000324\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.579964150567616,\n\
\ \"acc_stderr\": 0.004925556104679422,\n \"acc_norm\": 0.7635929097789285,\n\
\ \"acc_norm_stderr\": 0.0042400668987025185\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.04605661864718381,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.04605661864718381\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4740740740740741,\n\
\ \"acc_stderr\": 0.04313531696750575,\n \"acc_norm\": 0.4740740740740741,\n\
\ \"acc_norm_stderr\": 0.04313531696750575\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.039777499346220734,\n\
\ \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.039777499346220734\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.03015113445777629,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.03015113445777629\n },\n\
\ \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6388888888888888,\n\
\ \"acc_stderr\": 0.040166600304512336,\n \"acc_norm\": 0.6388888888888888,\n\
\ \"acc_norm_stderr\": 0.040166600304512336\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.47,\n\
\ \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5953757225433526,\n\
\ \"acc_stderr\": 0.03742461193887248,\n \"acc_norm\": 0.5953757225433526,\n\
\ \"acc_norm_stderr\": 0.03742461193887248\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663434,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663434\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.04461960433384739,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.04461960433384739\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4808510638297872,\n \"acc_stderr\": 0.032662042990646775,\n\
\ \"acc_norm\": 0.4808510638297872,\n \"acc_norm_stderr\": 0.032662042990646775\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n\
\ \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n\
\ \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.04166567577101579,\n\
\ \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.04166567577101579\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.43915343915343913,\n \"acc_stderr\": 0.025559920550531006,\n \"\
acc_norm\": 0.43915343915343913,\n \"acc_norm_stderr\": 0.025559920550531006\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n\
\ \"acc_stderr\": 0.04375888492727061,\n \"acc_norm\": 0.3968253968253968,\n\
\ \"acc_norm_stderr\": 0.04375888492727061\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6741935483870968,\n\
\ \"acc_stderr\": 0.026662010578567104,\n \"acc_norm\": 0.6741935483870968,\n\
\ \"acc_norm_stderr\": 0.026662010578567104\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785741,\n\
\ \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785741\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.703030303030303,\n \"acc_stderr\": 0.03567969772268049,\n\
\ \"acc_norm\": 0.703030303030303,\n \"acc_norm_stderr\": 0.03567969772268049\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7272727272727273,\n \"acc_stderr\": 0.03173071239071724,\n \"\
acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.03173071239071724\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7927461139896373,\n \"acc_stderr\": 0.029252823291803627,\n\
\ \"acc_norm\": 0.7927461139896373,\n \"acc_norm_stderr\": 0.029252823291803627\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5743589743589743,\n \"acc_stderr\": 0.025069094387296535,\n\
\ \"acc_norm\": 0.5743589743589743,\n \"acc_norm_stderr\": 0.025069094387296535\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.31851851851851853,\n \"acc_stderr\": 0.028406533090608466,\n \
\ \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.028406533090608466\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6428571428571429,\n \"acc_stderr\": 0.031124619309328177,\n\
\ \"acc_norm\": 0.6428571428571429,\n \"acc_norm_stderr\": 0.031124619309328177\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.818348623853211,\n \"acc_stderr\": 0.016530617409266868,\n \"\
acc_norm\": 0.818348623853211,\n \"acc_norm_stderr\": 0.016530617409266868\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6617647058823529,\n \"acc_stderr\": 0.03320574612945432,\n \"\
acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.03320574612945432\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7468354430379747,\n \"acc_stderr\": 0.028304657943035282,\n \
\ \"acc_norm\": 0.7468354430379747,\n \"acc_norm_stderr\": 0.028304657943035282\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6367713004484304,\n\
\ \"acc_stderr\": 0.032277904428505,\n \"acc_norm\": 0.6367713004484304,\n\
\ \"acc_norm_stderr\": 0.032277904428505\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6793893129770993,\n \"acc_stderr\": 0.04093329229834278,\n\
\ \"acc_norm\": 0.6793893129770993,\n \"acc_norm_stderr\": 0.04093329229834278\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.743801652892562,\n \"acc_stderr\": 0.03984979653302871,\n \"acc_norm\"\
: 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302871\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n\
\ \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n\
\ \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615768,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615768\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.04354631077260594,\n\
\ \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.04354631077260594\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8247863247863247,\n\
\ \"acc_stderr\": 0.02490443909891824,\n \"acc_norm\": 0.8247863247863247,\n\
\ \"acc_norm_stderr\": 0.02490443909891824\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939098,\n \
\ \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939098\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6998722860791826,\n\
\ \"acc_stderr\": 0.016389249691317432,\n \"acc_norm\": 0.6998722860791826,\n\
\ \"acc_norm_stderr\": 0.016389249691317432\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.025009313790069716,\n\
\ \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.025009313790069716\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.26033519553072626,\n\
\ \"acc_stderr\": 0.014676252009319473,\n \"acc_norm\": 0.26033519553072626,\n\
\ \"acc_norm_stderr\": 0.014676252009319473\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6045751633986928,\n \"acc_stderr\": 0.02799672318063145,\n\
\ \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.02799672318063145\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6495176848874598,\n\
\ \"acc_stderr\": 0.027098652621301754,\n \"acc_norm\": 0.6495176848874598,\n\
\ \"acc_norm_stderr\": 0.027098652621301754\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.026725868809100793,\n\
\ \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.026725868809100793\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4219858156028369,\n \"acc_stderr\": 0.0294621892333706,\n \
\ \"acc_norm\": 0.4219858156028369,\n \"acc_norm_stderr\": 0.0294621892333706\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4172099087353325,\n\
\ \"acc_stderr\": 0.012593959992906417,\n \"acc_norm\": 0.4172099087353325,\n\
\ \"acc_norm_stderr\": 0.012593959992906417\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5147058823529411,\n \"acc_stderr\": 0.03035969707904612,\n\
\ \"acc_norm\": 0.5147058823529411,\n \"acc_norm_stderr\": 0.03035969707904612\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5392156862745098,\n \"acc_stderr\": 0.020165523313907904,\n \
\ \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.020165523313907904\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n\
\ \"acc_stderr\": 0.046075820907199756,\n \"acc_norm\": 0.6363636363636364,\n\
\ \"acc_norm_stderr\": 0.046075820907199756\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.029162738410249765,\n\
\ \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.029162738410249765\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7512437810945274,\n\
\ \"acc_stderr\": 0.030567675938916714,\n \"acc_norm\": 0.7512437810945274,\n\
\ \"acc_norm_stderr\": 0.030567675938916714\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.463855421686747,\n\
\ \"acc_stderr\": 0.03882310850890594,\n \"acc_norm\": 0.463855421686747,\n\
\ \"acc_norm_stderr\": 0.03882310850890594\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7017543859649122,\n \"acc_stderr\": 0.03508771929824563,\n\
\ \"acc_norm\": 0.7017543859649122,\n \"acc_norm_stderr\": 0.03508771929824563\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.35495716034271724,\n\
\ \"mc1_stderr\": 0.016750862381375898,\n \"mc2\": 0.5059941161139769,\n\
\ \"mc2_stderr\": 0.015447236581056423\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7545382794001578,\n \"acc_stderr\": 0.012095272937183644\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6141015921152388,\n \
\ \"acc_stderr\": 0.013409077471319164\n }\n}\n```"
repo_url: https://huggingface.co/allknowingroger/Limmy-phi2-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|arc:challenge|25_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|gsm8k|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hellaswag|10_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T04-46-02.169522.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-11T04-46-02.169522.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- '**/details_harness|winogrande|5_2024-04-11T04-46-02.169522.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-11T04-46-02.169522.parquet'
- config_name: results
data_files:
- split: 2024_04_11T04_46_02.169522
path:
- results_2024-04-11T04-46-02.169522.parquet
- split: latest
path:
- results_2024-04-11T04-46-02.169522.parquet
---
# Dataset Card for Evaluation run of allknowingroger/Limmy-phi2-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [allknowingroger/Limmy-phi2-slerp](https://huggingface.co/allknowingroger/Limmy-phi2-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_allknowingroger__Limmy-phi2-slerp",
"harness_winogrande_5",
	split="latest")
```
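Each per-task configuration name is derived from the harness task name listed in the parquet paths above (for example, `harness|truthfulqa:mc|0` becomes `harness_truthfulqa_mc_0`). Judging from the YAML metadata in this card, the mapping appears to simply replace `|`, `:`, and `-` with underscores; the helper below is a small sketch of that assumed convention (`task_to_config` is a hypothetical name, not part of any library):

```python
def task_to_config(task: str) -> str:
    """Map a harness task name to its dataset configuration name.

    Assumed convention, inferred from this card's YAML metadata:
    pipes, colons, and hyphens all become underscores, e.g.
    "harness|hendrycksTest-anatomy|5" -> "harness_hendrycksTest_anatomy_5".
    """
    return task.replace("|", "_").replace(":", "_").replace("-", "_")
```

This makes it easy to loop over the task names from a results file and load the matching per-task details configuration.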
## Latest results
These are the [latest results from run 2024-04-11T04:46:02.169522](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__Limmy-phi2-slerp/blob/main/results_2024-04-11T04-46-02.169522.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in its own configuration, with a "latest" split pointing to its most recent eval):
```python
{
"all": {
"acc": 0.5821651772411324,
"acc_stderr": 0.03372798315458136,
"acc_norm": 0.5823208166209095,
"acc_norm_stderr": 0.03442419149083994,
"mc1": 0.35495716034271724,
"mc1_stderr": 0.016750862381375898,
"mc2": 0.5059941161139769,
"mc2_stderr": 0.015447236581056423
},
"harness|arc:challenge|25": {
"acc": 0.591296928327645,
"acc_stderr": 0.014365750345427,
"acc_norm": 0.621160409556314,
"acc_norm_stderr": 0.014175915490000324
},
"harness|hellaswag|10": {
"acc": 0.579964150567616,
"acc_stderr": 0.004925556104679422,
"acc_norm": 0.7635929097789285,
"acc_norm_stderr": 0.0042400668987025185
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.04605661864718381,
"acc_norm": 0.3,
"acc_norm_stderr": 0.04605661864718381
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4740740740740741,
"acc_stderr": 0.04313531696750575,
"acc_norm": 0.4740740740740741,
"acc_norm_stderr": 0.04313531696750575
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6052631578947368,
"acc_stderr": 0.039777499346220734,
"acc_norm": 0.6052631578947368,
"acc_norm_stderr": 0.039777499346220734
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6,
"acc_stderr": 0.03015113445777629,
"acc_norm": 0.6,
"acc_norm_stderr": 0.03015113445777629
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.040166600304512336,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.040166600304512336
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5953757225433526,
"acc_stderr": 0.03742461193887248,
"acc_norm": 0.5953757225433526,
"acc_norm_stderr": 0.03742461193887248
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663434,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663434
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4808510638297872,
"acc_stderr": 0.032662042990646775,
"acc_norm": 0.4808510638297872,
"acc_norm_stderr": 0.032662042990646775
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.046151869625837026,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.046151869625837026
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.503448275862069,
"acc_stderr": 0.04166567577101579,
"acc_norm": 0.503448275862069,
"acc_norm_stderr": 0.04166567577101579
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43915343915343913,
"acc_stderr": 0.025559920550531006,
"acc_norm": 0.43915343915343913,
"acc_norm_stderr": 0.025559920550531006
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.04375888492727061,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.04375888492727061
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6741935483870968,
"acc_stderr": 0.026662010578567104,
"acc_norm": 0.6741935483870968,
"acc_norm_stderr": 0.026662010578567104
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.703030303030303,
"acc_stderr": 0.03567969772268049,
"acc_norm": 0.703030303030303,
"acc_norm_stderr": 0.03567969772268049
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.03173071239071724,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.03173071239071724
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7927461139896373,
"acc_stderr": 0.029252823291803627,
"acc_norm": 0.7927461139896373,
"acc_norm_stderr": 0.029252823291803627
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5743589743589743,
"acc_stderr": 0.025069094387296535,
"acc_norm": 0.5743589743589743,
"acc_norm_stderr": 0.025069094387296535
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.028406533090608466,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.028406533090608466
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6428571428571429,
"acc_stderr": 0.031124619309328177,
"acc_norm": 0.6428571428571429,
"acc_norm_stderr": 0.031124619309328177
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242741,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242741
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.818348623853211,
"acc_stderr": 0.016530617409266868,
"acc_norm": 0.818348623853211,
"acc_norm_stderr": 0.016530617409266868
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6617647058823529,
"acc_stderr": 0.03320574612945432,
"acc_norm": 0.6617647058823529,
"acc_norm_stderr": 0.03320574612945432
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7468354430379747,
"acc_stderr": 0.028304657943035282,
"acc_norm": 0.7468354430379747,
"acc_norm_stderr": 0.028304657943035282
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6367713004484304,
"acc_stderr": 0.032277904428505,
"acc_norm": 0.6367713004484304,
"acc_norm_stderr": 0.032277904428505
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6793893129770993,
"acc_stderr": 0.04093329229834278,
"acc_norm": 0.6793893129770993,
"acc_norm_stderr": 0.04093329229834278
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.03984979653302871,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.03984979653302871
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615768,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615768
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7378640776699029,
"acc_stderr": 0.04354631077260594,
"acc_norm": 0.7378640776699029,
"acc_norm_stderr": 0.04354631077260594
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8247863247863247,
"acc_stderr": 0.02490443909891824,
"acc_norm": 0.8247863247863247,
"acc_norm_stderr": 0.02490443909891824
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939098,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939098
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6998722860791826,
"acc_stderr": 0.016389249691317432,
"acc_norm": 0.6998722860791826,
"acc_norm_stderr": 0.016389249691317432
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.684971098265896,
"acc_stderr": 0.025009313790069716,
"acc_norm": 0.684971098265896,
"acc_norm_stderr": 0.025009313790069716
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.26033519553072626,
"acc_stderr": 0.014676252009319473,
"acc_norm": 0.26033519553072626,
"acc_norm_stderr": 0.014676252009319473
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.02799672318063145,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.02799672318063145
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6495176848874598,
"acc_stderr": 0.027098652621301754,
"acc_norm": 0.6495176848874598,
"acc_norm_stderr": 0.027098652621301754
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.026725868809100793,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.026725868809100793
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4219858156028369,
"acc_stderr": 0.0294621892333706,
"acc_norm": 0.4219858156028369,
"acc_norm_stderr": 0.0294621892333706
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4172099087353325,
"acc_stderr": 0.012593959992906417,
"acc_norm": 0.4172099087353325,
"acc_norm_stderr": 0.012593959992906417
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5147058823529411,
"acc_stderr": 0.03035969707904612,
"acc_norm": 0.5147058823529411,
"acc_norm_stderr": 0.03035969707904612
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5392156862745098,
"acc_stderr": 0.020165523313907904,
"acc_norm": 0.5392156862745098,
"acc_norm_stderr": 0.020165523313907904
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.046075820907199756,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.046075820907199756
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.029162738410249765,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.029162738410249765
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7512437810945274,
"acc_stderr": 0.030567675938916714,
"acc_norm": 0.7512437810945274,
"acc_norm_stderr": 0.030567675938916714
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-virology|5": {
"acc": 0.463855421686747,
"acc_stderr": 0.03882310850890594,
"acc_norm": 0.463855421686747,
"acc_norm_stderr": 0.03882310850890594
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7017543859649122,
"acc_stderr": 0.03508771929824563,
"acc_norm": 0.7017543859649122,
"acc_norm_stderr": 0.03508771929824563
},
"harness|truthfulqa:mc|0": {
"mc1": 0.35495716034271724,
"mc1_stderr": 0.016750862381375898,
"mc2": 0.5059941161139769,
"mc2_stderr": 0.015447236581056423
},
"harness|winogrande|5": {
"acc": 0.7545382794001578,
"acc_stderr": 0.012095272937183644
},
"harness|gsm8k|5": {
"acc": 0.6141015921152388,
"acc_stderr": 0.013409077471319164
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
cigdemcnb/turkishReviews-ds-mini | ---
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 1252876.2642514652
num_examples: 3378
- name: validation
num_bytes: 139455.7357485349
num_examples: 376
download_size: 896651
dataset_size: 1392332.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
mesolitica/chatgpt-explain-sentiment | ---
language:
- ms
pretty_name: chatgpt-malay-explain-sentiment
---
# Explain Sentiment
Generated using ChatGPT-3.5 on Malaysian tweets; notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/sentiment/chatgpt3.5-sentiment
- [sentiment.jsonl](sentiment.jsonl), 162902 rows, 86 MB
## Example data
```python
{'sentiment': 'negative',
'explain_en': 'The text is negative because it contains an angry tone and disrespectful language towards someone named Amzar.',
'explain_ms': 'Teks ini negatif kerana mengandungi nada marah dan bahasa yang tidak sopan terhadap seseorang yang bernama Amzar.',
'text': 'BABUN PUNYA AMZAR. TAK RETI HORMAT ORANG KEEEEEE???!!!!'}
``` |
yzhuang/autotree_automl_electricity_gosdt_l256_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float64
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float64
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2773600000
num_examples: 100000
- name: validation
num_bytes: 277360000
num_examples: 10000
download_size: 691921046
dataset_size: 3050960000
---
# Dataset Card for "autotree_automl_electricity_gosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nayohan/commonsense_qa-ko | ---
dataset_info:
features:
- name: stem
dtype: string
- name: label_A
dtype: string
- name: label_B
dtype: string
- name: label_C
dtype: string
- name: label_D
dtype: string
- name: label_E
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 1662060
num_examples: 9741
- name: valid
num_bytes: 206056
num_examples: 1221
download_size: 1169959
dataset_size: 1868116
---
# Dataset Card for "commonsense_qa-ko"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NYTK/HuRC | ---
annotations_creators:
- crowdsourced
language_creators:
- found
- expert-generated
language:
- hu
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: HuRC
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- question-answering
task_ids:
- extractive-qa
- abstractive-qa
---
# Dataset Card for HuRC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuRC dataset](https://github.com/nytud/HuRC)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian Corpus for Reading Comprehension with Commonsense Reasoning (HuRC), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU.
The dataset contains 80 614 instances. Each instance is composed of a lead, a passage and a cloze-style query with a masked entity. The task is to select the named entity that is being masked in the query.
The data was automatically collected from the online news of Népszabadság online (nol.hu).
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a lead, a passage, a query and a MASK.
An example:
```
{
"id": "1",
"lead": ["A Közigazgatási és Igazságügyi Minisztérium szerint a Bárka Színház esetében felmerült a felelőtlen gazdálkodás gyanúja, egyes értesülések szerint pedig ebben \"a színház igazgatójának és gazdasági vezetőjének felelőssége is felmerül\""],
"passage": [
"A teátrumnak Navracsics Tibor közigazgatási és igazságügyi miniszterhez és Kocsis Máté VIII. kerületi polgármesterhez",
"reagálva a tárca azt írta, hogy a felelőtlen gazdálkodás gyanújában \"egyes értesülések szerint a színház igazgatójának és gazdasági vezetőjének felelőssége is felmerül\". A KIM \"éppen ezért nagyon várja az Állami Számvevőszék készülő jelentését, hogy tiszta képet kaphasson a színház működéséről\".",
"A minisztérium hangsúlyozta, hogy az elmúlt évben is mindent elkövetett azért, hogy a Bárka Színház \"valós, rangos művészeti térként\" működjön, és a továbbiakban is ez a szándéka, de jelenleg a társulat működtetését a minisztérium fenntartói támogatás formájában jogszerűen még nem tudja megoldani.",
"A teátrum az átadás-átvétel elhúzódásának okát keresve tette közzé nyílt levelét, amelyben elmaradó fizetésekre, előadásokra és bemutatókra hívta fel a figyelmet, és jelezte, hogy várja a helyzet megoldását.",
"A színház átadás-átvétele jelenleg zajlik, a folyamat végeztével a Bárka a józsefvárosi önkormányzattól állami tulajdonba, a tervek szerint a Közigazgatási és Igazságügyi Minisztérium fenntartásába kerül."
],
"query": "A KIM 2014-es költségvetésében szerepel a Bárka Színház, de amíg nem a minisztérium a [MASK] fenntartója, addig ez a költségvetési keret nem nyitható meg.",
"MASK": "Bárka",
}
```
### Data Fields
- id: unique id of the instances;
- lead: a short summary of the article as it was extracted from the source texts;
- passage: 3-6 paragraphs of texts as the body of the article;
- query: the last paragraph of an article, some kind of summary or conclusion, with a named entity masked (with [MASK]) in it;
- MASK: the masked named entity.
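As a minimal sketch of how the fields fit together (an illustration only, not part of the official evaluation protocol), answering an instance amounts to substituting the predicted entity for the `[MASK]` token in the query:

```python
def resolve_query(instance: dict, predicted_entity: str) -> str:
    """Fill the [MASK] placeholder in a HuRC query with a predicted entity."""
    return instance["query"].replace("[MASK]", predicted_entity)

# Toy instance using the fields described above (values shortened for brevity).
instance = {
    "id": "1",
    "query": "... amíg nem a minisztérium a [MASK] fenntartója ...",
    "MASK": "Bárka",
}

filled = resolve_query(instance, instance["MASK"])
print(filled)  # the query with "Bárka" substituted for [MASK]
```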
### Data Splits
HuRC has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split | Proportion of the split |
|---------------|----------------------------------|-------------------------|
| train         | 64614                            | 80%                     |
| validation    | 8000                             | 10%                     |
| test          | 8000                             | 10%                     |
The test data is distributed without the MASK fields. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
To produce the Hungarian material, we used the daily articles from Népszabadság Online that had both titles and summaries. From each article, we selected 3-6 paragraphs from among those that contain proper nouns in both the main part and the summary. We trained a NER model for recognizing proper nouns using huBERT (Nemeskey 2021); NerKor (Simon and Vadász 2021) and Hugging Face's token-level classification library were used to fine-tune the model. Our model achieved an F-score of 90.18 on the test material. As a final step, we found pairs of proper names that are present in both the main article and the summary. Multiple articles contained more than one such pair, so those articles were used more than once. This resulted in a database of 88655 instances (from 49782 articles).
The quantitative properties of our corpus are as follows:
- Number of articles: 88655
- Number of different articles (types): 49782
- Tokens: 27703631
- Types: 1115.260
- Average text length (tokens): 249.42 (median: 229)
- Average question length (tokens): 63.07 (median: 56)

We refined the corpus by hand: one annotator per 100 units checked and validated the dataset, using our own demo interface. The automatic masking and the previous occurrence of the masked entity were checked. This resulted in a database of 80 614 validated entries.
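The pair-finding step described above can be sketched as a simple set intersection over NER outputs (a simplified illustration; the actual pipeline used the fine-tuned huBERT NER model):

```python
def candidate_masks(body_entities, summary_entities):
    """Entities appearing in both the article body and the summary
    are candidates for masking in the query."""
    return sorted(set(body_entities) & set(summary_entities))

# Toy NER outputs for one article (illustrative values).
body = ["Bárka", "Navracsics Tibor", "Kocsis Máté", "KIM"]
summary = ["Bárka", "KIM", "Budapest"]
print(candidate_masks(body, summary))  # ['Bárka', 'KIM']
```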
## Additional Information
### Licensing Information
HuRC is released under the cc-by-4.0 license.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. |
huangyt/FINETUNE1 | ---
license: openrail
pretty_name: Finetune1
---

# 📔 **DATASET**
| **Dataset** | Class | Number of Questions |
| ------- | ----------------------------------------------------------------- | ------------------------ |
| **FLAN_CoT(zs)** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense | 91910 |
| **Prm800k** | Reasoning 、 MATH | 6713 |
| **ScienceQA** | ScienceQA | 5177 |
| **SciBench** | ScienceQA | 695 |
| **ReClor** | Reasoning | 1624 |
| **TheoremQA** | Commonsense 、 MATH 、 ScienceQA | 800 |
| **OpenBookQA** | Text_Understanding 、 Reasoning 、 Commonsense 、 ScienceQA | 5957 |
| **ARB** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense 、 Text_Understanding | 605 |
| **Openassistant-guanaco** | Commonsense 、 Text_Understanding 、 Reasoning | 802 |
| **SQuAD 2.0** | Text_Understanding | 87599 |
| **CommonsenseQA** | Commonsense | 9741 |
| **Ethics** | Commonsense | 21759 |
# 📌 **Method**
## *Dataset Format Definition*
Use "instruction、input、output" tend to lean towards guided datasets. In this format, each sample includes an instruction, an input, and an expected output. The instruction provides guidance on how to process the input to generate the output. This format of dataset is often used to train models to perform specific tasks, as they explicitly indicate the operations the model should perform.
```
{
"input": "",
"output": "",
"instruction": ""
}
```
- ### [FLAN_V2 COT(ZS)](https://huggingface.co/datasets/conceptofmind/cot_submix_original/tree/main)
We extract only the 'zs_opt' samples from the CoT collection and categorize each task.
- ### [CommonsenseQA](https://huggingface.co/datasets/commonsense_qa)
We extracted the question and choices from the original CommonsenseQA dataset and placed them in the instruction. We also wrote the input prompt: "Choose A, B, C, D, or E as your solution."
- ### [SQuAD](https://huggingface.co/datasets/squad)
We used the questions from the SQUAD dataset as instructions and treated the context as the input.
- ### [Ethics](https://huggingface.co/datasets/hendrycks/ethics)
The ethics dataset, which was originally in labeled format, has been transformed into a true or false format. Additionally, the input now includes the instruction "Give true or false according to ethics."
- ### [OTHER](https://github.com/arielnlee/Platypus/tree/main/data_pipeline)
Prm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.
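The conversions above can be sketched as follows, using the CommonsenseQA transformation as an example (the source field layout here is illustrative, not the exact schema of the original dataset):

```python
def to_instruction_format(question, choices, answer_key):
    """Convert a multiple-choice record into the instruction/input/output
    layout used throughout this dataset (illustrative sketch)."""
    lettered = "\n".join(f"{label}: {text}" for label, text in choices)
    return {
        "instruction": f"{question}\n{lettered}",
        "input": "Choose A, B, C, D, or E as your solution.",
        "output": answer_key,
    }

record = to_instruction_format(
    "Where would you expect to find a jellyfish?",
    [("A", "ocean"), ("B", "desert"), ("C", "store"), ("D", "book"), ("E", "cup")],
    "A",
)
print(record["output"])  # A
```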
## *Sampling Algorithms*
1. First, we take the full CoT, ARB, TheoremQA, and Ethics datasets. ARB and TheoremQA encompass a wide range of fields and have relatively low total counts, and since CoT is high quality, we include it in its entirety. We also collect the entire Ethics dataset because we want the model to comprehensively learn about ethics and security aspects.
2. The remaining datasets were initially categorized into the following four groups for the purpose of **Simple Random Sampling**:
- *Science Questions and Answers* : ScienceQA、SciBench
- *Reasoning & Mathematics* : ReClor、Prm800k
- *Text Comprehension* : OpenBookQA、SQuAD
- *Commonsense* : CommonsenseQA、Openassistant-guanaco
However, we discovered that the total number of examples in the Science Questions and Answers, Reasoning & Mathematics, and Commonsense categories did not exceed 30,000. As a result, only the Text Comprehension category underwent Simple Random Sampling, while the others were taken in their entirety.
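The Simple Random Sampling step for the Text Comprehension group can be sketched with the standard library (the sample size and seed below are illustrative assumptions, not the exact values used):

```python
import random

def simple_random_sample(examples, k, seed=42):
    """Draw k examples uniformly at random without replacement."""
    rng = random.Random(seed)
    return rng.sample(examples, k)

# OpenBookQA (5957) + SQuAD 2.0 (87599) = 93556 candidate examples.
text_comprehension = [{"id": i} for i in range(93556)]
subset = simple_random_sample(text_comprehension, k=30000)
print(len(subset))  # 30000
```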
# 🏁 **Future Work**
- In the future, we intend to use Stratified Sampling, because the imbalance in the number of questions across datasets introduces bias. Alternatively, randomly sampling an equal number of examples from each dataset can yield a smaller estimation error for the same total sample size.
- We can even evaluate based on the fine-tuning from the first stage and employ additional scripting techniques to enhance the quality of the dataset. |
open-llm-leaderboard/details_ArianAskari__SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta | ---
pretty_name: Evaluation run of ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta](https://huggingface.co/ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ArianAskari__SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-11T14:20:18.392173](https://huggingface.co/datasets/open-llm-leaderboard/details_ArianAskari__SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta/blob/main/results_2024-02-11T14-20-18.392173.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5989068889556914,\n\
\ \"acc_stderr\": 0.03306588865476634,\n \"acc_norm\": 0.6081578643232973,\n\
\ \"acc_norm_stderr\": 0.03380748896101241,\n \"mc1\": 0.3818849449204406,\n\
\ \"mc1_stderr\": 0.017008101939163495,\n \"mc2\": 0.5376745022515824,\n\
\ \"mc2_stderr\": 0.01602462184426783\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5631399317406144,\n \"acc_stderr\": 0.014494421584256522,\n\
\ \"acc_norm\": 0.5972696245733788,\n \"acc_norm_stderr\": 0.01433223630679014\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6338378809002191,\n\
\ \"acc_stderr\": 0.004807699539973415,\n \"acc_norm\": 0.817167894841665,\n\
\ \"acc_norm_stderr\": 0.003857388613533091\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5333333333333333,\n\
\ \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.5333333333333333,\n\
\ \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.618421052631579,\n \"acc_stderr\": 0.03953173377749194,\n\
\ \"acc_norm\": 0.618421052631579,\n \"acc_norm_stderr\": 0.03953173377749194\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n\
\ \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6377358490566037,\n \"acc_stderr\": 0.0295822451283843,\n\
\ \"acc_norm\": 0.6377358490566037,\n \"acc_norm_stderr\": 0.0295822451283843\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n\
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n\
\ \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n\
\ \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.04784060704105654,\n\
\ \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.04784060704105654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5361702127659574,\n \"acc_stderr\": 0.03260038511835772,\n\
\ \"acc_norm\": 0.5361702127659574,\n \"acc_norm_stderr\": 0.03260038511835772\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n\
\ \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n\
\ \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n\
\ \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3994708994708995,\n \"acc_stderr\": 0.025225450284067884,\n \"\
acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.025225450284067884\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n\
\ \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n\
\ \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7483870967741936,\n\
\ \"acc_stderr\": 0.024685979286239966,\n \"acc_norm\": 0.7483870967741936,\n\
\ \"acc_norm_stderr\": 0.024685979286239966\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.03514528562175008,\n\
\ \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.03514528562175008\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n\
\ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7373737373737373,\n \"acc_stderr\": 0.03135305009533086,\n \"\
acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.03135305009533086\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\
\ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6128205128205129,\n \"acc_stderr\": 0.024697216930878934,\n\
\ \"acc_norm\": 0.6128205128205129,\n \"acc_norm_stderr\": 0.024697216930878934\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.02889774874113115,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.02889774874113115\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886793,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886793\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"\
acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8055045871559633,\n \"acc_stderr\": 0.016970289090458033,\n \"\
acc_norm\": 0.8055045871559633,\n \"acc_norm_stderr\": 0.016970289090458033\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5324074074074074,\n \"acc_stderr\": 0.03402801581358966,\n \"\
acc_norm\": 0.5324074074074074,\n \"acc_norm_stderr\": 0.03402801581358966\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7647058823529411,\n \"acc_stderr\": 0.029771775228145628,\n \"\
acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.029771775228145628\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7341772151898734,\n \"acc_stderr\": 0.028756799629658342,\n \
\ \"acc_norm\": 0.7341772151898734,\n \"acc_norm_stderr\": 0.028756799629658342\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6233183856502242,\n\
\ \"acc_stderr\": 0.03252113489929188,\n \"acc_norm\": 0.6233183856502242,\n\
\ \"acc_norm_stderr\": 0.03252113489929188\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6641221374045801,\n \"acc_stderr\": 0.041423137719966634,\n\
\ \"acc_norm\": 0.6641221374045801,\n \"acc_norm_stderr\": 0.041423137719966634\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7107438016528925,\n \"acc_stderr\": 0.041391127276354626,\n \"\
acc_norm\": 0.7107438016528925,\n \"acc_norm_stderr\": 0.041391127276354626\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7116564417177914,\n \"acc_stderr\": 0.035590395316173425,\n\
\ \"acc_norm\": 0.7116564417177914,\n \"acc_norm_stderr\": 0.035590395316173425\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.39285714285714285,\n\
\ \"acc_stderr\": 0.04635550135609976,\n \"acc_norm\": 0.39285714285714285,\n\
\ \"acc_norm_stderr\": 0.04635550135609976\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.04354631077260595,\n\
\ \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.04354631077260595\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n\
\ \"acc_stderr\": 0.02023714900899093,\n \"acc_norm\": 0.8931623931623932,\n\
\ \"acc_norm_stderr\": 0.02023714900899093\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7867177522349936,\n\
\ \"acc_stderr\": 0.014648172749593515,\n \"acc_norm\": 0.7867177522349936,\n\
\ \"acc_norm_stderr\": 0.014648172749593515\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.02494679222527231,\n\
\ \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.02494679222527231\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32625698324022345,\n\
\ \"acc_stderr\": 0.015680441518889178,\n \"acc_norm\": 0.32625698324022345,\n\
\ \"acc_norm_stderr\": 0.015680441518889178\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6895424836601307,\n \"acc_stderr\": 0.026493033225145898,\n\
\ \"acc_norm\": 0.6895424836601307,\n \"acc_norm_stderr\": 0.026493033225145898\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n\
\ \"acc_stderr\": 0.026311858071854155,\n \"acc_norm\": 0.6881028938906752,\n\
\ \"acc_norm_stderr\": 0.026311858071854155\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6759259259259259,\n \"acc_stderr\": 0.026041766202717156,\n\
\ \"acc_norm\": 0.6759259259259259,\n \"acc_norm_stderr\": 0.026041766202717156\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.46099290780141844,\n \"acc_stderr\": 0.02973659252642444,\n \
\ \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.02973659252642444\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4178617992177314,\n\
\ \"acc_stderr\": 0.012596744108998562,\n \"acc_norm\": 0.4178617992177314,\n\
\ \"acc_norm_stderr\": 0.012596744108998562\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6654411764705882,\n \"acc_stderr\": 0.028661996202335303,\n\
\ \"acc_norm\": 0.6654411764705882,\n \"acc_norm_stderr\": 0.028661996202335303\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6127450980392157,\n \"acc_stderr\": 0.019706875804085637,\n \
\ \"acc_norm\": 0.6127450980392157,\n \"acc_norm_stderr\": 0.019706875804085637\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n\
\ \"acc_stderr\": 0.046534298079135075,\n \"acc_norm\": 0.6181818181818182,\n\
\ \"acc_norm_stderr\": 0.046534298079135075\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6489795918367347,\n \"acc_stderr\": 0.03055531675557364,\n\
\ \"acc_norm\": 0.6489795918367347,\n \"acc_norm_stderr\": 0.03055531675557364\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7960199004975125,\n\
\ \"acc_stderr\": 0.02849317624532607,\n \"acc_norm\": 0.7960199004975125,\n\
\ \"acc_norm_stderr\": 0.02849317624532607\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n\
\ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n\
\ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3818849449204406,\n\
\ \"mc1_stderr\": 0.017008101939163495,\n \"mc2\": 0.5376745022515824,\n\
\ \"mc2_stderr\": 0.01602462184426783\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7466456195737964,\n \"acc_stderr\": 0.012223754434233623\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12282031842304776,\n \
\ \"acc_stderr\": 0.009041108602874664\n }\n}\n```"
repo_url: https://huggingface.co/ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|arc:challenge|25_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|gsm8k|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hellaswag|10_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-11T14-20-18.392173.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-11T14-20-18.392173.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- '**/details_harness|winogrande|5_2024-02-11T14-20-18.392173.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-11T14-20-18.392173.parquet'
- config_name: results
data_files:
- split: 2024_02_11T14_20_18.392173
path:
- results_2024-02-11T14-20-18.392173.parquet
- split: latest
path:
- results_2024-02-11T14-20-18.392173.parquet
---
# Dataset Card for Evaluation run of ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta](https://huggingface.co/ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ArianAskari__SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta",
"harness_winogrande_5",
split="train")
```
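Because splits are named after zero-padded run timestamps, they sort chronologically under plain string comparison, so you can pick the most recent run without relying on the `latest` alias. A minimal sketch (the split names below are illustrative; a real repo exposes them via `datasets.get_dataset_split_names`):

```python
def latest_split(split_names):
    """Return the most recent timestamped split, ignoring the 'latest' alias.

    Zero-padded names like '2024_02_11T14_20_18.392173' sort
    chronologically under ordinary string comparison, so max() suffices.
    """
    timestamped = [s for s in split_names if s != "latest"]
    return max(timestamped)

# Illustrative split names for a repo with two runs plus the alias.
splits = ["latest", "2024_01_05T03_11_16.908454", "2024_02_11T14_20_18.392173"]
print(latest_split(splits))  # 2024_02_11T14_20_18.392173
```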
## Latest results
These are the [latest results from run 2024-02-11T14:20:18.392173](https://huggingface.co/datasets/open-llm-leaderboard/details_ArianAskari__SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta/blob/main/results_2024-02-11T14-20-18.392173.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5989068889556914,
"acc_stderr": 0.03306588865476634,
"acc_norm": 0.6081578643232973,
"acc_norm_stderr": 0.03380748896101241,
"mc1": 0.3818849449204406,
"mc1_stderr": 0.017008101939163495,
"mc2": 0.5376745022515824,
"mc2_stderr": 0.01602462184426783
},
"harness|arc:challenge|25": {
"acc": 0.5631399317406144,
"acc_stderr": 0.014494421584256522,
"acc_norm": 0.5972696245733788,
"acc_norm_stderr": 0.01433223630679014
},
"harness|hellaswag|10": {
"acc": 0.6338378809002191,
"acc_stderr": 0.004807699539973415,
"acc_norm": 0.817167894841665,
"acc_norm_stderr": 0.003857388613533091
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5333333333333333,
"acc_stderr": 0.043097329010363554,
"acc_norm": 0.5333333333333333,
"acc_norm_stderr": 0.043097329010363554
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.618421052631579,
"acc_stderr": 0.03953173377749194,
"acc_norm": 0.618421052631579,
"acc_norm_stderr": 0.03953173377749194
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6377358490566037,
"acc_stderr": 0.0295822451283843,
"acc_norm": 0.6377358490566037,
"acc_norm_stderr": 0.0295822451283843
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.037455547914624555,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.037455547914624555
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.04784060704105654,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.04784060704105654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5361702127659574,
"acc_stderr": 0.03260038511835772,
"acc_norm": 0.5361702127659574,
"acc_norm_stderr": 0.03260038511835772
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3994708994708995,
"acc_stderr": 0.025225450284067884,
"acc_norm": 0.3994708994708995,
"acc_norm_stderr": 0.025225450284067884
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7483870967741936,
"acc_stderr": 0.024685979286239966,
"acc_norm": 0.7483870967741936,
"acc_norm_stderr": 0.024685979286239966
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.03514528562175008,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.03514528562175008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7373737373737373,
"acc_stderr": 0.03135305009533086,
"acc_norm": 0.7373737373737373,
"acc_norm_stderr": 0.03135305009533086
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6128205128205129,
"acc_stderr": 0.024697216930878934,
"acc_norm": 0.6128205128205129,
"acc_norm_stderr": 0.024697216930878934
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.02889774874113115,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.02889774874113115
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886793,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886793
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8055045871559633,
"acc_stderr": 0.016970289090458033,
"acc_norm": 0.8055045871559633,
"acc_norm_stderr": 0.016970289090458033
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5324074074074074,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.5324074074074074,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.029771775228145628,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.029771775228145628
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7341772151898734,
"acc_stderr": 0.028756799629658342,
"acc_norm": 0.7341772151898734,
"acc_norm_stderr": 0.028756799629658342
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6233183856502242,
"acc_stderr": 0.03252113489929188,
"acc_norm": 0.6233183856502242,
"acc_norm_stderr": 0.03252113489929188
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6641221374045801,
"acc_stderr": 0.041423137719966634,
"acc_norm": 0.6641221374045801,
"acc_norm_stderr": 0.041423137719966634
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7107438016528925,
"acc_stderr": 0.041391127276354626,
"acc_norm": 0.7107438016528925,
"acc_norm_stderr": 0.041391127276354626
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7116564417177914,
"acc_stderr": 0.035590395316173425,
"acc_norm": 0.7116564417177914,
"acc_norm_stderr": 0.035590395316173425
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.39285714285714285,
"acc_stderr": 0.04635550135609976,
"acc_norm": 0.39285714285714285,
"acc_norm_stderr": 0.04635550135609976
},
"harness|hendrycksTest-management|5": {
"acc": 0.7378640776699029,
"acc_stderr": 0.04354631077260595,
"acc_norm": 0.7378640776699029,
"acc_norm_stderr": 0.04354631077260595
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8931623931623932,
"acc_stderr": 0.02023714900899093,
"acc_norm": 0.8931623931623932,
"acc_norm_stderr": 0.02023714900899093
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7867177522349936,
"acc_stderr": 0.014648172749593515,
"acc_norm": 0.7867177522349936,
"acc_norm_stderr": 0.014648172749593515
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.02494679222527231,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.02494679222527231
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.32625698324022345,
"acc_stderr": 0.015680441518889178,
"acc_norm": 0.32625698324022345,
"acc_norm_stderr": 0.015680441518889178
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6895424836601307,
"acc_stderr": 0.026493033225145898,
"acc_norm": 0.6895424836601307,
"acc_norm_stderr": 0.026493033225145898
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6881028938906752,
"acc_stderr": 0.026311858071854155,
"acc_norm": 0.6881028938906752,
"acc_norm_stderr": 0.026311858071854155
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6759259259259259,
"acc_stderr": 0.026041766202717156,
"acc_norm": 0.6759259259259259,
"acc_norm_stderr": 0.026041766202717156
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.02973659252642444,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.02973659252642444
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4178617992177314,
"acc_stderr": 0.012596744108998562,
"acc_norm": 0.4178617992177314,
"acc_norm_stderr": 0.012596744108998562
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6654411764705882,
"acc_stderr": 0.028661996202335303,
"acc_norm": 0.6654411764705882,
"acc_norm_stderr": 0.028661996202335303
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6127450980392157,
"acc_stderr": 0.019706875804085637,
"acc_norm": 0.6127450980392157,
"acc_norm_stderr": 0.019706875804085637
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.046534298079135075,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6489795918367347,
"acc_stderr": 0.03055531675557364,
"acc_norm": 0.6489795918367347,
"acc_norm_stderr": 0.03055531675557364
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7960199004975125,
"acc_stderr": 0.02849317624532607,
"acc_norm": 0.7960199004975125,
"acc_norm_stderr": 0.02849317624532607
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.02917088550072767,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.02917088550072767
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3818849449204406,
"mc1_stderr": 0.017008101939163495,
"mc2": 0.5376745022515824,
"mc2_stderr": 0.01602462184426783
},
"harness|winogrande|5": {
"acc": 0.7466456195737964,
"acc_stderr": 0.012223754434233623
},
"harness|gsm8k|5": {
"acc": 0.12282031842304776,
"acc_stderr": 0.009041108602874664
}
}
```
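The per-task entries above all share the same shape (`acc`, `acc_stderr`, and so on), so you can sanity-check the headline numbers by averaging per-task accuracies yourself. A minimal sketch over a trimmed copy of the dict; the full results dict works the same way, though note the official leaderboard aggregation may weight or select tasks differently:

```python
# Trimmed excerpt of the results dict shown above.
results = {
    "harness|arc:challenge|25": {"acc": 0.5631399317406144},
    "harness|hellaswag|10": {"acc": 0.6338378809002191},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.27},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5333333333333333},
}

# Unweighted mean accuracy over the MMLU (hendrycksTest) subtasks only.
mmlu_accs = [v["acc"] for k, v in results.items()
             if k.startswith("harness|hendrycksTest-")]
mean_mmlu = sum(mmlu_accs) / len(mmlu_accs)
print(f"MMLU subtasks averaged: {mean_mmlu:.4f}")
```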
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/details_perlthoughts__openchat-3.5-1210-32k-8x7b-MoE | ---
pretty_name: Evaluation run of perlthoughts/openchat-3.5-1210-32k-8x7b-MoE
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [perlthoughts/openchat-3.5-1210-32k-8x7b-MoE](https://huggingface.co/perlthoughts/openchat-3.5-1210-32k-8x7b-MoE)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_perlthoughts__openchat-3.5-1210-32k-8x7b-MoE\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-05T03:11:16.908454](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__openchat-3.5-1210-32k-8x7b-MoE/blob/main/results_2024-01-05T03-11-16.908454.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6167149824796962,\n\
\ \"acc_stderr\": 0.03270785052087277,\n \"acc_norm\": 0.6202787181505718,\n\
\ \"acc_norm_stderr\": 0.03336449220180264,\n \"mc1\": 0.3292533659730722,\n\
\ \"mc1_stderr\": 0.016451264440068232,\n \"mc2\": 0.4931724783053433,\n\
\ \"mc2_stderr\": 0.015404387399947296\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5972696245733788,\n \"acc_stderr\": 0.01433223630679015,\n\
\ \"acc_norm\": 0.6459044368600683,\n \"acc_norm_stderr\": 0.013975454122756565\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6394144592710616,\n\
\ \"acc_stderr\": 0.004791890625834195,\n \"acc_norm\": 0.8406691894045011,\n\
\ \"acc_norm_stderr\": 0.0036523632532895825\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \
\ \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"\
acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.02890159361241178,\n\
\ \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.02890159361241178\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6944444444444444,\n\
\ \"acc_stderr\": 0.03852084696008534,\n \"acc_norm\": 0.6944444444444444,\n\
\ \"acc_norm_stderr\": 0.03852084696008534\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n\
\ \"acc_stderr\": 0.037336266553835096,\n \"acc_norm\": 0.6011560693641619,\n\
\ \"acc_norm_stderr\": 0.037336266553835096\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.04576665403207762,\n\
\ \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.04576665403207762\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n\
\ \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467383,\n\
\ \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467383\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3783068783068783,\n \"acc_stderr\": 0.02497695405315525,\n \"\
acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.02497695405315525\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n\
\ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n\
\ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7741935483870968,\n\
\ \"acc_stderr\": 0.023785577884181015,\n \"acc_norm\": 0.7741935483870968,\n\
\ \"acc_norm_stderr\": 0.023785577884181015\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.035107665979592174,\n\
\ \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.035107665979592174\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\"\
: 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7070707070707071,\n \"acc_stderr\": 0.03242497958178815,\n \"\
acc_norm\": 0.7070707070707071,\n \"acc_norm_stderr\": 0.03242497958178815\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758733,\n\
\ \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758733\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n\
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n \
\ \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6134453781512605,\n \"acc_stderr\": 0.03163145807552379,\n \
\ \"acc_norm\": 0.6134453781512605,\n \"acc_norm_stderr\": 0.03163145807552379\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2781456953642384,\n \"acc_stderr\": 0.03658603262763743,\n \"\
acc_norm\": 0.2781456953642384,\n \"acc_norm_stderr\": 0.03658603262763743\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8293577981651377,\n \"acc_stderr\": 0.01612927102509986,\n \"\
acc_norm\": 0.8293577981651377,\n \"acc_norm_stderr\": 0.01612927102509986\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.48148148148148145,\n \"acc_stderr\": 0.03407632093854052,\n \"\
acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.03407632093854052\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.803921568627451,\n \"acc_stderr\": 0.027865942286639325,\n \"\
acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639325\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7932489451476793,\n \"acc_stderr\": 0.0263616516683891,\n \
\ \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.0263616516683891\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n\
\ \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.036401182719909476,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.036401182719909476\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n\
\ \"acc_stderr\": 0.043733130409147614,\n \"acc_norm\": 0.7129629629629629,\n\
\ \"acc_norm_stderr\": 0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.034878251684978906,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.034878251684978906\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n\
\ \"acc_stderr\": 0.023365051491753715,\n \"acc_norm\": 0.8504273504273504,\n\
\ \"acc_norm_stderr\": 0.023365051491753715\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7956577266922095,\n\
\ \"acc_stderr\": 0.014419123980931894,\n \"acc_norm\": 0.7956577266922095,\n\
\ \"acc_norm_stderr\": 0.014419123980931894\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7138728323699421,\n \"acc_stderr\": 0.02433214677913413,\n\
\ \"acc_norm\": 0.7138728323699421,\n \"acc_norm_stderr\": 0.02433214677913413\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32737430167597764,\n\
\ \"acc_stderr\": 0.015694238967737386,\n \"acc_norm\": 0.32737430167597764,\n\
\ \"acc_norm_stderr\": 0.015694238967737386\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6928104575163399,\n \"acc_stderr\": 0.026415601914388992,\n\
\ \"acc_norm\": 0.6928104575163399,\n \"acc_norm_stderr\": 0.026415601914388992\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6784565916398714,\n\
\ \"acc_stderr\": 0.026527724079528872,\n \"acc_norm\": 0.6784565916398714,\n\
\ \"acc_norm_stderr\": 0.026527724079528872\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.024922001168886335,\n\
\ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.024922001168886335\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4397163120567376,\n \"acc_stderr\": 0.02960991207559411,\n \
\ \"acc_norm\": 0.4397163120567376,\n \"acc_norm_stderr\": 0.02960991207559411\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45436766623207303,\n\
\ \"acc_stderr\": 0.012716941720734813,\n \"acc_norm\": 0.45436766623207303,\n\
\ \"acc_norm_stderr\": 0.012716941720734813\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6654411764705882,\n \"acc_stderr\": 0.028661996202335307,\n\
\ \"acc_norm\": 0.6654411764705882,\n \"acc_norm_stderr\": 0.028661996202335307\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6290849673202614,\n \"acc_stderr\": 0.01954210156485412,\n \
\ \"acc_norm\": 0.6290849673202614,\n \"acc_norm_stderr\": 0.01954210156485412\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\
\ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\
\ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.029393609319879804,\n\
\ \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.029393609319879804\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8009950248756219,\n\
\ \"acc_stderr\": 0.028231365092758406,\n \"acc_norm\": 0.8009950248756219,\n\
\ \"acc_norm_stderr\": 0.028231365092758406\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n\
\ \"acc_stderr\": 0.0389136449583582,\n \"acc_norm\": 0.4879518072289157,\n\
\ \"acc_norm_stderr\": 0.0389136449583582\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n\
\ \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3292533659730722,\n\
\ \"mc1_stderr\": 0.016451264440068232,\n \"mc2\": 0.4931724783053433,\n\
\ \"mc2_stderr\": 0.015404387399947296\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7916337805840569,\n \"acc_stderr\": 0.011414554399987729\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.48142532221379836,\n \
\ \"acc_stderr\": 0.013762977910317583\n }\n}\n```"
repo_url: https://huggingface.co/perlthoughts/openchat-3.5-1210-32k-8x7b-MoE
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|arc:challenge|25_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|gsm8k|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hellaswag|10_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T03-11-16.908454.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-05T03-11-16.908454.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- '**/details_harness|winogrande|5_2024-01-05T03-11-16.908454.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-05T03-11-16.908454.parquet'
- config_name: results
data_files:
- split: 2024_01_05T03_11_16.908454
path:
- results_2024-01-05T03-11-16.908454.parquet
- split: latest
path:
- results_2024-01-05T03-11-16.908454.parquet
---
# Dataset Card for Evaluation run of perlthoughts/openchat-3.5-1210-32k-8x7b-MoE
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [perlthoughts/openchat-3.5-1210-32k-8x7b-MoE](https://huggingface.co/perlthoughts/openchat-3.5-1210-32k-8x7b-MoE) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_perlthoughts__openchat-3.5-1210-32k-8x7b-MoE",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-05T03:11:16.908454](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__openchat-3.5-1210-32k-8x7b-MoE/blob/main/results_2024-01-05T03-11-16.908454.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6167149824796962,
"acc_stderr": 0.03270785052087277,
"acc_norm": 0.6202787181505718,
"acc_norm_stderr": 0.03336449220180264,
"mc1": 0.3292533659730722,
"mc1_stderr": 0.016451264440068232,
"mc2": 0.4931724783053433,
"mc2_stderr": 0.015404387399947296
},
"harness|arc:challenge|25": {
"acc": 0.5972696245733788,
"acc_stderr": 0.01433223630679015,
"acc_norm": 0.6459044368600683,
"acc_norm_stderr": 0.013975454122756565
},
"harness|hellaswag|10": {
"acc": 0.6394144592710616,
"acc_stderr": 0.004791890625834195,
"acc_norm": 0.8406691894045011,
"acc_norm_stderr": 0.0036523632532895825
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6716981132075471,
"acc_stderr": 0.02890159361241178,
"acc_norm": 0.6716981132075471,
"acc_norm_stderr": 0.02890159361241178
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.03852084696008534,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.03852084696008534
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.037336266553835096,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.037336266553835096
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.04576665403207762,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.04576665403207762
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.03261936918467383,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.03261936918467383
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3783068783068783,
"acc_stderr": 0.02497695405315525,
"acc_norm": 0.3783068783068783,
"acc_norm_stderr": 0.02497695405315525
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181015,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181015
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.46798029556650245,
"acc_stderr": 0.035107665979592174,
"acc_norm": 0.46798029556650245,
"acc_norm_stderr": 0.035107665979592174
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7070707070707071,
"acc_stderr": 0.03242497958178815,
"acc_norm": 0.7070707070707071,
"acc_norm_stderr": 0.03242497958178815
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8704663212435233,
"acc_stderr": 0.024233532297758733,
"acc_norm": 0.8704663212435233,
"acc_norm_stderr": 0.024233532297758733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563976,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563976
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.027634907264178544,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.027634907264178544
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6134453781512605,
"acc_stderr": 0.03163145807552379,
"acc_norm": 0.6134453781512605,
"acc_norm_stderr": 0.03163145807552379
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2781456953642384,
"acc_stderr": 0.03658603262763743,
"acc_norm": 0.2781456953642384,
"acc_norm_stderr": 0.03658603262763743
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8293577981651377,
"acc_stderr": 0.01612927102509986,
"acc_norm": 0.8293577981651377,
"acc_norm_stderr": 0.01612927102509986
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.03407632093854052,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.03407632093854052
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639325,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639325
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7932489451476793,
"acc_stderr": 0.0263616516683891,
"acc_norm": 0.7932489451476793,
"acc_norm_stderr": 0.0263616516683891
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.036401182719909476,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.036401182719909476
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7129629629629629,
"acc_stderr": 0.043733130409147614,
"acc_norm": 0.7129629629629629,
"acc_norm_stderr": 0.043733130409147614
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.034878251684978906,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.034878251684978906
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8504273504273504,
"acc_stderr": 0.023365051491753715,
"acc_norm": 0.8504273504273504,
"acc_norm_stderr": 0.023365051491753715
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7956577266922095,
"acc_stderr": 0.014419123980931894,
"acc_norm": 0.7956577266922095,
"acc_norm_stderr": 0.014419123980931894
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7138728323699421,
"acc_stderr": 0.02433214677913413,
"acc_norm": 0.7138728323699421,
"acc_norm_stderr": 0.02433214677913413
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.32737430167597764,
"acc_stderr": 0.015694238967737386,
"acc_norm": 0.32737430167597764,
"acc_norm_stderr": 0.015694238967737386
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6928104575163399,
"acc_stderr": 0.026415601914388992,
"acc_norm": 0.6928104575163399,
"acc_norm_stderr": 0.026415601914388992
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6784565916398714,
"acc_stderr": 0.026527724079528872,
"acc_norm": 0.6784565916398714,
"acc_norm_stderr": 0.026527724079528872
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.024922001168886335,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.024922001168886335
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4397163120567376,
"acc_stderr": 0.02960991207559411,
"acc_norm": 0.4397163120567376,
"acc_norm_stderr": 0.02960991207559411
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45436766623207303,
"acc_stderr": 0.012716941720734813,
"acc_norm": 0.45436766623207303,
"acc_norm_stderr": 0.012716941720734813
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6654411764705882,
"acc_stderr": 0.028661996202335307,
"acc_norm": 0.6654411764705882,
"acc_norm_stderr": 0.028661996202335307
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6290849673202614,
"acc_stderr": 0.01954210156485412,
"acc_norm": 0.6290849673202614,
"acc_norm_stderr": 0.01954210156485412
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6979591836734694,
"acc_stderr": 0.029393609319879804,
"acc_norm": 0.6979591836734694,
"acc_norm_stderr": 0.029393609319879804
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8009950248756219,
"acc_stderr": 0.028231365092758406,
"acc_norm": 0.8009950248756219,
"acc_norm_stderr": 0.028231365092758406
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4879518072289157,
"acc_stderr": 0.0389136449583582,
"acc_norm": 0.4879518072289157,
"acc_norm_stderr": 0.0389136449583582
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3292533659730722,
"mc1_stderr": 0.016451264440068232,
"mc2": 0.4931724783053433,
"mc2_stderr": 0.015404387399947296
},
"harness|winogrande|5": {
"acc": 0.7916337805840569,
"acc_stderr": 0.011414554399987729
},
"harness|gsm8k|5": {
"acc": 0.48142532221379836,
"acc_stderr": 0.013762977910317583
}
}
```
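The `"all"` block above aggregates the per-task scores. A minimal sketch of recomputing a mean accuracy from a dict shaped like the JSON above (the three tasks chosen here are illustrative, and the leaderboard's exact aggregation may weight or select tasks differently):

```python
# Recompute a mean accuracy from per-task entries shaped like the JSON above.
# The subset of tasks is illustrative; the leaderboard may aggregate differently.
results = {
    "harness|arc:challenge|25": {"acc": 0.5972696245733788},
    "harness|hellaswag|10": {"acc": 0.6394144592710616},
    "harness|winogrande|5": {"acc": 0.7916337805840569},
}

mean_acc = sum(task["acc"] for task in results.values()) / len(results)
print(round(mean_acc, 4))
```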
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
dhruvabansal/llama-training-ablation | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: training-ablation
size_categories:
- 1K<n<10K
---
Each dataset has exactly three columns: instruction, input, output. Everything is clean and can be processed to create few-shot training examples. |
sam1120/terrain-jackal-morning-344-v1.0 | ---
dataset_info:
features:
- name: name
dtype: string
- name: pixel_values
dtype: image
- name: labels
dtype: image
splits:
- name: train
num_bytes: 955653437.0
num_examples: 344
download_size: 276803569
dataset_size: 955653437.0
---
# Dataset Card for "terrain-jackal-morning-344-v1.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sminpark/ds-alpha-small-dataset-v1.3 | ---
license: gpl
---
|
Denissilva88/JJS | ---
license: openrail
---
|
CyberHarem/iris_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of iris/アイリス/爱丽丝 (Arknights)
This is the dataset of iris/アイリス/爱丽丝 (Arknights), containing 44 images and their tags.
The core tags of this character are `long_hair, animal_ears, blue_eyes, hair_ornament, cat_ears, bow, hair_bow, hair_flower, very_long_hair, blonde_hair, parted_bangs, animal_ear_fluff, black_bow, brown_hair, drill_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 44 | 86.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 44 | 71.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 108 | 132.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/iris_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, long_sleeves, solo, black_jacket, looking_at_viewer, closed_mouth, white_shirt, blue_rose, open_jacket, blue_skirt, frills, blue_nails, holding_fan, folding_fan, staff |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | solo | black_jacket | looking_at_viewer | closed_mouth | white_shirt | blue_rose | open_jacket | blue_skirt | frills | blue_nails | holding_fan | folding_fan | staff |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:---------------|:--------------------|:---------------|:--------------|:------------|:--------------|:-------------|:---------|:-------------|:--------------|:--------------|:--------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/katori_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of katori/香取/香取 (Kantai Collection)
This is the dataset of katori/香取/香取 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `glasses, green_eyes, folded_ponytail, breasts, large_breasts, brown_hair, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 423.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 304.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1097 | 598.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 397.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1097 | 738.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/katori_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, epaulettes, military_uniform, necktie, solo, white_gloves, collared_shirt, double-breasted, jacket, black_pantyhose, miniskirt, smile, looking_at_viewer, parted_bangs, light_brown_hair, pencil_skirt, simple_background, white_background, grey_skirt, riding_crop, long_sleeves |
| 1 | 15 |  |  |  |  |  | 1girl, collared_shirt, double-breasted, epaulettes, looking_at_viewer, military_uniform, solo, upper_body, parted_bangs, simple_background, smile, white_gloves, white_background, jacket, long_sleeves, light_brown_hair, black_necktie, grey_shirt |
| 2 | 6 |  |  |  |  |  | 1girl, epaulettes, military_uniform, miniskirt, necktie, pantyhose, riding_crop, solo, white_gloves, smile |
| 3 | 5 |  |  |  |  |  | 1girl, epaulettes, military_uniform, necktie, panties_under_pantyhose, solo, white_gloves, black_pantyhose, sitting, smile, looking_at_viewer, miniskirt, feet |
| 4 | 5 |  |  |  |  |  | 1boy, 1girl, epaulettes, hetero, military_uniform, solo_focus, blush, necktie, penis, white_gloves, smile, bar_censor, heart, huge_breasts, looking_at_viewer, nipples, paizuri |
| 5 | 8 |  |  |  |  |  | 1girl, light_brown_hair, looking_at_viewer, solo, blush, cleavage, parted_bangs, rimless_eyewear, simple_background, long_hair, side-tie_bikini_bottom, white_background, cowboy_shot, navel, white_bikini, front-tie_top |
| 6 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, bikini, blush, navel, cleavage, pointer, twitter_username |
| 7 | 7 |  |  |  |  |  | 1girl, competition_swimsuit, cowboy_shot, solo, parted_bangs, collarbone, highleg_swimsuit, looking_at_viewer, simple_background, blue_one-piece_swimsuit, dated, jacket, twitter_username, white_background, white_one-piece_swimsuit |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | epaulettes | military_uniform | necktie | solo | white_gloves | collared_shirt | double-breasted | jacket | black_pantyhose | miniskirt | smile | looking_at_viewer | parted_bangs | light_brown_hair | pencil_skirt | simple_background | white_background | grey_skirt | riding_crop | long_sleeves | upper_body | black_necktie | grey_shirt | pantyhose | panties_under_pantyhose | sitting | feet | 1boy | hetero | solo_focus | blush | penis | bar_censor | heart | huge_breasts | nipples | paizuri | cleavage | rimless_eyewear | long_hair | side-tie_bikini_bottom | cowboy_shot | navel | white_bikini | front-tie_top | bikini | pointer | twitter_username | competition_swimsuit | collarbone | highleg_swimsuit | blue_one-piece_swimsuit | dated | white_one-piece_swimsuit |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------------------|:----------|:-------|:---------------|:-----------------|:------------------|:---------|:------------------|:------------|:--------|:--------------------|:---------------|:-------------------|:---------------|:--------------------|:-------------------|:-------------|:--------------|:---------------|:-------------|:----------------|:-------------|:------------|:--------------------------|:----------|:-------|:-------|:---------|:-------------|:--------|:--------|:-------------|:--------|:---------------|:----------|:----------|:-----------|:------------------|:------------|:-------------------------|:--------------|:--------|:---------------|:----------------|:---------|:----------|:-------------------|:-----------------------|:-------------|:-------------------|:--------------------------|:--------|:---------------------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | X | X | | X | X | X | X | X | | | X | X | X | X | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | X | | | | | X | X | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | | | X | | | | | | | | X | X | X | | X | X | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | X | | | X | X | X | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | | | X | | | | X | | | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X |
|
itzzdeep/mrbeast-thumbnails | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 20560694.0
num_examples: 150
download_size: 20535577
dataset_size: 20560694.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yzhuang/autotree_automl_default-of-credit-card-clients_gosdt_l512_d3_sd2 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float64
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float64
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 10863200000
num_examples: 100000
- name: validation
num_bytes: 1086320000
num_examples: 10000
download_size: 2039406876
dataset_size: 11949520000
---
# Dataset Card for "autotree_automl_default-of-credit-card-clients_gosdt_l512_d3_sd2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
universeTBD/arxiv-qa-astro-ph | ---
dataset_info:
features:
- name: index
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4108026
num_examples: 10356
download_size: 2402562
dataset_size: 4108026
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv-qa-astro-ph"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_biology-neg-prepend-verbal | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: neg_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
splits:
- name: dev
num_bytes: 8495
num_examples: 5
- name: test
num_bytes: 1406615
num_examples: 144
download_size: 196092
dataset_size: 1415110
---
# Dataset Card for "mmlu-college_biology-neg-prepend-verbal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Corianas__256_5epoch | ---
pretty_name: Evaluation run of Corianas/256_5epoch
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Corianas/256_5epoch](https://huggingface.co/Corianas/256_5epoch) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Corianas__256_5epoch\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T17:10:44.545164](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__256_5epoch/blob/main/results_2023-09-17T17-10-44.545164.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.006082214765100671,\n\
\ \"em_stderr\": 0.0007962432393028846,\n \"f1\": 0.04929320469798652,\n\
\ \"f1_stderr\": 0.0015028533751229739,\n \"acc\": 0.26475206337105733,\n\
\ \"acc_stderr\": 0.0076718947223475545\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.006082214765100671,\n \"em_stderr\": 0.0007962432393028846,\n\
\ \"f1\": 0.04929320469798652,\n \"f1_stderr\": 0.0015028533751229739\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.002274450341167551,\n \
\ \"acc_stderr\": 0.0013121578148674133\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5272296764009471,\n \"acc_stderr\": 0.014031631629827696\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Corianas/256_5epoch
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T17_10_44.545164
path:
- '**/details_harness|drop|3_2023-09-17T17-10-44.545164.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T17-10-44.545164.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T17_10_44.545164
path:
- '**/details_harness|gsm8k|5_2023-09-17T17-10-44.545164.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T17-10-44.545164.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T17_10_44.545164
path:
- '**/details_harness|winogrande|5_2023-09-17T17-10-44.545164.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T17-10-44.545164.parquet'
- config_name: results
data_files:
- split: 2023_09_17T17_10_44.545164
path:
- results_2023-09-17T17-10-44.545164.parquet
- split: latest
path:
- results_2023-09-17T17-10-44.545164.parquet
---
# Dataset Card for Evaluation run of Corianas/256_5epoch
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Corianas/256_5epoch
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Corianas/256_5epoch](https://huggingface.co/Corianas/256_5epoch) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Corianas__256_5epoch",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T17:10:44.545164](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__256_5epoch/blob/main/results_2023-09-17T17-10-44.545164.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.006082214765100671,
"em_stderr": 0.0007962432393028846,
"f1": 0.04929320469798652,
"f1_stderr": 0.0015028533751229739,
"acc": 0.26475206337105733,
"acc_stderr": 0.0076718947223475545
},
"harness|drop|3": {
"em": 0.006082214765100671,
"em_stderr": 0.0007962432393028846,
"f1": 0.04929320469798652,
"f1_stderr": 0.0015028533751229739
},
"harness|gsm8k|5": {
"acc": 0.002274450341167551,
"acc_stderr": 0.0013121578148674133
},
"harness|winogrande|5": {
"acc": 0.5272296764009471,
"acc_stderr": 0.014031631629827696
}
}
```
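As a quick sanity check on the numbers above, the aggregate `acc` under `all` appears to be the unweighted mean of the per-task accuracies. A minimal sketch, assuming that aggregation rule (the values are copied verbatim from the results block):

```python
import math

# Per-task accuracies copied from the results above
gsm8k_acc = 0.002274450341167551
winogrande_acc = 0.5272296764009471

# The "all" aggregate appears to be the unweighted mean of the task accuracies
aggregate_acc = (gsm8k_acc + winogrande_acc) / 2
assert math.isclose(aggregate_acc, 0.26475206337105733, rel_tol=1e-9)
```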
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Bieubr/CarlosGemer | ---
license: openrail
---
|
Talelaw/soningtidsberegninger | ---
license: eupl-1.1
---
|
Fece228/latin-literature-dataset-170M | ---
language:
- la
tags:
- text
- linguistics
- NLP
- Latin
- literature
size_categories:
- 100M<n<1B
---
This is a dataset collected from all the texts available at Corpus Corporum, which includes probably all the literary works ever written in Latin. The dataset is split into two parts: one preprocessed with basic CLTK tools, ready for use, and one containing the raw text data. It must be noted, however, that the latter contains text in Greek, Hebrew, and other languages, as well as references and contractions |
udkai/klexikon_dpo | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 7156517
num_examples: 2893
download_size: 4334446
dataset_size: 7156517
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- de
pretty_name: Kinder Lexikon Direct Preference Optimization Dataset
tags:
- simple-german
- dpo
- language simplification
---
A version of https://huggingface.co/datasets/dennlinger/klexikon that can be used for Direct Preference Optimization (DPO) of large language models generating sentences in simple German. |
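For reference, the DPO objective that such (prompt, chosen, rejected) triples feed into can be sketched in a few lines of plain Python. This is illustrative only; the log-probabilities below are stand-in numbers, not computed from any real model:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair: -log(sigmoid(beta * margin)),
    where the margin compares policy vs. reference log-probs of the
    chosen and rejected completions."""
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Stand-in log-probabilities: the policy prefers the chosen (simpler) text
loss_good = dpo_loss(policy_chosen=-10.0, policy_rejected=-20.0,
                     ref_chosen=-15.0, ref_rejected=-15.0)
loss_flat = dpo_loss(policy_chosen=-15.0, policy_rejected=-15.0,
                     ref_chosen=-15.0, ref_rejected=-15.0)
assert loss_good < loss_flat  # a larger preference margin lowers the loss
```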
Mitsuki-Sakamoto/alpaca_farm-deberta-re-pref-64-_fil_self_1.4b_bo2_100_kl_0.1_prm_160m_thr_1.0_seed_1 | ---
dataset_info:
config_name: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: index
dtype: int64
- name: filtered_epoch
dtype: int64
- name: gen_reward
dtype: float64
- name: gen_response
dtype: string
splits:
- name: epoch_0
num_bytes: 43630616
num_examples: 18929
- name: epoch_1
num_bytes: 43868088
num_examples: 18929
- name: epoch_2
num_bytes: 43827793
num_examples: 18929
- name: epoch_3
num_bytes: 43780418
num_examples: 18929
- name: epoch_4
num_bytes: 43767895
num_examples: 18929
- name: epoch_5
num_bytes: 43748008
num_examples: 18929
- name: epoch_6
num_bytes: 43740763
num_examples: 18929
- name: epoch_7
num_bytes: 43732082
num_examples: 18929
- name: epoch_8
num_bytes: 43726319
num_examples: 18929
- name: epoch_9
num_bytes: 43727460
num_examples: 18929
download_size: 232404489
dataset_size: 437549442
configs:
- config_name: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500
data_files:
- split: epoch_0
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_0-*
- split: epoch_1
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_1-*
- split: epoch_2
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_2-*
- split: epoch_3
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_3-*
- split: epoch_4
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_4-*
- split: epoch_5
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_5-*
- split: epoch_6
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_6-*
- split: epoch_7
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_7-*
- split: epoch_8
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_8-*
- split: epoch_9
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/epoch_9-*
---
|
liuyanchen1015/MULTI_VALUE_mnli_double_determiners | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 719
num_examples: 5
- name: dev_mismatched
num_bytes: 1459
num_examples: 6
- name: test_matched
num_bytes: 1145
num_examples: 8
- name: test_mismatched
num_bytes: 439
num_examples: 3
- name: train
num_bytes: 54368
num_examples: 261
download_size: 37140
dataset_size: 58130
---
# Dataset Card for "MULTI_VALUE_mnli_double_determiners"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shaowinw/seismic_inversion | ---
license: cc-by-4.0
---
# How to download this
```
$ git lfs install
$ git clone https://huggingface.co/shaowinw/seismic_inversion
``` |
arieg/cluster02_large_10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '000140'
'1': 001259
'2': '004507'
'3': 005940
'4': '006443'
'5': 007483
'6': 007487
'7': 007872
'8': '011237'
'9': 012986
'10': '014541'
'11': '014576'
'12': '014661'
'13': 018037
'14': 018038
'15': '022477'
'16': '024367'
'17': 025668
'18': 028241
'19': 028266
'20': '030056'
'21': '032333'
'22': '032337'
'23': 032339
'24': '035543'
'25': 036999
'26': 039259
'27': 039658
'28': '040657'
'29': '042020'
'30': '042023'
'31': '042025'
'32': '042030'
'33': '042046'
'34': '042372'
'35': '043030'
'36': 043598
'37': '043761'
'38': 043965
'39': 044794
'40': 046839
'41': 047197
'42': 047835
'43': 049394
'44': 049478
'45': '051655'
'46': 051659
'47': '052120'
'48': '052122'
'49': '052123'
'50': '052125'
'51': '053154'
'52': '054153'
'53': 055826
'54': 055830
'55': 055831
'56': '057371'
'57': '057640'
'58': '057665'
'59': 057691
'60': 059678
'61': '060170'
'62': '061160'
'63': '061736'
'64': 061820
'65': 061821
'66': 062592
'67': '064364'
'68': 064629
'69': '066405'
'70': '067366'
'71': '067367'
'72': '070426'
'73': 072149
'74': 072788
'75': 073309
'76': '073467'
'77': 075428
'78': 075784
'79': 075862
'80': '076074'
'81': 076079
'82': 079593
'83': 080518
'84': 085966
'85': 086140
'86': 091443
'87': 094449
'88': 094628
'89': 095908
'90': 096168
'91': 096696
'92': 097374
'93': 099095
'94': '101111'
'95': '101112'
'96': '107432'
'97': '107567'
'98': '108012'
'99': '108529'
'100': '109445'
'101': '109449'
'102': '109450'
'103': '110263'
'104': '111392'
'105': '112197'
'106': '113018'
'107': '113360'
'108': '114036'
'109': '114041'
'110': '116239'
'111': '116735'
'112': '117170'
'113': '119592'
'114': '120196'
'115': '121273'
'116': '122077'
'117': '122082'
'118': '122201'
'119': '122247'
'120': '125190'
'121': '126017'
'122': '126300'
'123': '126411'
'124': '126718'
'125': '128469'
'126': '129887'
'127': '129972'
'128': '130129'
'129': '130709'
'130': '130711'
'131': '131624'
'132': '131787'
'133': '134643'
'134': '134934'
'135': '135028'
'136': '135043'
'137': '135336'
'138': '137898'
'139': '139330'
'140': '139804'
'141': '140421'
'142': '141903'
'143': '144171'
'144': '144551'
'145': '144935'
'146': '145749'
'147': '145780'
'148': '146639'
'149': '148303'
'150': '148518'
'151': '148608'
'152': '149623'
'153': '149953'
splits:
- name: train
num_bytes: 82713531.4
num_examples: 1540
download_size: 82253270
dataset_size: 82713531.4
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
one-sec-cv12/chunk_124 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 28649869776.125
num_examples: 298287
download_size: 26744267899
dataset_size: 28649869776.125
---
# Dataset Card for "chunk_124"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
feynman-integrals-nn/t331ZZZM-s12_24 | ---
license: cc-by-4.0
---
# t331ZZZM
* [data](https://huggingface.co/datasets/feynman-integrals-nn/t331ZZZM-s12_24)
* [model](https://huggingface.co/feynman-integrals-nn/t331ZZZM-dimensionless)
* [source](https://gitlab.com/feynman-integrals-nn/feynman-integrals-nn/-/tree/main/t331ZZZM)
Warning: deprecated dataset
|
persiannlp/parsinlu_reading_comprehension | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia|google
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for PersiNLU (Reading Comprehension)
## Table of Contents
- [Dataset Card for PersiNLU (Reading Comprehension)](#dataset-card-for-persi_nlu_reading_comprehension)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{
'question': 'پیامبر در چه سالی به پیامبری رسید؟',
'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
'passage': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه\u200cهای اطراف آن دیار به تفکر و عبادت می\u200cپرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
'answers': [
{'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
]
}
```
### Data Fields
- `question`: the question, mined using Google auto-complete.
- `passage`: the passage that contains the answer.
- `url`: the url from which the passage was mined.
- `answers`: a list of answers, containing the string and the index of the answer.
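The `answer_start` index points into `passage`, so an annotated span can be recovered by slicing. A minimal sketch with a made-up English record (the real data is Persian; only the field names match the schema above):

```python
# Hypothetical record mirroring the schema above (not actual dataset content)
record = {
    "question": "What jumps over the lazy dog?",
    "passage": "The quick brown fox jumps over the lazy dog.",
    "answers": [{"answer_start": 4, "answer_text": "quick brown fox"}],
}

ans = record["answers"][0]
start = ans["answer_start"]
span = record["passage"][start:start + len(ans["answer_text"])]
assert span == ans["answer_text"]  # the offset recovers the annotated span
```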
### Data Splits
The train/test split contains 600/575 samples.
## Dataset Creation
### Curation Rationale
The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
tomekkorbak/shp_with_features_20k | ---
dataset_info:
features:
- name: post_id
dtype: string
- name: domain
dtype: string
- name: upvote_ratio
dtype: float64
- name: history
dtype: string
- name: c_root_id_A
dtype: string
- name: c_root_id_B
dtype: string
- name: created_at_utc_A
dtype: int64
- name: created_at_utc_B
dtype: int64
- name: score_A
dtype: int64
- name: score_B
dtype: int64
- name: human_ref_A
dtype: string
- name: human_ref_B
dtype: string
- name: labels
dtype: int64
- name: seconds_difference
dtype: float64
- name: score_ratio
dtype: float64
- name: helpfulness_A
dtype: float64
- name: helpfulness_B
dtype: float64
- name: specificity_A
dtype: float64
- name: specificity_B
dtype: float64
- name: intent_A
dtype: float64
- name: intent_B
dtype: float64
- name: factuality_A
dtype: float64
- name: factuality_B
dtype: float64
- name: easy-to-understand_A
dtype: float64
- name: easy-to-understand_B
dtype: float64
- name: relevance_A
dtype: float64
- name: relevance_B
dtype: float64
- name: readability_A
dtype: float64
- name: readability_B
dtype: float64
- name: enough-detail_A
dtype: float64
- name: enough-detail_B
dtype: float64
- name: biased:_A
dtype: float64
- name: biased:_B
dtype: float64
- name: fail-to-consider-individual-preferences_A
dtype: float64
- name: fail-to-consider-individual-preferences_B
dtype: float64
- name: repetetive_A
dtype: float64
- name: repetetive_B
dtype: float64
- name: fail-to-consider-context_A
dtype: float64
- name: fail-to-consider-context_B
dtype: float64
- name: too-long_A
dtype: float64
- name: too-long_B
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 20532157.0
num_examples: 9459
- name: test
num_bytes: 20532157.0
num_examples: 9459
download_size: 23638147
dataset_size: 41064314.0
---
# Dataset Card for "shp_with_features_20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dylanmontoya22/biomedical-ner | ---
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence: string
- name: prediction
sequence: 'null'
- name: prediction_agent
dtype: string
- name: annotation
list:
- name: end
dtype: int64
- name: label
dtype: string
- name: start
dtype: int64
- name: annotation_agent
dtype: string
- name: vectors
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: annotated
struct:
- name: mentions
list:
- name: capitalness
dtype: string
- name: chars_length
dtype: int64
- name: density
dtype: float64
- name: label
dtype: string
- name: score
dtype: float64
- name: tokens_length
dtype: int64
- name: value
dtype: string
- name: tags
list:
- name: tag
dtype: string
- name: value
dtype: string
- name: predicted
struct:
- name: mentions
sequence: 'null'
- name: tags
list:
- name: tag
dtype: string
- name: value
dtype: string
- name: text_length
dtype: int64
- name: tokens
list:
- name: capitalness
dtype: string
- name: char_end
dtype: int64
- name: char_start
dtype: int64
- name: custom
dtype: 'null'
- name: idx
dtype: int64
- name: length
dtype: int64
- name: score
dtype: 'null'
- name: tag
dtype: string
- name: value
dtype: string
- name: tokens_length
dtype: int64
splits:
- name: train
num_bytes: 1055927
num_examples: 1000
download_size: 184089
dataset_size: 1055927
---
# Dataset Card for "biomedical-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Laethitia/Gaarabr | ---
license: openrail
---
|
open-llm-leaderboard/details_Linly-AI__Chinese-LLaMA-2-7B-hf | ---
pretty_name: Evaluation run of Linly-AI/Chinese-LLaMA-2-7B-hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Linly-AI/Chinese-LLaMA-2-7B-hf](https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Linly-AI__Chinese-LLaMA-2-7B-hf\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-29T10:13:08.335270](https://huggingface.co/datasets/open-llm-leaderboard/details_Linly-AI__Chinese-LLaMA-2-7B-hf/blob/main/results_2023-10-29T10-13-08.335270.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.20123741610738255,\n\
\ \"em_stderr\": 0.004105848061320724,\n \"f1\": 0.24458682885905994,\n\
\ \"f1_stderr\": 0.00409620440356687,\n \"acc\": 0.38191288394439116,\n\
\ \"acc_stderr\": 0.009754960327281063\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.20123741610738255,\n \"em_stderr\": 0.004105848061320724,\n\
\ \"f1\": 0.24458682885905994,\n \"f1_stderr\": 0.00409620440356687\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0621683093252464,\n \
\ \"acc_stderr\": 0.006651035644531703\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7016574585635359,\n \"acc_stderr\": 0.012858885010030425\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|arc:challenge|25_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_29T10_13_08.335270
path:
- '**/details_harness|drop|3_2023-10-29T10-13-08.335270.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-29T10-13-08.335270.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_29T10_13_08.335270
path:
- '**/details_harness|gsm8k|5_2023-10-29T10-13-08.335270.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-29T10-13-08.335270.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hellaswag|10_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T14-35-23.324449.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-01T14-35-23.324449.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-01T14-35-23.324449.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_29T10_13_08.335270
path:
- '**/details_harness|winogrande|5_2023-10-29T10-13-08.335270.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-29T10-13-08.335270.parquet'
- config_name: results
data_files:
- split: 2023_10_01T14_35_23.324449
path:
- results_2023-10-01T14-35-23.324449.parquet
- split: 2023_10_29T10_13_08.335270
path:
- results_2023-10-29T10-13-08.335270.parquet
- split: latest
path:
- results_2023-10-29T10-13-08.335270.parquet
---
# Dataset Card for Evaluation run of Linly-AI/Chinese-LLaMA-2-7B-hf
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Linly-AI/Chinese-LLaMA-2-7B-hf](https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
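Because the timestamped split names use zero-padded fields, they sort lexicographically in chronological order, so the most recent run can be identified without parsing dates. A minimal sketch (the split names below are taken from this card's config list):

```python
# Timestamped split names such as "2023_10_29T10_13_08.335270" are
# zero-padded, so lexicographic order matches chronological order and
# the newest run is simply the maximum string.
splits = [
    "2023_10_01T14_35_23.324449",
    "2023_10_29T10_13_08.335270",
]
latest = max(splits)
print(latest)  # 2023_10_29T10_13_08.335270
```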
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Linly-AI__Chinese-LLaMA-2-7B-hf",
"harness_winogrande_5",
split="train")
```
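As the example above suggests, the details repository id is derived from the model id by prefixing `open-llm-leaderboard/details_` and replacing the `/` with `__`. A small hypothetical helper (not part of any library) makes this mapping explicit:

```python
def details_repo(model_id: str) -> str:
    """Map a Hub model id to its leaderboard details dataset id.

    Hypothetical helper: the mapping is inferred from the repo names
    used on this card, not from an official API.
    """
    return "open-llm-leaderboard/details_" + model_id.replace("/", "__")

print(details_repo("Linly-AI/Chinese-LLaMA-2-7B-hf"))
# open-llm-leaderboard/details_Linly-AI__Chinese-LLaMA-2-7B-hf
```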
## Latest results
These are the [latest results from run 2023-10-29T10:13:08.335270](https://huggingface.co/datasets/open-llm-leaderboard/details_Linly-AI__Chinese-LLaMA-2-7B-hf/blob/main/results_2023-10-29T10-13-08.335270.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.20123741610738255,
"em_stderr": 0.004105848061320724,
"f1": 0.24458682885905994,
"f1_stderr": 0.00409620440356687,
"acc": 0.38191288394439116,
"acc_stderr": 0.009754960327281063
},
"harness|drop|3": {
"em": 0.20123741610738255,
"em_stderr": 0.004105848061320724,
"f1": 0.24458682885905994,
"f1_stderr": 0.00409620440356687
},
"harness|gsm8k|5": {
"acc": 0.0621683093252464,
"acc_stderr": 0.006651035644531703
},
"harness|winogrande|5": {
"acc": 0.7016574585635359,
"acc_stderr": 0.012858885010030425
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
johnnyclee/chats | ---
configs:
- config_name: default
data_files:
- split: train
path: '**/*.jsonl'
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
unaidedelf87777/SlimOrca-with-ids | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: float64
splits:
- name: train
num_bytes: 930346157
num_examples: 517982
download_size: 469683669
dataset_size: 930346157
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DEplain/DEplain-APA-sent | ---
annotations_creators:
- expert-generated
language:
- de
language_creators:
- expert-generated
license:
- other
multilinguality:
- translation
- monolingual
pretty_name: DEplain-APA-sent
size_categories:
- 10K<n<100K
source_datasets:
- extended|DEplain-APA-doc
tags:
- sentence simplification
- web-text
- plain language
- easy-to-read language
task_categories:
- text2text-generation
task_ids:
- text-simplification
---
# DEplain-APA-sent: A corpus for German Sentence Simplification
DEplain-APA-sent is a subcorpus of DEplain [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) for the evaluation of sentence simplification.
The corpus consists of 13,122 manually aligned sentence pairs from 483 parallel documents of the Austrian Press Agency (APA), written in German for readers at CEFR level B1 and at CEFR level A2 (plain language).
The APA (Austrian Press Agency) data is restricted to non-commercial research purposes. To get access to DEplain-APA, please request access via Zenodo (https://zenodo.org/record/7674560).
Human annotators aligned the 483 documents sentence-wise to build a corpus for sentence simplification. For the document-level version of this corpus, please see [https://huggingface.co/datasets/DEplain/DEplain-APA-doc](https://huggingface.co/datasets/DEplain/DEplain-APA-doc).
## Dataset Card for DEplain-APA-sent
### Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
### Dataset Description
- **Repository:** [DEplain-APA zenodo repository](https://zenodo.org/record/7674560)
- **Paper:** ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939)
- **Point of Contact:** [Regina Stodden](regina.stodden@hhu.de)
#### Dataset Summary
DEplain-APA [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the training and evaluation of sentence and document simplification in German. All texts of this dataset are provided by the Austrian Press Agency. The simple-complex sentence pairs are manually aligned.
#### Supported Tasks and Leaderboards
The dataset supports the training and evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
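To make the intuition behind SARI concrete, here is a toy, unigram-set sketch of its keep/add/delete idea. This is not the official metric (it ignores n-grams, multiple references, and length penalties); for real evaluations, use an established SARI implementation such as the one referenced in the paper above:

```python
def sari_sketch(source: str, prediction: str, reference: str) -> float:
    """Toy, unigram-set illustration of SARI's keep/add/delete idea."""
    src, pred, ref = (set(s.lower().split()) for s in (source, prediction, reference))

    def f1(gold: set, system: set) -> float:
        if not gold and not system:
            return 1.0  # nothing to do and nothing done: perfect
        tp = len(gold & system)
        p = tp / len(system) if system else 0.0
        r = tp / len(gold) if gold else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    keep = f1(src & ref, pred & src)    # words correctly kept
    add = f1(ref - src, pred - src)     # words correctly added
    delete = f1(src - ref, src - pred)  # words correctly deleted
    return (keep + add + delete) / 3

print(sari_sketch("the cat sat on the mat", "the cat sat", "the cat sat"))  # 1.0
```

A prediction that matches the reference exactly scores 1.0; partial keeps, additions, or deletions pull the average down.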
#### Languages
The text in this dataset is in Austrian German (`de-at`).
#### Domains
All texts in this dataset are news data.
## Dataset Structure
#### Data Access
- The dataset is licensed with restricted access for academic purposes only. To download the dataset, please request access on [zenodo](https://zenodo.org/record/7674560).
#### Data Instances
- `document-simplification` configuration: an instance consists of an original document and one reference simplification (in plain-text format).
- `sentence-simplification` configuration: an instance consists of original sentence(s) and one manually aligned reference simplification (including one or more sentences).
#### Data Fields
| data field | data field description |
|-------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| `original` | an original text from the source dataset |
| `simplification` | a simplified text from the source dataset |
| `pair_id` | document pair id |
| `complex_document_id ` (on doc-level) | id of complex document (-1) |
| `simple_document_id ` (on doc-level) | id of simple document (-0) |
| `original_id ` (on sent-level) | id of sentence(s) of the original text |
| `simplification_id ` (on sent-level) | id of sentence(s) of the simplified text |
| `domain ` | text domain of the document pair |
| `corpus ` | subcorpus name |
| `simple_url ` | origin URL of the simplified document |
| `complex_url `                                  | origin URL of the original document                                                                     |
| `simple_level ` or `language_level_simple ` | required CEFR language level to understand the simplified document |
| `complex_level ` or `language_level_original ` | required CEFR language level to understand the original document |
| `simple_location_html ` | location on hard disk where the HTML file of the simple document is stored |
| `complex_location_html ` | location on hard disk where the HTML file of the original document is stored |
| `simple_location_txt ` | location on hard disk where the content extracted from the HTML file of the simple document is stored |
| `complex_location_txt `                         | location on hard disk where the content extracted from the HTML file of the original document is stored |
| `alignment_location ` | location on hard disk where the alignment is stored |
| `simple_author ` | author (or copyright owner) of the simplified document |
| `complex_author ` | author (or copyright owner) of the original document |
| `simple_title ` | title of the simplified document |
| `complex_title ` | title of the original document |
| `license ` | license of the data |
| `last_access ` or `access_date`                 | date of data origin or date when the HTML files were downloaded                                         |
| `rater` | id of the rater who annotated the sentence pair |
| `alignment` | type of alignment, e.g., 1:1, 1:n, n:1 or n:m |
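As a sketch of how the sentence-level fields might be used once access is granted (the rows below are hypothetical stand-ins, not real corpus data), one could, for example, count and filter alignment types:

```python
from collections import Counter

# Hypothetical rows mimicking the sentence-level fields described above;
# real rows come from the gated dataset on Zenodo.
rows = [
    {"original": "Satz eins ist lang und verschachtelt.",
     "simplification": "Satz eins ist lang.", "alignment": "1:1", "rater": "A"},
    {"original": "Satz zwei. Satz drei.",
     "simplification": "Ein kurzer Satz.", "alignment": "n:1", "rater": "A"},
    {"original": "Satz vier.",
     "simplification": "Satz A. Satz B.", "alignment": "1:n", "rater": "B"},
]

# Tally how often each alignment type occurs, and keep only the 1:1 pairs.
alignment_counts = Counter(row["alignment"] for row in rows)
one_to_one = [row for row in rows if row["alignment"] == "1:1"]
print(alignment_counts["1:1"], len(one_to_one))  # 1 1
```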
#### Data Splits
DEplain-APA is randomly split into a training, development and test set. The training set of the sentence-simplification configuration contains only texts of documents which are part of the training set of the document-simplification configuration; the same holds for the dev and test sets.
The statistics are given below.
| | Train | Dev | Test | Total |
| ----- | ------ | ------ | ---- | ----- |
| Document Pairs | 387 | 48 | 48 |483 |
| Sentence Pairs | 10660 | 1231 | 1231 | 13122|
Inter-Annotator Agreement: 0.7497 (moderate).
More information on simplification operations will follow soon.
### Dataset Creation
#### Curation Rationale
DEplain-APA was created to improve the training and evaluation of German document and sentence simplification. The data comes from the same provider as the APA-LHA corpus. In comparison to APA-LHA (automatically aligned), the sentence pairs of DEplain-APA are all manually aligned. Further, DEplain-APA aligns texts at language level B1 with texts at level A2, which results in mostly mild simplifications.
In addition, DEplain-APA contains parallel documents as well as parallel sentence pairs.
#### Source Data
##### Initial Data Collection and Normalization
The original news texts (in CEFR level B2) were manually simplified by professional translators, i.e. capito – CFS GmbH, and provided to us by the Austrian Press Agency.
All documents date from 2019 to 2021.
Two German native speakers manually aligned the sentence pairs using the text simplification annotation tool TS-ANNO. The data was split into sentences using a German spaCy model.
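For illustration, a naive regex-based splitter is sketched below; note that the corpus itself was segmented with a German spaCy model, which handles abbreviations and other edge cases far better than this stand-in:

```python
import re

def naive_sentence_split(text: str) -> list[str]:
    # Splits after ., ! or ? followed by whitespace -- a rough stand-in only;
    # the actual corpus used a German spaCy model for segmentation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

print(naive_sentence_split("Das ist Satz eins. Das ist Satz zwei!"))
```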
##### Who are the source language producers?
The original news texts (in CEFR level B2) were manually simplified by professional translators, i.e. capito – CFS GmbH. No other demographic or compensation information is known.
#### Annotations
##### Annotation process
The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).
##### Who are the annotators?
The annotators are two German native speakers, who are trained in linguistics. Both were at least compensated with the minimum wage of their country of residence.
They are not part of any target group of text simplification.
#### Personal and Sensitive Information
No sensitive data.
### Considerations for Using the Data
#### Social Impact of Dataset
Many people cannot understand texts due to their complexity. With automatic text simplification methods, such texts can be simplified for them. Our new training data can benefit the training of a TS model.
#### Discussion of Biases
No bias is known.
#### Other Known Limitations
The dataset is provided for research purposes only. Please check the dataset license for additional information.
### Additional Information
#### Dataset Curators
Researchers at the Heinrich-Heine-University Düsseldorf, Germany, developed DEplain-APA. This research is part of the PhD-program `Online Participation` supported by the North Rhine-Westphalian (German) funding scheme `Forschungskolleg`.
#### Licensing Information
The dataset (DEplain-APA) is provided for research purposes only. Please request access using the following form: [https://zenodo.org/record/7674560](https://zenodo.org/record/7674560).
#### Citation Information
If you use part of this work, please cite our paper:
```
@inproceedings{stodden-etal-2023-deplain,
    title = "{DE}plain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification",
author = "Stodden, Regina and
Momen, Omar and
Kallmeyer, Laura",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
notes = "preprint: https://arxiv.org/abs/2305.18939",
}
```
This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite). |
lonestar108/naughty-chat | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 80492
num_examples: 266
download_size: 21186
dataset_size: 80492
---
# Dataset Card for "naughty-chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saibo/bookcorpus_compact_1024_shard9_of_10 | ---
dataset_info:
features:
- name: text
dtype: string
- name: concept_with_offset
dtype: string
splits:
- name: train
num_bytes: 754029555
num_examples: 61605
download_size: 379859859
dataset_size: 754029555
---
# Dataset Card for "bookcorpus_compact_1024_shard9_of_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_mnli_possessives_for_pre | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 773174
num_examples: 3253
- name: dev_mismatched
num_bytes: 934287
num_examples: 3840
- name: test_matched
num_bytes: 783981
num_examples: 3286
- name: test_mismatched
num_bytes: 947271
num_examples: 3877
- name: train
num_bytes: 31700435
num_examples: 132019
download_size: 22887147
dataset_size: 35139148
---
# Dataset Card for "MULTI_VALUE_mnli_possessives_for_pre"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mihaien/ads_dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: metaphor
dtype: string
- name: visual_elaboration
dtype: string
splits:
- name: train
num_bytes: 5821963.0
num_examples: 243
download_size: 5813008
dataset_size: 5821963.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|