| datasetId | card |
|---|---|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-93000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 657739
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
satware/yggdrasil | ---
license: mit
---
|
deeplearning-tide/actresses | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': keira
'1': nathalie
'2': others
splits:
- name: train
num_bytes: 137979476.0
num_examples: 429
- name: val
num_bytes: 54519033.0
num_examples: 168
- name: test
num_bytes: 54024602.0
num_examples: 168
download_size: 246545069
dataset_size: 246523111.0
---
# Dataset Card for "actresses"
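The `class_label` block in the frontmatter above maps integer label ids to class names. As a minimal sketch of that mapping in plain Python (mirroring the declared names rather than calling the `datasets` library):

```python
# Integer-id -> class-name mapping declared in the card's class_label block.
LABEL_NAMES = {0: "keira", 1: "nathalie", 2: "others"}

def id2label(label_id: int) -> str:
    """Return the class name for an integer label id."""
    return LABEL_NAMES[label_id]

print(id2label(1))  # nathalie
```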
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mujif/vrptest2 | ---
license: cc-by-4.0
---
|
abhinavrai/therapy | ---
license: mit
---
|
jjonhwa/raw5_v1 | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_start
dtype: int64
splits:
- name: train
num_bytes: 2782963652
num_examples: 86975
download_size: 386216630
dataset_size: 2782963652
---
# Dataset Card for "raw5_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dacavi/spanish-dataset | ---
license: apache-2.0
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 96050672
num_examples: 100
- name: train
num_bytes: 14897493976
num_examples: 15510
download_size: 3158166164
dataset_size: 14993544648
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
|
ms_terms | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bs
- ca
- chr
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fil
- fr
- ga
- gd
- gl
- gu
- guc
- ha
- he
- hi
- hr
- hu
- hy
- id
- ig
- is
- it
- iu
- ja
- ka
- kk
- km
- kn
- knn
- ko
- ku
- ky
- lb
- lo
- lt
- lv
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- nb
- ne
- nl
- nn
- ory
- pa
- pl
- prs
- pst
- pt
- qu
- quc
- ro
- ru
- rw
- sd
- si
- sk
- sl
- sq
- sr
- st
- sv
- swh
- ta
- te
- tg
- th
- ti
- tk
- tn
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yo
- zh
- zu
language_bcp47:
- bn-IN
- bs-Latn
- es-MX
- fr-CA
- ms-BN
- pt-BR
- sr-BH
- sr-Latn
- zh-Hant-HK
- zh-Hant-TW
license:
- ms-pl
multilinguality:
- multilingual
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: MsTerms
dataset_info:
features:
- name: entry_id
dtype: string
- name: term_source
dtype: string
- name: pos
dtype: string
- name: definition
dtype: string
- name: term_target
dtype: string
splits:
- name: train
num_bytes: 6995497
num_examples: 33738
download_size: 0
dataset_size: 6995497
---
# Dataset Card for ms_terms
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
[Microsoft Terminology Collection](https://www.microsoft.com/en-us/language/terminology)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microsoft terminology into other terminology collections or serve as a base IT glossary for language development in the nearly 100 languages available. Terminology is provided in .tbx format, an industry standard for terminology exchange.
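As an illustrative sketch of using such a terminology collection as a base glossary, here is a plain-Python example over rows shaped like this card's features (`entry_id`, `term_source`, `pos`, `definition`, `term_target`). The rows below are hypothetical placeholders, not actual dataset content:

```python
# Build a simple source->target glossary from rows shaped like the
# card's features. These two rows are invented for illustration only.
rows = [
    {"term_source": "folder", "term_target": "dossier"},
    {"term_source": "file", "term_target": "fichier"},
]

# Map each source-language term to its target-language translation.
glossary = {r["term_source"]: r["term_target"] for r in rows}
print(glossary["folder"])  # dossier
```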
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Nearly 100 languages.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@leoxzhao](https://github.com/leoxzhao), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
pharaouk/glaive-code-assistant-v3 | ---
license: apache-2.0
size_categories:
- 100K<n<1M
tags:
- code
- synthetic
---
# Glaive-code-assistant-v3
Glaive-code-assistant-v3 is a dataset of ~1M code problems and solutions generated using Glaive’s synthetic data generation platform.
It is built on top of the previous version of the dataset, which can be found [here](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v2), and already includes the v1 and v2 data.
To report any problems or suggestions in the data, join the [Glaive discord](https://discord.gg/fjQ4uf3yWD) |
ruanchaves/test_stanford | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Test-Stanford
tags:
- word-segmentation
---
# Dataset Card for Test-Stanford
## Dataset Description
- **Paper:** [Towards Deep Semantic Analysis Of Hashtags](https://arxiv.org/abs/1501.03210)
### Dataset Summary
Manually annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 1467856821,
"hashtag": "therapyfail",
"segmentation": "therapy fail",
"gold_position": 8,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20
],
"candidate": [
"therap y fail",
"the rap y fail",
"t her apy fail",
"the rap yfail",
"t he rap y fail",
"thera py fail",
"ther apy fail",
"th era py fail",
"therapy fail",
"therapy fai l",
"the r apy fail",
"the rapyfa il",
"the rapy fail",
"t herapy fail",
"the rapyfail",
"therapy f ai l",
"therapy fa il",
"the rapyf a il",
"therapy f ail",
"the ra py fail"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `gold_position`: the position of the gold segmentation within the `rank` field.
- `rank`: the rank of each candidate produced by a baseline word segmenter (the Segmentations Seeder Module).
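To make the relationship between these fields concrete, here is a small sketch built from the instance shown above (note that in that instance `gold_position` indexes the `candidate` list zero-based):

```python
# Fields copied from the example instance above.
segmentation = "therapy fail"
gold_position = 8
candidates = [
    "therap y fail", "the rap y fail", "t her apy fail", "the rap yfail",
    "t he rap y fail", "thera py fail", "ther apy fail", "th era py fail",
    "therapy fail", "therapy fai l", "the r apy fail", "the rapyfa il",
    "the rapy fail", "t herapy fail", "the rapyfail", "therapy f ai l",
    "therapy fa il", "the rapyf a il", "therapy f ail", "the ra py fail",
]

# The gold segmentation sits at index `gold_position` in the candidate list.
assert candidates[gold_position] == segmentation
print(candidates[gold_position])  # therapy fail
```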
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` (or between `identifier` and `segmentation`) is the whitespace characters. Spell checking, expanding abbreviations, or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
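The whitespace-only invariant stated above can be checked mechanically; a minimal sketch:

```python
def is_valid_pair(hashtag: str, segmentation: str) -> bool:
    """True if segmentation differs from hashtag only by whitespace."""
    return segmentation.replace(" ", "") == hashtag

# The segmentation adds only spaces, so the pair is valid.
assert is_valid_pair("therapyfail", "therapy fail")
# Any other character change (here, added letters) makes it invalid.
assert not is_valid_pair("therapyfail", "therapy failed")
```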
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
Gbssreejith/testthis | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 36543935.0
num_examples: 158
- name: test
num_bytes: 4102859.0
num_examples: 18
- name: valid
num_bytes: 9746669.0
num_examples: 44
download_size: 48190867
dataset_size: 50393463.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
EduardoPacheco/dalle-3-LAION-discord | ---
license: apache-2.0
dataset_info:
features:
- name: caption
dtype: string
- name: link
dtype: string
- name: message_id
dtype: string
- name: timestamp
dtype: string
splits:
- name: train
num_bytes: 1547491.0
num_examples: 3144
download_size: 746143
dataset_size: 1547491.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_4iters_iter_3 | ---
pretty_name: Evaluation run of ZhangShenao/0.001_idpo_declr_4iters_iter_3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ZhangShenao/0.001_idpo_declr_4iters_iter_3](https://huggingface.co/ZhangShenao/0.001_idpo_declr_4iters_iter_3)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_4iters_iter_3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-08T08:47:34.953273](https://huggingface.co/datasets/open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_4iters_iter_3/blob/main/results_2024-04-08T08-47-34.953273.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6050759758521479,\n\
\ \"acc_stderr\": 0.03311893846776277,\n \"acc_norm\": 0.6114459885903942,\n\
\ \"acc_norm_stderr\": 0.03380911687393187,\n \"mc1\": 0.3488372093023256,\n\
\ \"mc1_stderr\": 0.016684419859986886,\n \"mc2\": 0.5033400649330749,\n\
\ \"mc2_stderr\": 0.01588434641111232\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5981228668941979,\n \"acc_stderr\": 0.014327268614578276,\n\
\ \"acc_norm\": 0.6305460750853242,\n \"acc_norm_stderr\": 0.014104578366491885\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6579366660027883,\n\
\ \"acc_stderr\": 0.004734311435009194,\n \"acc_norm\": 0.8498307110137423,\n\
\ \"acc_norm_stderr\": 0.0035650718701954478\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n\
\ \"acc_stderr\": 0.042763494943765995,\n \"acc_norm\": 0.5703703703703704,\n\
\ \"acc_norm_stderr\": 0.042763494943765995\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.631578947368421,\n \"acc_stderr\": 0.03925523381052932,\n\
\ \"acc_norm\": 0.631578947368421,\n \"acc_norm_stderr\": 0.03925523381052932\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n\
\ \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.03745554791462456,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.03745554791462456\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\"\
: 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5838150289017341,\n\
\ \"acc_stderr\": 0.03758517775404947,\n \"acc_norm\": 0.5838150289017341,\n\
\ \"acc_norm_stderr\": 0.03758517775404947\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4803921568627451,\n \"acc_stderr\": 0.04971358884367405,\n\
\ \"acc_norm\": 0.4803921568627451,\n \"acc_norm_stderr\": 0.04971358884367405\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.03265019475033582,\n\
\ \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.03265019475033582\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.04166567577101579,\n\
\ \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.04166567577101579\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42328042328042326,\n \"acc_stderr\": 0.025446365634406772,\n \"\
acc_norm\": 0.42328042328042326,\n \"acc_norm_stderr\": 0.025446365634406772\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.04285714285714281,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.04285714285714281\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7322580645161291,\n\
\ \"acc_stderr\": 0.025189006660212385,\n \"acc_norm\": 0.7322580645161291,\n\
\ \"acc_norm_stderr\": 0.025189006660212385\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n\
\ \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.0347769116216366,\n\
\ \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.0347769116216366\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386417,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386417\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8290155440414507,\n \"acc_stderr\": 0.02717121368316453,\n\
\ \"acc_norm\": 0.8290155440414507,\n \"acc_norm_stderr\": 0.02717121368316453\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5666666666666667,\n \"acc_stderr\": 0.025124653525885113,\n\
\ \"acc_norm\": 0.5666666666666667,\n \"acc_norm_stderr\": 0.025124653525885113\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.29259259259259257,\n \"acc_stderr\": 0.027738969632176088,\n \
\ \"acc_norm\": 0.29259259259259257,\n \"acc_norm_stderr\": 0.027738969632176088\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6134453781512605,\n \"acc_stderr\": 0.03163145807552378,\n \
\ \"acc_norm\": 0.6134453781512605,\n \"acc_norm_stderr\": 0.03163145807552378\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7889908256880734,\n \"acc_stderr\": 0.01749392240411265,\n \"\
acc_norm\": 0.7889908256880734,\n \"acc_norm_stderr\": 0.01749392240411265\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4212962962962963,\n \"acc_stderr\": 0.03367462138896079,\n \"\
acc_norm\": 0.4212962962962963,\n \"acc_norm_stderr\": 0.03367462138896079\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588667,\n \"\
acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588667\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159263,\n \
\ \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159263\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6636771300448431,\n\
\ \"acc_stderr\": 0.031708824268455,\n \"acc_norm\": 0.6636771300448431,\n\
\ \"acc_norm_stderr\": 0.031708824268455\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6564885496183206,\n \"acc_stderr\": 0.041649760719448786,\n\
\ \"acc_norm\": 0.6564885496183206,\n \"acc_norm_stderr\": 0.041649760719448786\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n\
\ \"acc_stderr\": 0.043733130409147614,\n \"acc_norm\": 0.7129629629629629,\n\
\ \"acc_norm_stderr\": 0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.034624199316156234,\n\
\ \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.034624199316156234\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \
\ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8045977011494253,\n\
\ \"acc_stderr\": 0.014179171373424384,\n \"acc_norm\": 0.8045977011494253,\n\
\ \"acc_norm_stderr\": 0.014179171373424384\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6994219653179191,\n \"acc_stderr\": 0.024685316867257803,\n\
\ \"acc_norm\": 0.6994219653179191,\n \"acc_norm_stderr\": 0.024685316867257803\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3307262569832402,\n\
\ \"acc_stderr\": 0.01573502625896612,\n \"acc_norm\": 0.3307262569832402,\n\
\ \"acc_norm_stderr\": 0.01573502625896612\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6503267973856209,\n \"acc_stderr\": 0.027305308076274695,\n\
\ \"acc_norm\": 0.6503267973856209,\n \"acc_norm_stderr\": 0.027305308076274695\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\
\ \"acc_stderr\": 0.026003301117885135,\n \"acc_norm\": 0.7009646302250804,\n\
\ \"acc_norm_stderr\": 0.026003301117885135\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.02622964917882117,\n\
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.02622964917882117\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \
\ \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4276401564537158,\n\
\ \"acc_stderr\": 0.012635799922765844,\n \"acc_norm\": 0.4276401564537158,\n\
\ \"acc_norm_stderr\": 0.012635799922765844\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6433823529411765,\n \"acc_stderr\": 0.02909720956841195,\n\
\ \"acc_norm\": 0.6433823529411765,\n \"acc_norm_stderr\": 0.02909720956841195\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6258169934640523,\n \"acc_stderr\": 0.019576953122088833,\n \
\ \"acc_norm\": 0.6258169934640523,\n \"acc_norm_stderr\": 0.019576953122088833\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.673469387755102,\n \"acc_stderr\": 0.03002105623844031,\n\
\ \"acc_norm\": 0.673469387755102,\n \"acc_norm_stderr\": 0.03002105623844031\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8208955223880597,\n\
\ \"acc_stderr\": 0.027113286753111837,\n \"acc_norm\": 0.8208955223880597,\n\
\ \"acc_norm_stderr\": 0.027113286753111837\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909282,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909282\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n\
\ \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n\
\ \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368036,\n\
\ \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368036\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3488372093023256,\n\
\ \"mc1_stderr\": 0.016684419859986886,\n \"mc2\": 0.5033400649330749,\n\
\ \"mc2_stderr\": 0.01588434641111232\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774104\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.28278999241849884,\n \
\ \"acc_stderr\": 0.012405020417873619\n }\n}\n```"
repo_url: https://huggingface.co/ZhangShenao/0.001_idpo_declr_4iters_iter_3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|arc:challenge|25_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|gsm8k|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hellaswag|10_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-08T08-47-34.953273.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-08T08-47-34.953273.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- '**/details_harness|winogrande|5_2024-04-08T08-47-34.953273.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-08T08-47-34.953273.parquet'
- config_name: results
data_files:
- split: 2024_04_08T08_47_34.953273
path:
- results_2024-04-08T08-47-34.953273.parquet
- split: latest
path:
- results_2024-04-08T08-47-34.953273.parquet
---
# Dataset Card for Evaluation run of ZhangShenao/0.001_idpo_declr_4iters_iter_3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ZhangShenao/0.001_idpo_declr_4iters_iter_3](https://huggingface.co/ZhangShenao/0.001_idpo_declr_4iters_iter_3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration; the split is named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_4iters_iter_3",
"harness_winogrande_5",
split="train")
```
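The aggregate metrics reported under `"all"` in the results below are means over the per-task scores. A minimal self-contained sketch of that aggregation (the task names and scores here are illustrative, not the actual values from this run):

```python
# Average per-task accuracies the way the "all" entry aggregates them.
# The scores below are made-up examples, not real results from this run.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.34},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.57},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.63},
}

# Unweighted mean accuracy across tasks
mean_acc = sum(task["acc"] for task in results.values()) / len(results)
print(round(mean_acc, 4))  # 0.5133
```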
## Latest results
These are the [latest results from run 2024-04-08T08:47:34.953273](https://huggingface.co/datasets/open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_4iters_iter_3/blob/main/results_2024-04-08T08-47-34.953273.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6050759758521479,
"acc_stderr": 0.03311893846776277,
"acc_norm": 0.6114459885903942,
"acc_norm_stderr": 0.03380911687393187,
"mc1": 0.3488372093023256,
"mc1_stderr": 0.016684419859986886,
"mc2": 0.5033400649330749,
"mc2_stderr": 0.01588434641111232
},
"harness|arc:challenge|25": {
"acc": 0.5981228668941979,
"acc_stderr": 0.014327268614578276,
"acc_norm": 0.6305460750853242,
"acc_norm_stderr": 0.014104578366491885
},
"harness|hellaswag|10": {
"acc": 0.6579366660027883,
"acc_stderr": 0.004734311435009194,
"acc_norm": 0.8498307110137423,
"acc_norm_stderr": 0.0035650718701954478
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.042763494943765995,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.042763494943765995
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.631578947368421,
"acc_stderr": 0.03925523381052932,
"acc_norm": 0.631578947368421,
"acc_norm_stderr": 0.03925523381052932
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.02881561571343211,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.02881561571343211
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.03745554791462456,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.03745554791462456
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5838150289017341,
"acc_stderr": 0.03758517775404947,
"acc_norm": 0.5838150289017341,
"acc_norm_stderr": 0.03758517775404947
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4803921568627451,
"acc_stderr": 0.04971358884367405,
"acc_norm": 0.4803921568627451,
"acc_norm_stderr": 0.04971358884367405
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.74,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.03265019475033582,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.03265019475033582
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.503448275862069,
"acc_stderr": 0.04166567577101579,
"acc_norm": 0.503448275862069,
"acc_norm_stderr": 0.04166567577101579
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42328042328042326,
"acc_stderr": 0.025446365634406772,
"acc_norm": 0.42328042328042326,
"acc_norm_stderr": 0.025446365634406772
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04285714285714281,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04285714285714281
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7322580645161291,
"acc_stderr": 0.025189006660212385,
"acc_norm": 0.7322580645161291,
"acc_norm_stderr": 0.025189006660212385
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.0347769116216366,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.0347769116216366
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386417,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386417
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8290155440414507,
"acc_stderr": 0.02717121368316453,
"acc_norm": 0.8290155440414507,
"acc_norm_stderr": 0.02717121368316453
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5666666666666667,
"acc_stderr": 0.025124653525885113,
"acc_norm": 0.5666666666666667,
"acc_norm_stderr": 0.025124653525885113
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.29259259259259257,
"acc_stderr": 0.027738969632176088,
"acc_norm": 0.29259259259259257,
"acc_norm_stderr": 0.027738969632176088
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6134453781512605,
"acc_stderr": 0.03163145807552378,
"acc_norm": 0.6134453781512605,
"acc_norm_stderr": 0.03163145807552378
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7889908256880734,
"acc_stderr": 0.01749392240411265,
"acc_norm": 0.7889908256880734,
"acc_norm_stderr": 0.01749392240411265
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4212962962962963,
"acc_stderr": 0.03367462138896079,
"acc_norm": 0.4212962962962963,
"acc_norm_stderr": 0.03367462138896079
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.028379449451588667,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.028379449451588667
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159263,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159263
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6636771300448431,
"acc_stderr": 0.031708824268455,
"acc_norm": 0.6636771300448431,
"acc_norm_stderr": 0.031708824268455
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6564885496183206,
"acc_stderr": 0.041649760719448786,
"acc_norm": 0.6564885496183206,
"acc_norm_stderr": 0.041649760719448786
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7129629629629629,
"acc_stderr": 0.043733130409147614,
"acc_norm": 0.7129629629629629,
"acc_norm_stderr": 0.043733130409147614
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.034624199316156234,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.034624199316156234
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8045977011494253,
"acc_stderr": 0.014179171373424384,
"acc_norm": 0.8045977011494253,
"acc_norm_stderr": 0.014179171373424384
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6994219653179191,
"acc_stderr": 0.024685316867257803,
"acc_norm": 0.6994219653179191,
"acc_norm_stderr": 0.024685316867257803
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3307262569832402,
"acc_stderr": 0.01573502625896612,
"acc_norm": 0.3307262569832402,
"acc_norm_stderr": 0.01573502625896612
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6503267973856209,
"acc_stderr": 0.027305308076274695,
"acc_norm": 0.6503267973856209,
"acc_norm_stderr": 0.027305308076274695
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.026003301117885135,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.026003301117885135
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.02622964917882117,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.02622964917882117
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4276401564537158,
"acc_stderr": 0.012635799922765844,
"acc_norm": 0.4276401564537158,
"acc_norm_stderr": 0.012635799922765844
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6433823529411765,
"acc_stderr": 0.02909720956841195,
"acc_norm": 0.6433823529411765,
"acc_norm_stderr": 0.02909720956841195
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6258169934640523,
"acc_stderr": 0.019576953122088833,
"acc_norm": 0.6258169934640523,
"acc_norm_stderr": 0.019576953122088833
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.673469387755102,
"acc_stderr": 0.03002105623844031,
"acc_norm": 0.673469387755102,
"acc_norm_stderr": 0.03002105623844031
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111837,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111837
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.029913127232368036,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.029913127232368036
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3488372093023256,
"mc1_stderr": 0.016684419859986886,
"mc2": 0.5033400649330749,
"mc2_stderr": 0.01588434641111232
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774104
},
"harness|gsm8k|5": {
"acc": 0.28278999241849884,
"acc_stderr": 0.012405020417873619
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
plncmm/wl-disease | ---
license: cc-by-nc-4.0
---
|
gaizerick/diana | ---
license: openrail
---
|
Arbaz0348/article-name-dataset | ---
license: creativeml-openrail-m
---
|
Binaryy/queries | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: 'Unnamed: 0.1'
dtype: int64
- name: 'Unnamed: 0'
dtype: int64
- name: queries
dtype: string
splits:
- name: train
num_bytes: 62531
num_examples: 543
download_size: 24151
dataset_size: 62531
---
# Dataset Card for "queries"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fxmeng/OCR-VQA | ---
license: apache-2.0
---
|
mboth/medienVersorgen-50-undersampled | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: Datatype
dtype: string
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Unit
dtype: string
- name: text
dtype: string
- name: Grundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
'0': Bereitstellen
'1': Entsorgen
'2': Speichern
'3': Verteilen
splits:
- name: train
num_bytes: 37075.44918032787
num_examples: 188
- name: test
num_bytes: 14725
num_examples: 77
- name: valid
num_bytes: 14725
num_examples: 77
download_size: 36084
dataset_size: 66525.44918032788
---
# Dataset Card for "medienVersorgen-50-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/nishikawa_honami_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nishikawa_honami/西川保奈美/니시카와호나미 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of nishikawa_honami/西川保奈美/니시카와호나미 (THE iDOLM@STER: Cinderella Girls), containing 31 images and their tags.
The core tags of this character are `brown_hair, green_eyes, long_hair, breasts, earrings, bangs, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 31 | 27.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishikawa_honami_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 31 | 23.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishikawa_honami_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 71 | 42.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishikawa_honami_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 31 | 26.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishikawa_honami_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 71 | 48.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishikawa_honami_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nishikawa_honami_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
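Besides the raw package, the IMG+TXT packages in the table above ship each image alongside its tags. A minimal sketch for iterating an extracted IMG+TXT package, assuming the common layout where each image sits next to a same-named `.txt` file of comma-separated tags (the function name and the layout assumption are illustrative, not part of the dataset docs):

```python
from pathlib import Path


def iter_img_txt_pairs(dataset_dir):
    """Yield (image_path, tag_string) pairs from an extracted IMG+TXT package.

    Assumes each image file has a sibling .txt file with the same stem
    holding its comma-separated tags.
    """
    root = Path(dataset_dir)
    for img in sorted(root.rglob("*")):
        if img.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        txt = img.with_suffix(".txt")
        if txt.exists():
            yield img, txt.read_text(encoding="utf-8").strip()
```

For example, after extracting `dataset-800.zip` into a directory, `for img, tags in iter_img_txt_pairs("dataset_dir"): ...` walks every tagged image; images without a sibling `.txt` file are skipped.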
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------|
| 0 | 31 |  |  |  |  |  | 1girl, solo, looking_at_viewer, jewelry, smile, dress, blush, cleavage, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | jewelry | smile | dress | blush | cleavage | open_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:----------|:--------|:--------|:--------|:-----------|:-------------|
| 0 | 31 |  |  |  |  |  | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S2-v0.1 | ---
pretty_name: Evaluation run of MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1](https://huggingface.co/MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S2-v0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-18T23:05:58.776213](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S2-v0.1/blob/main/results_2024-02-18T23-05-58.776213.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6545868511485138,\n\
\ \"acc_stderr\": 0.031980293841566164,\n \"acc_norm\": 0.6542757501692061,\n\
\ \"acc_norm_stderr\": 0.03263807517879597,\n \"mc1\": 0.45165238678090575,\n\
\ \"mc1_stderr\": 0.017421480300277643,\n \"mc2\": 0.6217500644350165,\n\
\ \"mc2_stderr\": 0.015583825644663436\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6723549488054608,\n \"acc_stderr\": 0.01371584794071934,\n\
\ \"acc_norm\": 0.6945392491467577,\n \"acc_norm_stderr\": 0.01346008047800251\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7046405098585939,\n\
\ \"acc_stderr\": 0.0045527183605131,\n \"acc_norm\": 0.871539533957379,\n\
\ \"acc_norm_stderr\": 0.0033391798350182853\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n\
\ \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n\
\ \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.02783491252754407,\n\
\ \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.02783491252754407\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\"\
: 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n\
\ \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n\
\ \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n\
\ \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5175438596491229,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.5175438596491229,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41798941798941797,\n \"acc_stderr\": 0.025402555503260912,\n \"\
acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.025402555503260912\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\
\ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\
\ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n\
\ \"acc_stderr\": 0.023157879349083522,\n \"acc_norm\": 0.7903225806451613,\n\
\ \"acc_norm_stderr\": 0.023157879349083522\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n\
\ \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n\
\ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"\
acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657266,\n\
\ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657266\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34444444444444444,\n \"acc_stderr\": 0.02897264888484427,\n \
\ \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.02897264888484427\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135363,\n\
\ \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135363\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\"\
: 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n\
\ \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n\
\ \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n\
\ \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8382352941176471,\n \"acc_stderr\": 0.025845017986926917,\n \"\
acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.025845017986926917\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.036412970813137296,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.036412970813137296\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822584,\n\
\ \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822584\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n\
\ \"acc_stderr\": 0.022209309073165612,\n \"acc_norm\": 0.8675213675213675,\n\
\ \"acc_norm_stderr\": 0.022209309073165612\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8352490421455939,\n\
\ \"acc_stderr\": 0.013265346261323788,\n \"acc_norm\": 0.8352490421455939,\n\
\ \"acc_norm_stderr\": 0.013265346261323788\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7543352601156069,\n \"acc_stderr\": 0.023176298203992005,\n\
\ \"acc_norm\": 0.7543352601156069,\n \"acc_norm_stderr\": 0.023176298203992005\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4547486033519553,\n\
\ \"acc_stderr\": 0.016653875777524006,\n \"acc_norm\": 0.4547486033519553,\n\
\ \"acc_norm_stderr\": 0.016653875777524006\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7483660130718954,\n \"acc_stderr\": 0.0248480182638752,\n\
\ \"acc_norm\": 0.7483660130718954,\n \"acc_norm_stderr\": 0.0248480182638752\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n\
\ \"acc_stderr\": 0.02549425935069491,\n \"acc_norm\": 0.7202572347266881,\n\
\ \"acc_norm_stderr\": 0.02549425935069491\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.02378858355165854,\n\
\ \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.02378858355165854\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4745762711864407,\n\
\ \"acc_stderr\": 0.012753716929101008,\n \"acc_norm\": 0.4745762711864407,\n\
\ \"acc_norm_stderr\": 0.012753716929101008\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7095588235294118,\n \"acc_stderr\": 0.027576468622740536,\n\
\ \"acc_norm\": 0.7095588235294118,\n \"acc_norm_stderr\": 0.027576468622740536\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6928104575163399,\n \"acc_stderr\": 0.01866335967146367,\n \
\ \"acc_norm\": 0.6928104575163399,\n \"acc_norm_stderr\": 0.01866335967146367\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.025538433368578337,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.025538433368578337\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n\
\ \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n\
\ \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.45165238678090575,\n\
\ \"mc1_stderr\": 0.017421480300277643,\n \"mc2\": 0.6217500644350165,\n\
\ \"mc2_stderr\": 0.015583825644663436\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7963693764798737,\n \"acc_stderr\": 0.011317798781626913\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7202426080363912,\n \
\ \"acc_stderr\": 0.01236438401673532\n }\n}\n```"
repo_url: https://huggingface.co/MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|arc:challenge|25_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|gsm8k|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hellaswag|10_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T23-05-58.776213.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-18T23-05-58.776213.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- '**/details_harness|winogrande|5_2024-02-18T23-05-58.776213.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-18T23-05-58.776213.parquet'
- config_name: results
data_files:
- split: 2024_02_18T23_05_58.776213
path:
- results_2024-02-18T23-05-58.776213.parquet
- split: latest
path:
- results_2024-02-18T23-05-58.776213.parquet
---
# Dataset Card for Evaluation run of MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1](https://huggingface.co/MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S2-v0.1",
"harness_winogrande_5",
	split="latest")
```
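
Besides the "latest" alias, each config also exposes a split named after the run timestamp (here `2024_02_18T23_05_58.776213`). As a small sketch (the `latest_split` helper is hypothetical, not part of `datasets`), the most recent timestamped split can be found by plain string comparison, since the ISO-like timestamp format sorts lexicographically:

```python
def latest_split(split_names):
    # Keep only the timestamped splits, dropping the "latest" alias.
    timestamped = [s for s in split_names if s != "latest"]
    # Timestamps like "2024_02_18T23_05_58.776213" sort lexicographically
    # in chronological order, so max() picks the most recent run.
    return max(timestamped)

splits = ["2024_02_18T23_05_58.776213", "latest"]
print(latest_split(splits))  # -> 2024_02_18T23_05_58.776213
```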
## Latest results
These are the [latest results from run 2024-02-18T23:05:58.776213](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S2-v0.1/blob/main/results_2024-02-18T23-05-58.776213.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6545868511485138,
"acc_stderr": 0.031980293841566164,
"acc_norm": 0.6542757501692061,
"acc_norm_stderr": 0.03263807517879597,
"mc1": 0.45165238678090575,
"mc1_stderr": 0.017421480300277643,
"mc2": 0.6217500644350165,
"mc2_stderr": 0.015583825644663436
},
"harness|arc:challenge|25": {
"acc": 0.6723549488054608,
"acc_stderr": 0.01371584794071934,
"acc_norm": 0.6945392491467577,
"acc_norm_stderr": 0.01346008047800251
},
"harness|hellaswag|10": {
"acc": 0.7046405098585939,
"acc_stderr": 0.0045527183605131,
"acc_norm": 0.871539533957379,
"acc_norm_stderr": 0.0033391798350182853
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.02783491252754407,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.02783491252754407
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083522,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083522
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.029376616484945633,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.029376616484945633
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657266,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657266
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.02897264888484427,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.02897264888484427
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135363,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135363
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.036412970813137296,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.036412970813137296
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165612,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165612
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8352490421455939,
"acc_stderr": 0.013265346261323788,
"acc_norm": 0.8352490421455939,
"acc_norm_stderr": 0.013265346261323788
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7543352601156069,
"acc_stderr": 0.023176298203992005,
"acc_norm": 0.7543352601156069,
"acc_norm_stderr": 0.023176298203992005
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4547486033519553,
"acc_stderr": 0.016653875777524006,
"acc_norm": 0.4547486033519553,
"acc_norm_stderr": 0.016653875777524006
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7483660130718954,
"acc_stderr": 0.0248480182638752,
"acc_norm": 0.7483660130718954,
"acc_norm_stderr": 0.0248480182638752
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.02549425935069491,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.02549425935069491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.02378858355165854,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.02378858355165854
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4745762711864407,
"acc_stderr": 0.012753716929101008,
"acc_norm": 0.4745762711864407,
"acc_norm_stderr": 0.012753716929101008
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7095588235294118,
"acc_stderr": 0.027576468622740536,
"acc_norm": 0.7095588235294118,
"acc_norm_stderr": 0.027576468622740536
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6928104575163399,
"acc_stderr": 0.01866335967146367,
"acc_norm": 0.6928104575163399,
"acc_norm_stderr": 0.01866335967146367
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578337,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578337
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.45165238678090575,
"mc1_stderr": 0.017421480300277643,
"mc2": 0.6217500644350165,
"mc2_stderr": 0.015583825644663436
},
"harness|winogrande|5": {
"acc": 0.7963693764798737,
"acc_stderr": 0.011317798781626913
},
"harness|gsm8k|5": {
"acc": 0.7202426080363912,
"acc_stderr": 0.01236438401673532
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
felipesampaio/sailorjupiter | ---
license: openrail
---
|
Telugu-LLM-Labs/konkani_alpaca_yahma_cleaned_filtered | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: konkani_instruction
dtype: string
- name: konkani_input
dtype: string
- name: konkani_output
dtype: string
splits:
- name: train
num_bytes: 103869076
num_examples: 28910
download_size: 44786167
dataset_size: 103869076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tobecold/new_metric | ---
license: apache-2.0
---
|
csuhan/OneLLM_Eval | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
tags:
- Evaluation
- MLLM
- OneLLM
---
### OneLLM Evaluation Datasets |
miss-swan/Website_Segmentation | ---
dataset_info:
features:
- name: name
dtype: string
- name: uuid
dtype: string
- name: status
dtype: string
- name: image
dtype: image
- name: label.annotations
list:
- name: id
dtype: int32
- name: category_id
dtype: int32
- name: label.segmentation_bitmap
dtype: image
splits:
- name: train
num_bytes: 5912843.0
num_examples: 10
download_size: 5866632
dataset_size: 5912843.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Website_Segmentation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davidfant/natural-questions-chunk-18 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4674986494
num_examples: 10000
download_size: 1817082642
dataset_size: 4674986494
---
# Dataset Card for "natural-questions-chunk-18"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WUYONGF/pokemon10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 844527.0
num_examples: 10
download_size: 775236
dataset_size: 844527.0
---
# Dataset Card for "pokemon10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bene-ges/asr_med_ru_tuberculosis | ---
license: cc-by-4.0
language:
- ru
size_categories:
- n<1K
tags:
- automatic_speech_recognition
- Speech-to-Text
- asr
- medical
---
This is a small 30-minute dataset for testing ASR on the medical domain, based on this [video lecture](https://www.youtube.com/watch?v=p_8IhrOWRGw).
The manifest file is in NeMo format; "text" is the reference transcript. |
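A NeMo-style manifest is a JSON-Lines file with one JSON object per utterance. A minimal sketch of reading one follows; note that the `audio_filepath` and `duration` field names are the common NeMo convention and are assumptions here (only `text` is confirmed above), so check the actual manifest in this repo for the exact keys:

```python
import json

# Each manifest line is one JSON object describing a single utterance.
# "text" holds the reference transcript; "audio_filepath" and "duration"
# are the usual NeMo field names (assumed here, not confirmed by this card).
manifest_lines = [
    '{"audio_filepath": "audio/0001.wav", "duration": 3.5, "text": "..."}',
    '{"audio_filepath": "audio/0002.wav", "duration": 4.1, "text": "..."}',
]

entries = [json.loads(line) for line in manifest_lines]
total_duration = sum(e["duration"] for e in entries)
print(len(entries), total_duration)  # 2 7.6
```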
rr/dd | ---
license: afl-3.0
---
|
danielmalencar/teste | ---
license: mit
---
|
BangumiBase/popteamepic | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Pop Team Epic
This is the image base of the bangumi Pop Team Epic. We detected 15 characters and 353 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 35 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 13 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 9 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 6 | [Download](3/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 4 | 13 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 15 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 48 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 15 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 77 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 14 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 10 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 8 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 13 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 11 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 66 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
FreedomIntelligence/MMLU_Korean | ---
license: mit
language:
- ko
---
Korean version of the MMLU dataset, translated by gpt-3.5-turbo.
The dataset is used in research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
renumics/speech_commands_enrichment_only | ---
annotations_creators:
- other
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- extended|speech_commands
task_categories:
- audio-classification
task_ids:
- keyword-spotting
pretty_name: SpeechCommands
config_names:
- v0.01
- v0.02
tags:
- spotlight
- enriched
- renumics
- enhanced
- audio
- classification
- extended
dataset_info:
- config_name: enrichment_only
features:
- name: label_string
dtype: string
- name: probability
dtype: float64
- name: probability_vector
sequence: float32
- name: prediction
dtype: int64
- name: prediction_string
dtype: string
- name: embedding_reduced
sequence: float32
splits:
- name: train
num_bytes: 8763867
num_examples: 51093
- name: validation
num_bytes: 1165942
num_examples: 6799
- name: test
num_bytes: 528408
num_examples: 3081
download_size: 0
dataset_size: 10458217
- config_name: raw_and_enrichment_combined
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype:
class_label:
names:
'0': 'yes'
'1': 'no'
'2': up
'3': down
'4': left
'5': right
'6': 'on'
'7': 'off'
'8': stop
'9': go
'10': zero
'11': one
'12': two
'13': three
'14': four
'15': five
'16': six
'17': seven
'18': eight
'19': nine
'20': bed
'21': bird
'22': cat
'23': dog
'24': happy
'25': house
'26': marvin
'27': sheila
'28': tree
'29': wow
'30': _silence_
- name: is_unknown
dtype: bool
- name: speaker_id
dtype: string
- name: utterance_id
dtype: int8
- name: logits
sequence: float64
- name: embedding
sequence: float32
- name: label_string
dtype: string
- name: probability
dtype: float64
- name: probability_vector
sequence: float32
- name: prediction
dtype: int64
- name: prediction_string
dtype: string
- name: embedding_reduced
sequence: float32
splits:
- name: train
num_bytes: 1803565876.375
num_examples: 51093
- name: validation
num_bytes: 240795605.125
num_examples: 6799
- name: test
num_bytes: 109673146.875
num_examples: 3081
download_size: 0
dataset_size: 2154034628.375
configs:
- config_name: enrichment_only
data_files:
- split: train
path: enrichment_only/train-*
- split: validation
path: enrichment_only/validation-*
- split: test
path: enrichment_only/test-*
- config_name: raw_and_enrichment_combined
data_files:
- split: train
path: raw_and_enrichment_combined/train-*
- split: validation
path: raw_and_enrichment_combined/validation-*
- split: test
path: raw_and_enrichment_combined/test-*
---
# Dataset Card for SpeechCommands
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=speech-commands-enrichment_only)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [tensorflow.org/datasets](https://www.tensorflow.org/datasets/catalog/speech_commands)
- **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://arxiv.org/pdf/1804.03209.pdf)
- **Leaderboard:** [More Information Needed]
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=speech-commands-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [SpeechCommands Dataset](https://huggingface.co/datasets/speech_commands).
### Explore the Dataset
There are two configurations of the dataset: **enrichment_only** provides the enrichments calculated by Renumics using the MIT AST transformer, while **raw_and_enrichment_combined** provides a concatenated dataset of the original SpeechCommands data and the enrichments.
The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets[audio]
```
> **_Notice:_** On Linux, the non-Python dependency libsndfile must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information.
Load the dataset from huggingface in your notebook and start exploring with a simple view:
```python
import datasets
from renumics import spotlight
from renumics.spotlight.layouts import debug_classification

dataset = datasets.load_dataset("renumics/speech_commands_enrichment_only", "raw_and_enrichment_combined")
joined_dataset = datasets.concatenate_datasets([dataset["train"], dataset["validation"], dataset["test"]])

layout = debug_classification(
    label="label_string",
    prediction="prediction",
    embedding="embedding_reduced",
    features=["label", "prediction", "probability"],
    inspect={"audio": spotlight.Audio},
)

dtypes = {
    "audio": spotlight.Audio,
    "embedding_reduced": spotlight.Embedding,
}

spotlight.show(
    joined_dataset,
    dtype=dtypes,
    layout=layout,
)
```
You can use the UI to interactively configure the view on the data. Depending on the concrete task (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
As a plug-and-play option, you can check out the Hugging Face Space: [Hugging Face Space for speech enrichment](https://huggingface.co/spaces/renumics/speech_commands_enrichment_space)
Alternatively, you can run the notebook `exploration.ipynb` locally.
### SpeechCommands Dataset
This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands, and are spoken by a
variety of different speakers. This data set is designed to help train simple
machine learning models. It is covered in more detail at [https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).
Version 0.01 of the data set (configuration `"v0.01"`) was released on August 3rd 2017 and contains
64,727 audio files.
Version 0.02 of the data set (configuration `"v0.02"`) was released on April 11th 2018 and
contains 105,829 audio files.
### Supported Tasks and Leaderboards
* `keyword-spotting`: the dataset can be used to train and evaluate keyword
spotting systems. The task is to detect preregistered keywords by classifying utterances
into a predefined set of words. The task is usually performed on-device for the
fast response time. Thus, accuracy, model size, and inference time are all crucial.
### Languages
The language data in SpeechCommands is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
Example of a core word (`"label"` is a word, `"is_unknown"` is `False`):
```python
{
"file": "no/7846fd85_nohash_0.wav",
"audio": {
"path": "no/7846fd85_nohash_0.wav",
"array": array([ -0.00021362, -0.00027466, -0.00036621, ..., 0.00079346,
0.00091553, 0.00079346]),
"sampling_rate": 16000
},
"label": 1, # "no"
"is_unknown": False,
"speaker_id": "7846fd85",
"utterance_id": 0
}
```
Example of an auxiliary word (`"label"` is a word, `"is_unknown"` is `True`)
```python
{
"file": "tree/8b775397_nohash_0.wav",
"audio": {
"path": "tree/8b775397_nohash_0.wav",
"array": array([ -0.00854492, -0.01339722, -0.02026367, ..., 0.00274658,
0.00335693, 0.0005188]),
"sampling_rate": 16000
},
"label": 28, # "tree"
"is_unknown": True,
"speaker_id": "1b88bf70",
"utterance_id": 0
}
```
Example of background noise (`_silence_`) class:
```python
{
"file": "_silence_/doing_the_dishes.wav",
"audio": {
"path": "_silence_/doing_the_dishes.wav",
"array": array([ 0. , 0. , 0. , ..., -0.00592041,
-0.00405884, -0.00253296]),
"sampling_rate": 16000
},
"label": 30, # "_silence_"
"is_unknown": False,
"speaker_id": "None",
"utterance_id": 0 # doesn't make sense here
}
```
### Data Fields
* `file`: relative audio filename inside the original archive.
* `audio`: dictionary containing a relative audio filename,
a decoded audio array, and the sampling rate. Note that when accessing
the audio column: `dataset[0]["audio"]` the audio is automatically decoded
and resampled to `dataset.features["audio"].sampling_rate`.
Decoding and resampling of a large number of audios might take a significant
amount of time. Thus, it is important to first query the sample index before
the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred
over `dataset["audio"][0]`.
* `label`: either word pronounced in an audio sample or background noise (`_silence_`) class.
Note that it's an integer value corresponding to the class name.
* `is_unknown`: whether a word is auxiliary. Equals `False` if the word is a core word or `_silence_`,
`True` if it is an auxiliary word.
* `speaker_id`: unique id of a speaker. Equals `None` if the label is `_silence_`.
* `utterance_id`: incremental id of a word utterance within the same speaker.
### Data Splits
The dataset has two versions (= configurations): `"v0.01"` and `"v0.02"`. `"v0.02"`
contains more words (see section [Source Data](#source-data) for more details).
| | train | validation | test |
|----- |------:|-----------:|-----:|
| v0.01 | 51093 | 6799 | 3081 |
| v0.02 | 84848 | 9982 | 4890 |
Note that in the train and validation sets, examples of the `_silence_` class are longer than 1 second.
You can use the following code to sample 1-second examples from the longer ones:
```python
def sample_noise(example):
    # Use this function to extract random 1 sec slices of each _silence_ utterance,
    # e.g. inside `torch.utils.data.Dataset.__getitem__()`
    from random import randint

    if example["label"] == "_silence_":
        random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1)
        example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]]
    return example
```
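As a quick self-contained check of the slicing logic above (using a synthetic 3-second silence clip rather than real dataset audio — the list of zeros stands in for a decoded waveform):

```python
from random import randint

def sample_noise(example):
    # Extract a random 1-second slice from a longer _silence_ utterance.
    if example["label"] == "_silence_":
        random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1)
        example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]]
    return example

# Synthetic 3-second clip at 16 kHz: 48000 samples of "silence".
example = {"label": "_silence_", "speech": [0.0] * 48000, "sample_rate": 16000}
out = sample_noise(example)
print(len(out["speech"]))  # 16000
```

Whatever random offset is drawn, the resulting slice is always exactly one second (`sample_rate` samples) long.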
## Dataset Creation
### Curation Rationale
The primary goal of the dataset is to provide a way to build and test small
models that can detect a single word from a set of target words and differentiate it
from background noise or unrelated speech with as few false positives as possible.
### Source Data
#### Initial Data Collection and Normalization
The audio files were collected using crowdsourcing, see
[aiyprojects.withgoogle.com/open_speech_recording](https://github.com/petewarden/extract_loudest_section)
for some of the open source audio collection code that was used. The goal was to gather examples of
people speaking single-word commands, rather than conversational sentences, so
they were prompted for individual words over the course of a five minute
session.
In version 0.01 thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". Other words are considered to be auxiliary (in current implementation
it is marked by `True` value of `"is_unknown"` feature). Their function is to teach a model to distinguish core words
from unrecognized ones.
The `_silence_` label contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise.
#### Who are the source language producers?
The audio files were collected using crowdsourcing.
### Annotations
#### Annotation process
Labels are drawn from a list of words prepared in advance.
Speakers were prompted for individual words over the course of a five minute
session.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons BY 4.0 License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)).
### Citation Information
```
@article{speechcommandsv2,
author = { {Warden}, P.},
title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1804.03209},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
year = 2018,
month = apr,
url = {https://arxiv.org/abs/1804.03209},
}
```
### Contributions
[More Information Needed] |
maghwa/OpenHermes-2-AR-10K-1 | ---
dataset_info:
features:
- name: idx
dtype: 'null'
- name: source
dtype: string
- name: conversations
dtype: string
- name: topic
dtype: 'null'
- name: id
dtype: string
- name: language
dtype: 'null'
- name: model_name
dtype: 'null'
- name: model
dtype: 'null'
- name: skip_prompt_formatting
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: hash
dtype: 'null'
- name: views
dtype: float64
- name: title
dtype: 'null'
- name: system_prompt
dtype: 'null'
- name: custom_instruction
dtype: 'null'
- name: category
dtype: 'null'
splits:
- name: train
num_bytes: 34246780
num_examples: 10001
download_size: 11733843
dataset_size: 34246780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xiongfei/testfruitdata | ---
license: openrail
---
|
EleutherAI/quirky_modularaddition_increment0_alice_hard | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 3563112.95803125
num_examples: 48087
- name: validation
num_bytes: 75436.0905
num_examples: 1018
- name: test
num_bytes: 73418.235
num_examples: 991
download_size: 1107453
dataset_size: 3711967.28353125
---
# Dataset Card for "quirky_modularaddition_increment0_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sasvata/MOM-Summary-Dataset | ---
license: apache-2.0
dataset_info:
features:
- name: Meeting Transcript
dtype: string
- name: Summary
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3761645
num_examples: 767
download_size: 1426442
dataset_size: 3761645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/Flatten-Math-Shepherd_0.9_12.0_-2.0_True | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: next_prompt
dtype: string
- name: next_response
dtype: string
- name: label
dtype: string
- name: question
dtype: string
- name: step
dtype: int64
- name: trajectory
dtype: int64
- name: mask
dtype: int64
- name: reward
dtype: float64
- name: mc_values
dtype: float64
splits:
- name: train
num_bytes: 4279469183
num_examples: 2482945
- name: test
num_bytes: 491798737
num_examples: 283159
download_size: 880084163
dataset_size: 4771267920
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
reciprocate/tinygsm_mixtral_1M_with_errors | ---
dataset_info:
features:
- name: question
dtype: string
- name: program
dtype: string
- name: result
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1369296129
num_examples: 1000000
download_size: 397367354
dataset_size: 1369296129
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
316usman/thematic1d_rr_embed | ---
dataset_info:
features:
- name: text
dtype: string
- name: document_url
dtype: string
- name: source_url
dtype: string
splits:
- name: train
num_bytes: 81805025
num_examples: 131629
download_size: 29481268
dataset_size: 81805025
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
snats/chico2prompts | ---
license: cc-by-4.0
---
# chico2prompts
There are two CSV files in Spanish, each following a different prompt template.
# Prompts
First prompt: suggest a title for the following story.
In English:
```
Suggest a title for the following story:
{{contents}}
completion:
Sure, here's a suitable title for the given story {{titles}}.
```
In Spanish:
```
Sugiere un título para la siguiente historia: {{contents}}
Completado por lo siguiente:
Un título posible para la siguiente historia podría ser: {{titles}}
```
Second prompt: write a short story.
In English:
```
prompt:
Write a short story based on the following title:
{{titles}}
completion:
{{contents}}
```
In Spanish:
```
prompt:
Escribe una historia corta basada en el siguiente título {{titles}}
completion:
{{contents}}
```
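A minimal sketch of filling the first (title-suggestion) template for a single row — note that the column names `contents` and `titles` are an assumption inferred from the `{{contents}}`/`{{titles}}` placeholders above, not confirmed field names of the CSV files:

```python
# Fill the first (title-suggestion) template for a single row.
# NOTE: the column names "contents" and "titles" are assumptions
# inferred from the {{contents}}/{{titles}} placeholders above.
def build_title_example(row: dict) -> dict:
    prompt = f"Sugiere un título para la siguiente historia: {row['contents']}"
    completion = (
        "Un título posible para la siguiente historia podría ser: "
        f"{row['titles']}"
    )
    return {"prompt": prompt, "completion": completion}

example = build_title_example(
    {"contents": "Había una vez un pueblo junto al mar.", "titles": "El mar"}
)
```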
This dataset is a sub-version of the original [chico dataset](https://huggingface.co/datasets/snats/chico). |
deepset/prompt-injections | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 71720
num_examples: 546
- name: test
num_bytes: 15981
num_examples: 116
download_size: 51215
dataset_size: 87701
license: cc-by-4.0
---
# Dataset Card for "deberta-v3-base-injection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/clueweb09_ko | ---
pretty_name: '`clueweb09/ko`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `clueweb09/ko`
The `clueweb09/ko` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/ko).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=18,075,141
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb09_ko', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
diwank/synthetic-student-profiles | ---
dataset_info:
features:
- name: Name
dtype: string
- name: Age
dtype: int64
- name: Sex
dtype: string
- name: Major
dtype: string
- name: Year
dtype: string
- name: GPA
dtype: float64
- name: Hobbies
sequence: string
- name: Country
dtype: string
- name: State/Province
dtype: string
- name: Unique Quality
dtype: string
- name: Story
dtype: string
splits:
- name: train
num_bytes: 61833951
num_examples: 23236
download_size: 31090449
dataset_size: 61833951
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
joey234/mmlu-global_facts-neg-answer | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_answer
dtype: string
splits:
- name: test
num_bytes: 19969
num_examples: 100
download_size: 12966
dataset_size: 19969
---
# Dataset Card for "mmlu-global_facts-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hyperdemocracy/us-congress | ---
configs:
- config_name: billstatus_xml
data_files:
- split: '108'
path: data/billstatus_xml/usc-108-billstatus-xml.parquet
- split: '109'
path: data/billstatus_xml/usc-109-billstatus-xml.parquet
- split: '110'
path: data/billstatus_xml/usc-110-billstatus-xml.parquet
- split: '111'
path: data/billstatus_xml/usc-111-billstatus-xml.parquet
- split: '112'
path: data/billstatus_xml/usc-112-billstatus-xml.parquet
- split: '113'
path: data/billstatus_xml/usc-113-billstatus-xml.parquet
- split: '114'
path: data/billstatus_xml/usc-114-billstatus-xml.parquet
- split: '115'
path: data/billstatus_xml/usc-115-billstatus-xml.parquet
- split: '116'
path: data/billstatus_xml/usc-116-billstatus-xml.parquet
- split: '117'
path: data/billstatus_xml/usc-117-billstatus-xml.parquet
- split: '118'
path: data/billstatus_xml/usc-118-billstatus-xml.parquet
- config_name: billstatus_parsed
data_files:
- split: '108'
path: data/billstatus_parsed/usc-108-billstatus-parsed.parquet
- split: '109'
path: data/billstatus_parsed/usc-109-billstatus-parsed.parquet
- split: '110'
path: data/billstatus_parsed/usc-110-billstatus-parsed.parquet
- split: '111'
path: data/billstatus_parsed/usc-111-billstatus-parsed.parquet
- split: '112'
path: data/billstatus_parsed/usc-112-billstatus-parsed.parquet
- split: '113'
path: data/billstatus_parsed/usc-113-billstatus-parsed.parquet
- split: '114'
path: data/billstatus_parsed/usc-114-billstatus-parsed.parquet
- split: '115'
path: data/billstatus_parsed/usc-115-billstatus-parsed.parquet
- split: '116'
path: data/billstatus_parsed/usc-116-billstatus-parsed.parquet
- split: '117'
path: data/billstatus_parsed/usc-117-billstatus-parsed.parquet
- split: '118'
path: data/billstatus_parsed/usc-118-billstatus-parsed.parquet
- config_name: textversions_dtd_xml
data_files:
- split: '113'
path: data/textversions_dtd_xml/usc-113-textversions-dtd-xml.parquet
- split: '114'
path: data/textversions_dtd_xml/usc-114-textversions-dtd-xml.parquet
- split: '115'
path: data/textversions_dtd_xml/usc-115-textversions-dtd-xml.parquet
- split: '116'
path: data/textversions_dtd_xml/usc-116-textversions-dtd-xml.parquet
- split: '117'
path: data/textversions_dtd_xml/usc-117-textversions-dtd-xml.parquet
- split: '118'
path: data/textversions_dtd_xml/usc-118-textversions-dtd-xml.parquet
- config_name: textversions_uslm_xml
data_files:
- split: '113'
path: data/textversions_uslm_xml/usc-113-textversions-uslm-xml.parquet
- split: '114'
path: data/textversions_uslm_xml/usc-114-textversions-uslm-xml.parquet
- split: '115'
path: data/textversions_uslm_xml/usc-115-textversions-uslm-xml.parquet
- split: '116'
path: data/textversions_uslm_xml/usc-116-textversions-uslm-xml.parquet
- split: '117'
path: data/textversions_uslm_xml/usc-117-textversions-uslm-xml.parquet
- split: '118'
path: data/textversions_uslm_xml/usc-118-textversions-uslm-xml.parquet
- config_name: unified_v1
data_files:
- split: '113'
path: data/unified_v1/usc-113-unified-v1.parquet
- split: '114'
path: data/unified_v1/usc-114-unified-v1.parquet
- split: '115'
path: data/unified_v1/usc-115-unified-v1.parquet
- split: '116'
path: data/unified_v1/usc-116-unified-v1.parquet
- split: '117'
path: data/unified_v1/usc-117-unified-v1.parquet
- split: '118'
path: data/unified_v1/usc-118-unified-v1.parquet
- config_name: chunks_v1_s1024_o256
data_files:
- split: '113'
path: data/chunks_v1_s1024_o256/usc-113-chunks-v1-s1024-o256.parquet
- split: '114'
path: data/chunks_v1_s1024_o256/usc-114-chunks-v1-s1024-o256.parquet
- split: '115'
path: data/chunks_v1_s1024_o256/usc-115-chunks-v1-s1024-o256.parquet
- split: '116'
path: data/chunks_v1_s1024_o256/usc-116-chunks-v1-s1024-o256.parquet
- split: '117'
path: data/chunks_v1_s1024_o256/usc-117-chunks-v1-s1024-o256.parquet
- split: '118'
path: data/chunks_v1_s1024_o256/usc-118-chunks-v1-s1024-o256.parquet
- config_name: chunks_v1_s2048_o256
data_files:
- split: '113'
path: data/chunks_v1_s2048_o256/usc-113-chunks-v1-s2048-o256.parquet
- split: '114'
path: data/chunks_v1_s2048_o256/usc-114-chunks-v1-s2048-o256.parquet
- split: '115'
path: data/chunks_v1_s2048_o256/usc-115-chunks-v1-s2048-o256.parquet
- split: '116'
path: data/chunks_v1_s2048_o256/usc-116-chunks-v1-s2048-o256.parquet
- split: '117'
path: data/chunks_v1_s2048_o256/usc-117-chunks-v1-s2048-o256.parquet
- split: '118'
path: data/chunks_v1_s2048_o256/usc-118-chunks-v1-s2048-o256.parquet
- config_name: chunks_v1_s4096_o512
data_files:
- split: '113'
path: data/chunks_v1_s4096_o512/usc-113-chunks-v1-s4096-o512.parquet
- split: '114'
path: data/chunks_v1_s4096_o512/usc-114-chunks-v1-s4096-o512.parquet
- split: '115'
path: data/chunks_v1_s4096_o512/usc-115-chunks-v1-s4096-o512.parquet
- split: '116'
path: data/chunks_v1_s4096_o512/usc-116-chunks-v1-s4096-o512.parquet
- split: '117'
path: data/chunks_v1_s4096_o512/usc-117-chunks-v1-s4096-o512.parquet
- split: '118'
path: data/chunks_v1_s4096_o512/usc-118-chunks-v1-s4096-o512.parquet
- config_name: chunks_v1_s8192_o512
data_files:
- split: '113'
path: data/chunks_v1_s8192_o512/usc-113-chunks-v1-s8192-o512.parquet
- split: '114'
path: data/chunks_v1_s8192_o512/usc-114-chunks-v1-s8192-o512.parquet
- split: '115'
path: data/chunks_v1_s8192_o512/usc-115-chunks-v1-s8192-o512.parquet
- split: '116'
path: data/chunks_v1_s8192_o512/usc-116-chunks-v1-s8192-o512.parquet
- split: '117'
path: data/chunks_v1_s8192_o512/usc-117-chunks-v1-s8192-o512.parquet
- split: '118'
path: data/chunks_v1_s8192_o512/usc-118-chunks-v1-s8192-o512.parquet
license: mit
language:
- en
---
# Dataset Description
This dataset provides convenient access to congressional data from
the US [Government Publishing Office](https://www.gpo.gov/)
via the [GovInfo Bulk Data Repository](https://www.govinfo.gov/developers).
GovInfo provides bulk data in xml format.
The raw xml files were downloaded using the
[congress](https://github.com/unitedstates/congress) repo.
Further processing was done using the
hyperdemocracy [congress_prep](https://github.com/hyperdemocracy/congress-prep) repo.
## Quickstart
Check out our [hyperdemocracy getting started notebook](https://colab.research.google.com/drive/18_PKiMd_9xAV5IWQZVbx2iZkGW05REJL?usp=sharing) on Google Colab.
## BILLSTATUS (metadata for congresses 108-118)
* https://www.govinfo.gov/bulkdata/BILLSTATUS
* https://github.com/usgpo/bill-status/blob/main/BILLSTATUS-XML_User_User-Guide.md
* https://github.com/usgpo/bulk-data/blob/main/Bills-XML-User-Guide.md
These xml files contain metadata about each bill and
pointers to different xml files that contain various text versions of each bill.
## BILLS (text for congresses 113-118)
* https://www.govinfo.gov/bulkdata/BILLS
* https://xml.house.gov/
* https://github.com/usgpo/bill-dtd?tab=readme-ov-file
These xml files contain various text versions for each bill.
# Subset Descriptions
| Subset | Description |
|--------|-------------|
| billstatus_xml | One row per bill with the raw govinfo xml metadata file. |
| textversions_dtd_xml | One row per text version of a bill with the raw govinfo dtd xml text version file (complete). |
| textversions_uslm_xml | One row per text version of a bill with the raw govinfo uslm xml text version file (very sparse). |
| billstatus_parsed | One row per bill with the raw govinfo xml metadata parsed into a standardized json model. |
| unified_v1 | One row per bill with parsed metadata and parsed plaintext text versions joined. |
| chunks_v1_s{chunk_size}_o{chunk_overlap} | Text broken into chunks of size {chunk_size} with overlap {chunk_overlap} (units in characters) |
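The chunks subsets in the last row amount to a sliding character window; a minimal sketch of the idea (the actual chunker in the congress-prep repo may differ in details):

```python
def chunk_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Split `text` into windows of `chunk_size` characters where
    consecutive windows share `chunk_overlap` characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# e.g. the chunks_v1_s1024_o256 scheme:
chunks = chunk_text("x" * 3000, chunk_size=1024, chunk_overlap=256)
```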
# Examples
The dataset is broken into subsets (described above) and splits (one split per congress number).
```python
from datasets import load_dataset
# load each split into a `DatasetDict` keyed on congress number
dsd = load_dataset(path="hyperdemocracy/us-congress", name="unified_v1")
# load a single congress number into a `Dataset`
ds = load_dataset(path="hyperdemocracy/us-congress", name="unified_v1", split="117")
# load all congress numbers into a single `Dataset`
ds = load_dataset(path="hyperdemocracy/us-congress", name="unified_v1", split="all")
```
# Congress Number to Date Mapping
| Congress Number | Years | Metadata | Text |
|-----------------|-------|----------|------|
| 118 | 2023-2024 | True | True |
| 117 | 2021-2022 | True | True |
| 116 | 2019-2020 | True | True |
| 115 | 2017-2018 | True | True |
| 114 | 2015-2016 | True | True |
| 113 | 2013-2014 | True | True |
| 112 | 2011-2012 | True | False |
| 111 | 2009-2010 | True | False |
| 110 | 2007-2008 | True | False |
| 109 | 2005-2006 | True | False |
| 108 | 2003-2004 | True | False |
|
voidful/set-dg | ---
language: en
dataset_info:
features:
- name: question
dtype: string
- name: passage
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: answer_index
dtype: int64
splits:
- name: eduqg_train
num_bytes: 2914261
num_examples: 2126
- name: eduqg_valid
num_bytes: 729652
num_examples: 522
- name: cosmosqa_train
num_bytes: 7385154
num_examples: 12088
- name: cosmosqa_test
num_bytes: 2376996
num_examples: 3738
- name: cosmosqa_val
num_bytes: 551960
num_examples: 795
- name: mctest_train
num_bytes: 1153917
num_examples: 874
- name: mctest_test
num_bytes: 549224
num_examples: 435
- name: mctest_val
num_bytes: 193168
num_examples: 151
- name: reclor_train
num_bytes: 5220478
num_examples: 4619
- name: reclor_valid
num_bytes: 579336
num_examples: 500
- name: dream_train
num_bytes: 3845518
num_examples: 5297
- name: dream_test
num_bytes: 1254192
num_examples: 1777
- name: dream_val
num_bytes: 1257577
num_examples: 1751
- name: eqg_race_f_train
num_bytes: 26950949
num_examples: 15279
- name: eqg_race_f_test
num_bytes: 1453647
num_examples: 830
- name: eqg_race_f_dev
num_bytes: 1583078
num_examples: 906
download_size: 26917282
dataset_size: 57999107
---
# Dataset Card for "set-dg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zkdeng/commonSpidersBalanced | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Aculepeira_ceropegia
'1': Agalenatea_redii
'2': Agelena_labyrinthica
'3': Anasaitis_canosa
'4': Anyphaena_accentuata
'5': Aphonopelma_hentzi
'6': Araneus_diadematus
'7': Araneus_marmoreus
'8': Araneus_quadratus
'9': Araneus_trifolium
'10': Araniella_displicata
'11': Argiope_argentata
'12': Argiope_aurantia
'13': Argiope_bruennichi
'14': Argiope_keyserlingi
'15': Argiope_lobata
'16': Argiope_trifasciata
'17': Attulus_fasciger
'18': Austracantha_minax
'19': Badumna_longinqua
'20': Carrhotus_xanthogramma
'21': Colonus_hesperus
'22': Colonus_sylvanus
'23': Cyclosa_conica
'24': Cyrtophora_citricola
'25': Dolomedes_albineus
'26': Dolomedes_minor
'27': Dolomedes_scriptus
'28': Dolomedes_tenebrosus
'29': Dolomedes_triton
'30': Dysdera_crocata
'31': Ebrechtella_tricuspidata
'32': Enoplognatha_ovata
'33': Eratigena_duellica
'34': Eriophora_ravilla
'35': Eris_militaris
'36': Evarcha_arcuata
'37': Gasteracantha_cancriformis
'38': Habronattus_pyrrithrix
'39': Hasarius_adansoni
'40': Helpis_minitabunda
'41': Hentzia_mitrata
'42': Hentzia_palmarum
'43': Herpyllus_ecclesiasticus
'44': Heteropoda_venatoria
'45': Hogna_radiata
'46': Holocnemus_pluchei
'47': Kukulcania_hibernalis
'48': Larinioides_cornutus
'49': Larinioides_sclopetarius
'50': Latrodectus_geometricus
'51': Latrodectus_hesperus
'52': Latrodectus_mactans
'53': Leucauge_argyra
'54': Leucauge_argyrobapta
'55': Leucauge_dromedaria
'56': Leucauge_venusta
'57': Lyssomanes_viridis
'58': Maevia_inclemens
'59': Mangora_acalypha
'60': Maratus_griseus
'61': Marpissa_muscosa
'62': Mecynogea_lemniscata
'63': Menemerus_bivittatus
'64': Menemerus_semilimbatus
'65': Micrathena_gracilis
'66': Micrathena_sagittata
'67': Micrommata_virescens
'68': Misumena_vatia
'69': Misumenoides_formosipes
'70': Misumessus_oblongus
'71': Naphrys_pulex
'72': Neoscona_arabesca
'73': Neoscona_crucifera
'74': Neoscona_oaxacensis
'75': Nephila_pilipes
'76': Neriene_radiata
'77': Nesticodes_rufipes
'78': Nuctenea_umbratica
'79': Oxyopes_salticus
'80': Oxyopes_scalaris
'81': Paraphidippus_aurantius
'82': Parasteatoda_tepidariorum
'83': Peucetia_viridans
'84': Phidippus_audax
'85': Phidippus_clarus
'86': Phidippus_johnsoni
'87': Phidippus_putnami
'88': Philaeus_chrysops
'89': Philodromus_dispar
'90': Pholcus_phalangioides
'91': Pisaura_mirabilis
'92': Pisaurina_mira
'93': Platycryptus_californicus
'94': Platycryptus_undatus
'95': Plebs_eburnus
'96': Plexippus_paykulli
'97': Rabidosa_rabida
'98': Salticus_scenicus
'99': Sassacus_vitis
'100': Scytodes_thoracica
'101': Socca_pustulosa
'102': Steatoda_grossa
'103': Steatoda_nobilis
'104': Steatoda_triangulosa
'105': Synema_globosum
'106': Thomisus_onustus
'107': Trichonephila_clavata
'108': Trichonephila_clavipes
'109': Trichonephila_edulis
'110': Trichonephila_plumipes
'111': Verrucosa_arenata
'112': Zoropsis_spinimana
'113': Zygiella_x-notata
splits:
- name: train
num_bytes: 3394498525.325
num_examples: 166907
download_size: 3267608949
dataset_size: 3394498525.325
---
# Dataset Card for "commonSpidersBalanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tanvirsrbd1/nov1_with_annotation | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: xml
dtype: string
- name: html
dtype: string
- name: response
dtype: string
- name: annotated
dtype: string
splits:
- name: train
num_bytes: 37050488.1899474
num_examples: 1323
download_size: 4186492
dataset_size: 37050488.1899474
---
# Dataset Card for "nov1_with_annotation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joagonzalez/asr-interviews-test-full | ---
dataset_info:
features:
- name: filename
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: speaker
dtype: string
- name: duration
dtype: float64
- name: filesize
dtype: float64
- name: channels
dtype: int64
- name: sample_rate
dtype: int64
- name: bitrate
dtype: int64
- name: word_count
dtype: int64
splits:
- name: test
num_bytes: 117835383.01896264
num_examples: 288
download_size: 119397139
dataset_size: 117835383.01896264
---
# Dataset Card for "asr-interviews-test-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
boeddeker/espnet_libri_css_diarize_spectral_rttm | ---
license: mit
---
The RTTM files are generated by executing the `libri_css` recipe from `ESPnet` (https://github.com/espnet/espnet/tree/master/egs/libri_css/asr1).
|
kinyugo/lima_concatenated | ---
language: en
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2883591
num_examples: 1030
- name: test
num_bytes: 37237
num_examples: 300
download_size: 1722252
dataset_size: 2920828
---
# Dataset Card for "lima_concatenated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/code_instructions_standardized_cluster_12_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 14477312
num_examples: 15494
download_size: 7003682
dataset_size: 14477312
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code_instructions_standardized_cluster_12_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jpdiazpardo/guturalScream_metalVocals | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: song_name
dtype: string
- name: artist_name
dtype: string
- name: album_name
dtype: string
- name: release_year
dtype: int64
- name: video_id
dtype: string
- name: timestamp_start
dtype: float64
- name: timestamp_end
dtype: float64
- name: sample_rate
dtype: int64
splits:
- name: train
num_bytes: 1259147118.2099998
num_examples: 1740
- name: test
num_bytes: 403875517.75
num_examples: 580
download_size: 1629538009
dataset_size: 1663022635.9599998
license: mit
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- music
size_categories:
- 1K<n<10K
pretty_name: Scream and gutural sound transcriptions from heavy metal songs
---
# Dataset Card for "Gutural Speech Recognition"
This dataset contains annotations of 57 songs.
### How to use
Load the dataset from huggingface in your notebook:
```python
!pip install datasets[audio]
import datasets
dataset = datasets.load_dataset("jpdiazpardo/guturalScream_metalVocals")
```
### Data Fields
* `audio`: the trimmed audio file from the song.
* `text`: the transcribed vocals.
* `song_name`: the song title.
* `artist_name`: the artist name.
* `album_name`: the name of the album where the song was released.
* `release_year`: the release year of the song.
* `video_id`: the YouTube video id.
* `timestamp_start`: the start time of the snippet from the full audio.
* `timestamp_end`: the end time of the snippet from the full audio.
* `sample_rate`: the sampling rate of the audio.
### Youtube playlist: [Gutural Speech Recognition](https://www.youtube.com/playlist?list=PLkCTyMdVt0AHgp-80jqskjUtfHo-Ht4xy)
### Source Data
| video id | artist | song | album | release_year |
|-------------|-------------------------|-----------------------------------------------|------------------------------------------|--------------|
| 5cLFdIzMhn8 | Amon Amarth | Crack the Sky | Berserker | 2019 |
| m_m2oYJkx1A | Arch Enemy | Deceiver, Deceiver | Deceivers | 2022 |
| mjF1rmSV1dM | Arch Enemy | The Eagle Flies Alone | Will to Power | 2017 |
| O59JNz7rdIU | Architects | A Match Made In Heaven | All Our Gods have Abandoned Us | 2016 |
| -jFgNreZPf0 | Asking Alexandria | Into the Fire | Asking Alexandria | 2017 |
| l7Fi8-7HRhc | Asking Alexandria | Not the American Average | Stand Up and Scream | 2009 |
| z71_E_YqWqA | Asking Alexandria | The Final Episode (Let's Change the Channel) | Stand Up and Scream | 2010 |
| Ql2THDlBD9g | Asking Alexandria | Vultures | Asking Alexandria | 2017 |
| W1l6izYwIhM | Attila | Pizza | Pizza | 2018 |
| gVC7f59ibI8 | Attila | Three 6 | Three 6 | 2017 |
| HKWqzjQAv14 | Behemoth | Ecclesia Diabolica Catholica | I Loved you at your Darkest | 2018 |
| UA_j_72psoo | Behemoth | O Father O Satan O Sun! | The Satanist | 2014 |
| g7yxjTcM7Bs | Behemoth | Wolves ov Siberia | I Loved you at your Darkest | 2018 |
| C7cczTyQ4iY | Bring me the Horizon | Go to Hell, For Heaven's Sake | Sempiternal | 2013 |
| AWggPLXeOkU | Bring me the Horizon | Pray for Plagues | Count your Blessings | 2006 |
| q2I0ulTZWXA | Bullet for my Valentine | Waking the Demon | Scream Aim Fire | 2008 |
| 482tDopNzoc | Cannibal Corpse | Evisceration Plague | Evisceration Plague | 2009 |
| vlgiWBCbCJk | Cannibal Corpse | Hammer Smashed Face | Tomb of the Mutilated | 1992 |
| Wks1aBh49sQ | Cradle of Filth | Crawling King Chaos | Existence is Futile | 2021 |
| DNRIaeg6EyY | Cradle of Filth | Heartbreak and Seance | Cryptoriana – The Seductiveness of Decay | 2017 |
| 04F4xlWSFh0 | Drowning Pool | Bodies | Sinner | 2001 |
| B4CcX720DW4 | Gojira | Amazonia | Fortitude | 2021 |
| tvmC7qxtQxs | Gojira | Into the Storm | Fortitude | 2021 |
| EkRrend3sIw | Gojira | The Chant | Fortitude | 2021 |
| uJRUq90EC_A | Hypocrisy | Chemical Whore | Worship | 2021 |
| 75xYN7VBiTY | In Flames | Alias | A Sense of Purpose | 2008 |
| FC3djB7-nc0 | Jinjer | Ape | Micro | 2019 |
| 7f353euyRno | Jinjer | Pit of Consciousness | Macro | 2019 |
| 2N0ShfOOEq4 | Killswitch Engage | The Signal Fire | Atonement | 2019 |
| Lm-sI1EB8BA | Killswitch Engage | Unleashed | Atonement | 2019 |
| lNwHjNz6My4 | Lamb of God | Checkmate | Lamb of God | 2020 |
| SnEXcv0YJQA | Lamb of God | Nevermore | Omens | 2022 |
| VHVsG2taJVs | Lamb of God | Omens | Omens | 2022 |
| GkoYsXDvL8s | Lamb of God | Wake up Dead | Omens | 2022 |
| 7Na3sECLYI8 | Motionless in White | 570 | Graveyard Shift | 2017 |
| Pj2miRJ6bZs | Motionless in White | Another Life | Disguise | 2019 |
| cIEc_11Aydc | Motionless in White | Disguise | Disguise | 2019 |
| TwO0zLLybQ0 | Motionless in White | Eternally Yours | Graveyard Shift | 2017 |
| CYG2kaZ5OfQ | Motionless in White | Undead Ahead 2: The Tale of the Midnight Ride | Disguise | 2019 |
| udeaeWGO4Is | Of Mice & Men | Earth & Sky | Earth and Sky | 2019 |
| AkFqg5wAuFk | Pantera | Walk | Vulgar Display of Power | 1992 |
| UpEHp6u0ZxU | Parkway Drive | Absolute Power | Reverence | 2018 |
| 4dBA2YxbFoE | Parkway Drive | Chronos | Reverence | 2018 |
| 4FTVDKo7kWY | Parkway Drive | I Hope you Rot | Reverence | 2018 |
| WL_8ZY89dP4 | Parkway Drive | Prey | Reverence | 2018 |
| lP6QplMvOBg | Parkway Drive | Shadow Boxing | Reverence | 2018 |
| 5uwyvvxNvqQ | Parkway Drive | Wishing Wells | Reverence | 2018 |
| wLoYIBEZEfw | Slipknot | All Out Life | We are not your Kind | 2019 |
| dymAGwL2kQI | Slipknot | The Chapeltown Rag | The End, so Far | 2022 |
| FukeNR1ydOA | Suicide Silence | Disengage | No Time to Bleed | 2009 |
| dWoQyC8_WtM | Suicide Silence | Unanswered | The Cleansing | 2007 |
| ds9s-pzGD0M | Suicide Silence | You only live Once | The Black Crown | 2011 |
| t2d3EDNDCn8 | Wage War | Low | Pressure | 2019 |
| lWo1N8Q0t9o | Wage War | Witness | Deadweight | 2017 |
| rbWFZMFlDIU | Whitechapel | I Will Find you | Kin | 2021 |
| eVI6c0TlM2g | Whitechapel | The Saw is the Law | Our Endless War | 2014 |
| W72Lnz1n-jw | Whitechapel | When a Demon Defiles a Witch | The Valley | 2019 |
#### Initial Data Collection and Normalization
The data was collected from the YouTube playlist above and trimmed using the timestamps provided in the dataset.
The audio files were passed through the [Spleeter](https://joss.theoj.org/papers/10.21105/joss.02154) (Hennequin et al., 2020) source separation algorithm to separate the vocals from the other components.
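A minimal sketch of the timestamp-based trimming step (illustrative only; it assumes the full track is already decoded into a NumPy array at a known sampling rate, and is not the exact preprocessing script used to build the dataset):

```python
import numpy as np

def trim_snippet(samples: np.ndarray, sample_rate: int,
                 timestamp_start: float, timestamp_end: float) -> np.ndarray:
    """Slice `samples` between the two timestamps (given in seconds)."""
    start = int(timestamp_start * sample_rate)
    end = int(timestamp_end * sample_rate)
    return samples[start:end]

# One second of audio at 16 kHz, trimmed to its middle half second.
full_track = np.zeros(16000, dtype=np.float32)
snippet = trim_snippet(full_track, 16000, 0.25, 0.75)
```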
### Licensing Information
MIT License
Copyright (c) 2023 Juan Pablo Díaz
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
### Citation Information
```
@article{
Hennequin2020,
doi = {10.21105/joss.02154},
url = {https://doi.org/10.21105/joss.02154},
year = {2020}, publisher = {The Open Journal},
volume = {5}, number = {50}, pages = {2154},
author = {Romain Hennequin and Anis Khlif and Felix Voituret and Manuel Moussallam},
title = {Spleeter: a fast and efficient music source separation tool with pre-trained models},
journal = {Journal of Open Source Software}
}
``` |
SpeedOfMagic/xsum_tiny_ood | ---
dataset_info:
features:
- name: document
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2343786.0
num_examples: 1100
- name: dev
num_bytes: 398593.0
num_examples: 200
- name: test
num_bytes: 468841.0
num_examples: 200
download_size: 2101221
dataset_size: 3211220.0
---
# Dataset Card for "xsum_tiny_ood"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
asheinin/The_Mathematical_Principles_of_Natural_Philosophy_1846 | ---
license: openrail
task_categories:
- text-generation
language:
- en
pretty_name: newton_math_prenciples
---
# Dataset Card for Newton's Mathematical Principles
### Dataset Summary
This dataset is meant to be used as a showcase for finetuning an LLM on a specific domain.
### Supported Tasks and Leaderboards
Text generation
### Languages
English
## Dataset Structure
text file
### Data Splits
Train only, the entire 1846 English version of the book.
### Source Data
https://ws-export.wmcloud.org/?lang=en&title=The_Mathematical_Principles_of_Natural_Philosophy_(1846)
### Contributions
Avraham Sheinin, Domino Data Lab |
WeixuanYuan/VAE_sound | ---
license: openrail
---
|
joey234/mmlu-high_school_physics-rule-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 122298
num_examples: 151
download_size: 65900
dataset_size: 122298
---
# Dataset Card for "mmlu-high_school_physics-rule-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_13b_VQAv2_visclues_ns_128 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_8
num_bytes: 3262495
num_examples: 128
download_size: 641392
dataset_size: 3262495
---
# Dataset Card for "VQAv2_sample_validation_facebook_opt_13b_VQAv2_visclues_ns_128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lilbillbiscuit/biocoder_hidden | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 48125672
num_examples: 12792
- name: test
num_bytes: 8563408
num_examples: 1035
download_size: 2865779
dataset_size: 56689080
---
# Dataset Card for "biocoder_hidden"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
leeminxji/doguri | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 211325.0
num_examples: 32
download_size: 212377
dataset_size: 211325.0
---
# Dataset Card for "doguri"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_cola_doubly_filled_comp | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 510
num_examples: 6
- name: test
num_bytes: 172
num_examples: 3
- name: train
num_bytes: 2402
num_examples: 36
download_size: 7800
dataset_size: 3084
---
# Dataset Card for "MULTI_VALUE_cola_doubly_filled_comp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
benlipkin/arlsat | ---
license: mit
---
Raw dataset: https://github.com/zhongwanjun/AR-LSAT
|
pleisto/tianpeng-dataset | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- en
- zh
--- |
ovior/twitter_dataset_1713219621 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2319848
num_examples: 7203
download_size: 1305201
dataset_size: 2319848
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bigscience-data/roots_pt_wiktionary | ---
language: pt
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
|
open-llm-leaderboard/details_zarakiquemparte__kuchiki-l2-7b | ---
pretty_name: Evaluation run of zarakiquemparte/kuchiki-l2-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [zarakiquemparte/kuchiki-l2-7b](https://huggingface.co/zarakiquemparte/kuchiki-l2-7b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_zarakiquemparte__kuchiki-l2-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-27T01:56:08.960825](https://huggingface.co/datasets/open-llm-leaderboard/details_zarakiquemparte__kuchiki-l2-7b/blob/main/results_2023-10-27T01-56-08.960825.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.27611157718120805,\n\
\ \"em_stderr\": 0.004578442614328635,\n \"f1\": 0.35264576342282045,\n\
\ \"f1_stderr\": 0.004531331117609875,\n \"acc\": 0.38779557831535094,\n\
\ \"acc_stderr\": 0.009079399041337897\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.27611157718120805,\n \"em_stderr\": 0.004578442614328635,\n\
\ \"f1\": 0.35264576342282045,\n \"f1_stderr\": 0.004531331117609875\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04473085670962851,\n \
\ \"acc_stderr\": 0.005693886131407058\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7308602999210734,\n \"acc_stderr\": 0.012464911951268734\n\
\ }\n}\n```"
repo_url: https://huggingface.co/zarakiquemparte/kuchiki-l2-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|arc:challenge|25_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_27T01_56_08.960825
path:
- '**/details_harness|drop|3_2023-10-27T01-56-08.960825.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-27T01-56-08.960825.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_27T01_56_08.960825
path:
- '**/details_harness|gsm8k|5_2023-10-27T01-56-08.960825.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-27T01-56-08.960825.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hellaswag|10_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T00-21-14.015290.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T00-21-14.015290.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T00-21-14.015290.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_27T01_56_08.960825
path:
- '**/details_harness|winogrande|5_2023-10-27T01-56-08.960825.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-27T01-56-08.960825.parquet'
- config_name: results
data_files:
- split: 2023_09_22T00_21_14.015290
path:
- results_2023-09-22T00-21-14.015290.parquet
- split: 2023_10_27T01_56_08.960825
path:
- results_2023-10-27T01-56-08.960825.parquet
- split: latest
path:
- results_2023-10-27T01-56-08.960825.parquet
---
# Dataset Card for Evaluation run of zarakiquemparte/kuchiki-l2-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/zarakiquemparte/kuchiki-l2-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [zarakiquemparte/kuchiki-l2-7b](https://huggingface.co/zarakiquemparte/kuchiki-l2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_zarakiquemparte__kuchiki-l2-7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-27T01:56:08.960825](https://huggingface.co/datasets/open-llm-leaderboard/details_zarakiquemparte__kuchiki-l2-7b/blob/main/results_2023-10-27T01-56-08.960825.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.27611157718120805,
"em_stderr": 0.004578442614328635,
"f1": 0.35264576342282045,
"f1_stderr": 0.004531331117609875,
"acc": 0.38779557831535094,
"acc_stderr": 0.009079399041337897
},
"harness|drop|3": {
"em": 0.27611157718120805,
"em_stderr": 0.004578442614328635,
"f1": 0.35264576342282045,
"f1_stderr": 0.004531331117609875
},
"harness|gsm8k|5": {
"acc": 0.04473085670962851,
"acc_stderr": 0.005693886131407058
},
"harness|winogrande|5": {
"acc": 0.7308602999210734,
"acc_stderr": 0.012464911951268734
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
logicreasoning/logi_glue | ---
configs:
- config_name: logiQA
data_files:
- split: train
path: "logiQA/logiQA_train.jsonl"
- split: test
path: "logiQA/logiQA_test.jsonl"
- config_name: cluttr
data_files:
- split: train
path: "cluttr/cluttr_train.jsonl"
- split: test
path: "cluttr/cluttr_test.jsonl"
- config_name: abduction_animal
data_files:
- split: train
path: "abduction_animal/abduction_animal_train.jsonl"
- split: test
path: "abduction_animal/abduction_animal_test.jsonl"
- config_name: adv
data_files:
- split: train
path: "adv/adv_arct_train.jsonl"
- split: test
path: "adv/adv_arct_dev.jsonl"
- config_name: alpha_nli
data_files:
- split: train
path: "alpha_nli/alpha_nli_train.jsonl"
- split: test
path: "alpha_nli/alpha_nli_dev.jsonl"
- config_name: logicNLI
data_files:
- split: train
path: "logicNLI/logicNLI_train.jsonl"
- split: test
path: "logicNLI/logicNLI_dev.jsonl"
- config_name: folio
data_files:
- split: train
path: "folio/folio_train.jsonl"
- split: test
path: "folio/folio_dev.jsonl"
- config_name: proofwriter
data_files:
- split: train
path: "proofwriter/proofwriter_train.jsonl"
- split: test
path: "proofwriter/proofwriter_test.jsonl"
- config_name: rulebert
data_files:
- split: train
path: "rulebert/rulebert_train.jsonl"
- split: test
path: "rulebert/rulebert_test.jsonl"
- config_name: anli
data_files:
- split: train
path: "anli/anli_train.jsonl"
- split: test
path: "anli/anli_test.jsonl"
- config_name: logiQA_2.0
data_files:
- split: test
path: "logiQA_2.0/logiQA_2.jsonl"
- config_name: cluttr_systematic
data_files:
- split: test
path: "cluttr_systematic/cluttr_systematic_test.jsonl"
- config_name: bigbench-logical-Args
data_files:
- split: test
path: "bigbench-logical-Args/bigbench-logical-args_test.jsonl"
- config_name: natlang
data_files:
- split: test
path: "natlang/natlang_test.jsonl"
- config_name: babi_task_16
data_files:
- split: test
path: "babi_task_16/babi_task_16_test.jsonl"
- config_name: wanli
data_files:
- split: test
path: "wanli/wanli_test.jsonl"
- config_name: abduction_person
data_files:
- split: test
path: "abduction_person/abduction_person_test.jsonl"
- config_name: prontoqa
data_files:
- split: test
path: "prontoqa/prontoqa_test.jsonl"
- config_name: babi_task_15
data_files:
- split: test
path: "babi_task_15/babi_task_15_test.jsonl"
- config_name: winologic
data_files:
- split: test
path: "winologic/winologic_test.jsonl"
- config_name: birdelectricity
data_files:
- split: test
path: "birdelectricity/bird_electricity_test.jsonl"
- config_name: bigbench_deduction
data_files:
- split: test
path: "bigbench_deduction/big_bench_deduction_test.jsonl"
- config_name: reclor
data_files:
- split: test
path: "reclor/reclor_test.jsonl"
- config_name: Rulebert-Union-Rules
data_files:
- split: test
path: "Rulebert-Union-Rules/Rulebert-Union-Rules-5k_test.jsonl"
--- |
chaudha7/Diary-Entry-To-Rap | ---
license: apache-2.0
---
|
yjernite/prof_report__Lykon-DreamShaper__multi__24 | ---
dataset_info:
features:
- name: cluster_id
dtype: int64
- name: cluster_size
dtype: int64
- name: img_ids
sequence: int64
- name: img_cluster_scores
sequence: float64
splits:
- name: accountant
num_bytes: 1768
num_examples: 7
- name: aerospace_engineer
num_bytes: 1696
num_examples: 4
- name: aide
num_bytes: 1696
num_examples: 4
- name: air_conditioning_installer
num_bytes: 1696
num_examples: 4
- name: architect
num_bytes: 1792
num_examples: 8
- name: artist
num_bytes: 1840
num_examples: 10
- name: author
num_bytes: 1744
num_examples: 6
- name: baker
num_bytes: 1864
num_examples: 11
- name: bartender
num_bytes: 1744
num_examples: 6
- name: bus_driver
num_bytes: 1768
num_examples: 7
- name: butcher
num_bytes: 1720
num_examples: 5
- name: career_counselor
num_bytes: 1696
num_examples: 4
- name: carpenter
num_bytes: 1672
num_examples: 3
- name: carpet_installer
num_bytes: 1720
num_examples: 5
- name: cashier
num_bytes: 1696
num_examples: 4
- name: ceo
num_bytes: 1792
num_examples: 8
- name: childcare_worker
num_bytes: 1720
num_examples: 5
- name: civil_engineer
num_bytes: 1768
num_examples: 7
- name: claims_appraiser
num_bytes: 1744
num_examples: 6
- name: cleaner
num_bytes: 1768
num_examples: 7
- name: clergy
num_bytes: 1720
num_examples: 5
- name: clerk
num_bytes: 1720
num_examples: 5
- name: coach
num_bytes: 1696
num_examples: 4
- name: community_manager
num_bytes: 1720
num_examples: 5
- name: compliance_officer
num_bytes: 1672
num_examples: 3
- name: computer_programmer
num_bytes: 1768
num_examples: 7
- name: computer_support_specialist
num_bytes: 1744
num_examples: 6
- name: computer_systems_analyst
num_bytes: 1720
num_examples: 5
- name: construction_worker
num_bytes: 1648
num_examples: 2
- name: cook
num_bytes: 1720
num_examples: 5
- name: correctional_officer
num_bytes: 1768
num_examples: 7
- name: courier
num_bytes: 1816
num_examples: 9
- name: credit_counselor
num_bytes: 1672
num_examples: 3
- name: customer_service_representative
num_bytes: 1672
num_examples: 3
- name: data_entry_keyer
num_bytes: 1744
num_examples: 6
- name: dental_assistant
num_bytes: 1672
num_examples: 3
- name: dental_hygienist
num_bytes: 1672
num_examples: 3
- name: dentist
num_bytes: 1816
num_examples: 9
- name: designer
num_bytes: 1792
num_examples: 8
- name: detective
num_bytes: 1744
num_examples: 6
- name: director
num_bytes: 1840
num_examples: 10
- name: dishwasher
num_bytes: 1792
num_examples: 8
- name: dispatcher
num_bytes: 1672
num_examples: 3
- name: doctor
num_bytes: 1816
num_examples: 9
- name: drywall_installer
num_bytes: 1672
num_examples: 3
- name: electrical_engineer
num_bytes: 1840
num_examples: 10
- name: electrician
num_bytes: 1672
num_examples: 3
- name: engineer
num_bytes: 1696
num_examples: 4
- name: event_planner
num_bytes: 1672
num_examples: 3
- name: executive_assistant
num_bytes: 1672
num_examples: 3
- name: facilities_manager
num_bytes: 1792
num_examples: 8
- name: farmer
num_bytes: 1648
num_examples: 2
- name: fast_food_worker
num_bytes: 1816
num_examples: 9
- name: file_clerk
num_bytes: 1720
num_examples: 5
- name: financial_advisor
num_bytes: 1672
num_examples: 3
- name: financial_analyst
num_bytes: 1696
num_examples: 4
- name: financial_manager
num_bytes: 1696
num_examples: 4
- name: firefighter
num_bytes: 1648
num_examples: 2
- name: fitness_instructor
num_bytes: 1672
num_examples: 3
- name: graphic_designer
num_bytes: 1768
num_examples: 7
- name: groundskeeper
num_bytes: 1696
num_examples: 4
- name: hairdresser
num_bytes: 1792
num_examples: 8
- name: head_cook
num_bytes: 1696
num_examples: 4
- name: health_technician
num_bytes: 1744
num_examples: 6
- name: industrial_engineer
num_bytes: 1696
num_examples: 4
- name: insurance_agent
num_bytes: 1696
num_examples: 4
- name: interior_designer
num_bytes: 1672
num_examples: 3
- name: interviewer
num_bytes: 1696
num_examples: 4
- name: inventory_clerk
num_bytes: 1816
num_examples: 9
- name: it_specialist
num_bytes: 1672
num_examples: 3
- name: jailer
num_bytes: 1744
num_examples: 6
- name: janitor
num_bytes: 1720
num_examples: 5
- name: laboratory_technician
num_bytes: 1720
num_examples: 5
- name: language_pathologist
num_bytes: 1696
num_examples: 4
- name: lawyer
num_bytes: 1720
num_examples: 5
- name: librarian
num_bytes: 1672
num_examples: 3
- name: logistician
num_bytes: 1744
num_examples: 6
- name: machinery_mechanic
num_bytes: 1720
num_examples: 5
- name: machinist
num_bytes: 1744
num_examples: 6
- name: maid
num_bytes: 1768
num_examples: 7
- name: manager
num_bytes: 1720
num_examples: 5
- name: manicurist
num_bytes: 1768
num_examples: 7
- name: market_research_analyst
num_bytes: 1720
num_examples: 5
- name: marketing_manager
num_bytes: 1696
num_examples: 4
- name: massage_therapist
num_bytes: 1672
num_examples: 3
- name: mechanic
num_bytes: 1648
num_examples: 2
- name: mechanical_engineer
num_bytes: 1816
num_examples: 9
- name: medical_records_specialist
num_bytes: 1720
num_examples: 5
- name: mental_health_counselor
num_bytes: 1744
num_examples: 6
- name: metal_worker
num_bytes: 1648
num_examples: 2
- name: mover
num_bytes: 1768
num_examples: 7
- name: musician
num_bytes: 1720
num_examples: 5
- name: network_administrator
num_bytes: 1624
num_examples: 1
- name: nurse
num_bytes: 1720
num_examples: 5
- name: nursing_assistant
num_bytes: 1672
num_examples: 3
- name: nutritionist
num_bytes: 1672
num_examples: 3
- name: occupational_therapist
num_bytes: 1696
num_examples: 4
- name: office_clerk
num_bytes: 1696
num_examples: 4
- name: office_worker
num_bytes: 1696
num_examples: 4
- name: painter
num_bytes: 1816
num_examples: 9
- name: paralegal
num_bytes: 1648
num_examples: 2
- name: payroll_clerk
num_bytes: 1648
num_examples: 2
- name: pharmacist
num_bytes: 1696
num_examples: 4
- name: pharmacy_technician
num_bytes: 1720
num_examples: 5
- name: photographer
num_bytes: 1792
num_examples: 8
- name: physical_therapist
num_bytes: 1744
num_examples: 6
- name: pilot
num_bytes: 1744
num_examples: 6
- name: plane_mechanic
num_bytes: 1768
num_examples: 7
- name: plumber
num_bytes: 1648
num_examples: 2
- name: police_officer
num_bytes: 1768
num_examples: 7
- name: postal_worker
num_bytes: 1840
num_examples: 10
- name: printing_press_operator
num_bytes: 1720
num_examples: 5
- name: producer
num_bytes: 1816
num_examples: 9
- name: psychologist
num_bytes: 1768
num_examples: 7
- name: public_relations_specialist
num_bytes: 1648
num_examples: 2
- name: purchasing_agent
num_bytes: 1696
num_examples: 4
- name: radiologic_technician
num_bytes: 1840
num_examples: 10
- name: real_estate_broker
num_bytes: 1696
num_examples: 4
- name: receptionist
num_bytes: 1672
num_examples: 3
- name: repair_worker
num_bytes: 1672
num_examples: 3
- name: roofer
num_bytes: 1696
num_examples: 4
- name: sales_manager
num_bytes: 1720
num_examples: 5
- name: salesperson
num_bytes: 1720
num_examples: 5
- name: school_bus_driver
num_bytes: 1864
num_examples: 11
- name: scientist
num_bytes: 1696
num_examples: 4
- name: security_guard
num_bytes: 1720
num_examples: 5
- name: sheet_metal_worker
num_bytes: 1672
num_examples: 3
- name: singer
num_bytes: 1768
num_examples: 7
- name: social_assistant
num_bytes: 1720
num_examples: 5
- name: social_worker
num_bytes: 1816
num_examples: 9
- name: software_developer
num_bytes: 1648
num_examples: 2
- name: stocker
num_bytes: 1888
num_examples: 12
- name: supervisor
num_bytes: 1816
num_examples: 9
- name: taxi_driver
num_bytes: 1744
num_examples: 6
- name: teacher
num_bytes: 1720
num_examples: 5
- name: teaching_assistant
num_bytes: 1720
num_examples: 5
- name: teller
num_bytes: 1792
num_examples: 8
- name: therapist
num_bytes: 1672
num_examples: 3
- name: tractor_operator
num_bytes: 1744
num_examples: 6
- name: truck_driver
num_bytes: 1696
num_examples: 4
- name: tutor
num_bytes: 1672
num_examples: 3
- name: underwriter
num_bytes: 1696
num_examples: 4
- name: veterinarian
num_bytes: 1648
num_examples: 2
- name: welder
num_bytes: 1648
num_examples: 2
- name: wholesale_buyer
num_bytes: 1864
num_examples: 11
- name: writer
num_bytes: 1768
num_examples: 7
download_size: 631776
dataset_size: 252200
---
# Dataset Card for "prof_report__Lykon-DreamShaper__multi__24"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Cleudemir/basedevozesestoicas | ---
license: openrail
---
|
Iceclear/DF2K-OST | ---
license: apache-2.0
task_categories:
- image-to-image
---
A collection of raw images from the DIV2K, Flickr2K and OST datasets.
Please refer [here](https://github.com/XPixelGroup/BasicSR/blob/master/docs/DatasetPreparation.md) for details.
## Citation
```bibtex
@inproceedings{agustsson2017ntire,
title={Ntire 2017 challenge on single image super-resolution: Dataset and study},
author={Agustsson, Eirikur and Timofte, Radu},
booktitle={CVPRW},
year={2017}
}
@InProceedings{Lim_2017_CVPR_Workshops,
author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
booktitle = {CVPRW},
year = {2017}
}
@inproceedings{wang2018recovering,
title={Recovering realistic texture in image super-resolution by deep spatial feature transform},
author={Wang, Xintao and Yu, Ke and Dong, Chao and Loy, Chen Change},
booktitle={CVPR},
year={2018}
}
``` |
Nexdata/11000_Image_Video_caption_data_of_human_action | ---
license: cc-by-nc-nd-4.0
---
## Description
11,000 Image & Video caption data of human action contains 10,000 images and 1,000 videos of various human behaviors in different seasons and from different shooting angles, including indoor scenes and outdoor scenes. Descriptions are provided in English and Chinese and mainly cover the gender, age, clothing, behavior, and body movements of the characters.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1289?source=Huggingface
## Data size
10,000 images, 1,000 videos
## Race distribution
Caucasian, black
## Gender distribution
male, female
## Age distribution
from teenagers to old age, mainly young and middle-aged
## Collection environment
including indoor scenes and outdoor scenes
## Collection diversity
different age groups, different collection environments, different seasons, various shooting angles, and various human behaviors
## Data format
image format is .jpg, video format is .mp4, text format is .txt
## Description language
English, Chinese
## Text length
in principle, 30~60 words, usually 3-5 sentences
## Main description content
gender, age, clothing, behavior description, body movements
## Accuracy rate
the proportion of correctly labeled images is not less than 97%
# Licensing Information
Commercial License
|
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855040 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: jackieliu930/bart-large-cnn-samsum
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: jackieliu930/bart-large-cnn-samsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
mustapha/QuranExe | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- ar
license:
- mit
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: QuranExe
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- sentence-similarity
task_ids:
- language-modeling
- masked-language-modeling
---
## Dataset Description
- **Size of downloaded dataset files:** 126 MB
This dataset contains the exegeses/tafsirs (تفسير القرآن) of the Holy Quran in Arabic by 8 exegetes.
This is a non-official dataset. It has been scraped from the Quran.com API.
This dataset contains `49888` records with over 14 million words, `8` records per Quranic verse (one per exegete).
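As a quick consistency check (assuming exactly one record per exegete for every verse), these counts line up with the standard 6,236-verse numbering of the Quran:

```python
records = 49888   # total records in the dataset
exegetes = 8      # records per verse, one per exegete
verses = records // exegetes
print(verses)  # 6236
```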
Usage example:
```python
from datasets import load_dataset
tafsirs = load_dataset("mustapha/QuranExe")
``` |
notrichardren/mathematical-potato | ---
configs:
- config_name: default
data_files:
- split: difficult_leftpotato
path: data/difficult_leftpotato-*
- split: difficult_rightpotato
path: data/difficult_rightpotato-*
- split: easy_leftpotato
path: data/easy_leftpotato-*
- split: easy_rightpotato
path: data/easy_rightpotato-*
- split: easy
path: data/easy-*
- split: difficult
path: data/difficult-*
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: type
dtype: string
- name: ind
dtype: int64
splits:
- name: difficult_leftpotato
num_bytes: 502554
num_examples: 5390
- name: difficult_rightpotato
num_bytes: 502554
num_examples: 5390
- name: easy_leftpotato
num_bytes: 260196
num_examples: 5390
- name: easy_rightpotato
num_bytes: 260196
num_examples: 5390
- name: easy
num_bytes: 222466
num_examples: 5390
- name: difficult
num_bytes: 464824
num_examples: 5390
download_size: 1065311
dataset_size: 2212790
---
# Dataset Card for "mathematical-potato"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
7x7x7x7x7x7/Hank | ---
license: openrail
---
|
Felladrin/ChatML-truthy-dpo-v0.1 | ---
license: cc-by-4.0
language:
- en
size_categories:
- 1K<n<10K
---
[jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) in ChatML format, ready to use in [HuggingFace TRL's DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer).
Python code used for conversion:
```python
from datasets import load_dataset
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")
def format(columns):
prompt = f"<|im_start|>user\n{columns['prompt']}<|im_end|>\n<|im_start|>assistant\n"
if (columns['system']):
prompt = f"<|im_start|>system\n{columns['system']}<|im_end|>\n{prompt}"
return {
"prompt": prompt,
"chosen": f"{columns['chosen']}<|im_end|>",
"rejected": f"{columns['rejected']}<|im_end|>",
}
dataset.map(format).select_columns(['prompt', 'chosen', 'rejected', 'id', 'source']).to_parquet("train.parquet")
```
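As a quick sanity check, the same `format` function from the conversion script can be applied to a made-up row (the field values below are invented for illustration) to see the resulting ChatML strings:

```python
def format(columns):
    # Same mapping as in the conversion script above
    prompt = f"<|im_start|>user\n{columns['prompt']}<|im_end|>\n<|im_start|>assistant\n"
    if columns['system']:
        prompt = f"<|im_start|>system\n{columns['system']}<|im_end|>\n{prompt}"
    return {
        "prompt": prompt,
        "chosen": f"{columns['chosen']}<|im_end|>",
        "rejected": f"{columns['rejected']}<|im_end|>",
    }

# Made-up row, for illustration only
row = {
    "system": "You are an honest assistant.",
    "prompt": "Is the sky green?",
    "chosen": "No, the sky usually appears blue.",
    "rejected": "Yes, it is green.",
}
out = format(row)
print(out["prompt"])
print(out["chosen"])
```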
|
ammarnasr/the-stack-java-clean | ---
license: openrail
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 3582248477.9086223
num_examples: 806789
- name: test
num_bytes: 394048264.9973618
num_examples: 88747
- name: valid
num_bytes: 3982797.09401595
num_examples: 897
download_size: 1323156008
dataset_size: 3980279540
task_categories:
- text-generation
language:
- code
tags:
- code
pretty_name: TheStack-Java
size_categories:
- 1M<n<10M
---
## Dataset 1: TheStack - Java - Cleaned
**Description**: This dataset is drawn from TheStack Corpus, an open-source code dataset with over 3TB of GitHub data covering 48 programming languages. We selected a small portion of this dataset to optimize smaller language models for Java, a popular statically typed language.
**Target Language**: Java
**Dataset Size**:
- Training: 900,000 files
- Validation: 50,000 files
- Test: 50,000 files
**Preprocessing**:
1. Selected Java as the target language due to its popularity on GitHub.
2. Filtered out files with average line length > 100 characters, maximum line length > 1000 characters, and alphabet ratio < 25%.
3. Split files into 90% training, 5% validation, and 5% test sets.
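The filtering in step 2 can be sketched as a simple predicate. This is a minimal reconstruction based only on the thresholds stated above; the original filtering code is not part of this card, so details (e.g. exactly how the alphabet ratio is computed) are assumptions:

```python
def keep_file(content: str) -> bool:
    """Return True if a source file passes the cleaning filters:
    average line length <= 100, max line length <= 1000, alphabet ratio >= 25%.
    (Reconstruction of the thresholds described in the card, not the original code.)
    """
    lines = content.splitlines()
    if not lines or not content:
        return False
    avg_line_length = sum(len(line) for line in lines) / len(lines)
    max_line_length = max(len(line) for line in lines)
    alphabet_ratio = sum(c.isalpha() for c in content) / len(content)
    return (
        avg_line_length <= 100
        and max_line_length <= 1000
        and alphabet_ratio >= 0.25
    )

print(keep_file("public class Hello {\n    int x = 1;\n}\n"))  # True
print(keep_file("x" * 5000))  # False: a single 5000-character line
```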
**Tokenizer**: Byte Pair Encoding (BPE) tokenizer with tab and whitespace tokens. GPT-2 vocabulary extended with special tokens.
**Training Sequences**: Sequences constructed by joining training data text to reach a context length of 2048 tokens (1024 tokens for full fine-tuning). |
autoevaluate/autoeval-eval-phpthinh__examplei-match-bd10ea-1748761026 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplei
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: ['f1']
dataset_name: phpthinh/examplei
dataset_config: match
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: phpthinh/examplei
* Config: match
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
open-llm-leaderboard/details_giraffe176__Open_Hermes_Maid_Sam_Mistral_dtv0.1 | ---
pretty_name: Evaluation run of giraffe176/Open_Hermes_Maid_Sam_Mistral_dtv0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [giraffe176/Open_Hermes_Maid_Sam_Mistral_dtv0.1](https://huggingface.co/giraffe176/Open_Hermes_Maid_Sam_Mistral_dtv0.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_giraffe176__Open_Hermes_Maid_Sam_Mistral_dtv0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-19T07:25:14.448730](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__Open_Hermes_Maid_Sam_Mistral_dtv0.1/blob/main/results_2024-02-19T07-25-14.448730.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6492929541395905,\n\
\ \"acc_stderr\": 0.03204314290781419,\n \"acc_norm\": 0.6502356687496237,\n\
\ \"acc_norm_stderr\": 0.032693608758353816,\n \"mc1\": 0.41003671970624234,\n\
\ \"mc1_stderr\": 0.017217844717449325,\n \"mc2\": 0.5797198662912402,\n\
\ \"mc2_stderr\": 0.015180976093776475\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6390784982935154,\n \"acc_stderr\": 0.014034761386175456,\n\
\ \"acc_norm\": 0.6774744027303754,\n \"acc_norm_stderr\": 0.013659980894277366\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6803425612427804,\n\
\ \"acc_stderr\": 0.004653907471785644,\n \"acc_norm\": 0.8638717386974706,\n\
\ \"acc_norm_stderr\": 0.003422238702226359\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \
\ \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"\
acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n\
\ \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.027943219989337128,\n\
\ \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.027943219989337128\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107224,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107224\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146267,\n\
\ \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146267\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5175438596491229,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.5175438596491229,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42063492063492064,\n \"acc_stderr\": 0.025424835086924003,\n \"\
acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.025424835086924003\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356852,\n \"\
acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356852\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"\
acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.03192271569548301,\n\
\ \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.03192271569548301\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586818,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586818\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.02394672474156398,\n \
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.02394672474156398\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.362962962962963,\n \"acc_stderr\": 0.02931820364520686,\n \
\ \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.02931820364520686\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.02971914287634286,\n \
\ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.02971914287634286\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8366972477064221,\n \"acc_stderr\": 0.01584825580650155,\n \"\
acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.01584825580650155\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8088235294117647,\n \"acc_stderr\": 0.027599174300640766,\n \"\
acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.027599174300640766\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8016877637130801,\n \"acc_stderr\": 0.02595502084162113,\n \
\ \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.02595502084162113\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159465,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159465\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n\
\ \"acc_stderr\": 0.022209309073165612,\n \"acc_norm\": 0.8675213675213675,\n\
\ \"acc_norm_stderr\": 0.022209309073165612\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8339719029374202,\n\
\ \"acc_stderr\": 0.013306478243066302,\n \"acc_norm\": 0.8339719029374202,\n\
\ \"acc_norm_stderr\": 0.013306478243066302\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069363,\n\
\ \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069363\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.36201117318435755,\n\
\ \"acc_stderr\": 0.016073067350153087,\n \"acc_norm\": 0.36201117318435755,\n\
\ \"acc_norm_stderr\": 0.016073067350153087\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7483660130718954,\n \"acc_stderr\": 0.024848018263875195,\n\
\ \"acc_norm\": 0.7483660130718954,\n \"acc_norm_stderr\": 0.024848018263875195\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n\
\ \"acc_stderr\": 0.025494259350694912,\n \"acc_norm\": 0.7202572347266881,\n\
\ \"acc_norm_stderr\": 0.025494259350694912\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7438271604938271,\n \"acc_stderr\": 0.024288533637726095,\n\
\ \"acc_norm\": 0.7438271604938271,\n \"acc_norm_stderr\": 0.024288533637726095\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.470013037809648,\n\
\ \"acc_stderr\": 0.012747248967079064,\n \"acc_norm\": 0.470013037809648,\n\
\ \"acc_norm_stderr\": 0.012747248967079064\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.02767846864214472,\n\
\ \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.02767846864214472\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.673202614379085,\n \"acc_stderr\": 0.01897542792050721,\n \
\ \"acc_norm\": 0.673202614379085,\n \"acc_norm_stderr\": 0.01897542792050721\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.02826388994378459,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.02826388994378459\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n\
\ \"acc_stderr\": 0.02411267824090081,\n \"acc_norm\": 0.8656716417910447,\n\
\ \"acc_norm_stderr\": 0.02411267824090081\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \
\ \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n\
\ \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n\
\ \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.41003671970624234,\n\
\ \"mc1_stderr\": 0.017217844717449325,\n \"mc2\": 0.5797198662912402,\n\
\ \"mc2_stderr\": 0.015180976093776475\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8113654301499605,\n \"acc_stderr\": 0.010995172318019811\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6535253980288097,\n \
\ \"acc_stderr\": 0.013107179054313401\n }\n}\n```"
repo_url: https://huggingface.co/giraffe176/Open_Hermes_Maid_Sam_Mistral_dtv0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|arc:challenge|25_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|gsm8k|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hellaswag|10_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-19T07-25-14.448730.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-19T07-25-14.448730.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- '**/details_harness|winogrande|5_2024-02-19T07-25-14.448730.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-19T07-25-14.448730.parquet'
- config_name: results
data_files:
- split: 2024_02_19T07_25_14.448730
path:
- results_2024-02-19T07-25-14.448730.parquet
- split: latest
path:
- results_2024-02-19T07-25-14.448730.parquet
---
# Dataset Card for Evaluation run of giraffe176/Open_Hermes_Maid_Sam_Mistral_dtv0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [giraffe176/Open_Hermes_Maid_Sam_Mistral_dtv0.1](https://huggingface.co/giraffe176/Open_Hermes_Maid_Sam_Mistral_dtv0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
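As an illustration of the split-naming convention above, a run timestamp such as `2024-02-19T07:25:14.448730` becomes the split name `2024_02_19T07_25_14.448730`. A minimal sketch of that mapping (the helper function here is hypothetical, not part of the `datasets` API):

```python
def timestamp_to_split(ts: str) -> str:
    """Map a run timestamp to its split name by replacing the
    date hyphens and time colons with underscores."""
    date, time = ts.split("T")
    return date.replace("-", "_") + "T" + time.replace(":", "_")

print(timestamp_to_split("2024-02-19T07:25:14.448730"))
# 2024_02_19T07_25_14.448730
```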
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_giraffe176__Open_Hermes_Maid_Sam_Mistral_dtv0.1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-19T07:25:14.448730](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__Open_Hermes_Maid_Sam_Mistral_dtv0.1/blob/main/results_2024-02-19T07-25-14.448730.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6492929541395905,
"acc_stderr": 0.03204314290781419,
"acc_norm": 0.6502356687496237,
"acc_norm_stderr": 0.032693608758353816,
"mc1": 0.41003671970624234,
"mc1_stderr": 0.017217844717449325,
"mc2": 0.5797198662912402,
"mc2_stderr": 0.015180976093776475
},
"harness|arc:challenge|25": {
"acc": 0.6390784982935154,
"acc_stderr": 0.014034761386175456,
"acc_norm": 0.6774744027303754,
"acc_norm_stderr": 0.013659980894277366
},
"harness|hellaswag|10": {
"acc": 0.6803425612427804,
"acc_stderr": 0.004653907471785644,
"acc_norm": 0.8638717386974706,
"acc_norm_stderr": 0.003422238702226359
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.027943219989337128,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.027943219989337128
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146267,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146267
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.025424835086924003,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.025424835086924003
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356852,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356852
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.03192271569548301,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.03192271569548301
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.02394672474156398,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.02394672474156398
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.362962962962963,
"acc_stderr": 0.02931820364520686,
"acc_norm": 0.362962962962963,
"acc_norm_stderr": 0.02931820364520686
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7016806722689075,
"acc_stderr": 0.02971914287634286,
"acc_norm": 0.7016806722689075,
"acc_norm_stderr": 0.02971914287634286
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8366972477064221,
"acc_stderr": 0.01584825580650155,
"acc_norm": 0.8366972477064221,
"acc_norm_stderr": 0.01584825580650155
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.027599174300640766,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.027599174300640766
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.02595502084162113,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.02595502084162113
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159465,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159465
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165612,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165612
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8339719029374202,
"acc_stderr": 0.013306478243066302,
"acc_norm": 0.8339719029374202,
"acc_norm_stderr": 0.013306478243066302
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069363,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069363
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.36201117318435755,
"acc_stderr": 0.016073067350153087,
"acc_norm": 0.36201117318435755,
"acc_norm_stderr": 0.016073067350153087
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7483660130718954,
"acc_stderr": 0.024848018263875195,
"acc_norm": 0.7483660130718954,
"acc_norm_stderr": 0.024848018263875195
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.025494259350694912,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.025494259350694912
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7438271604938271,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.7438271604938271,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.470013037809648,
"acc_stderr": 0.012747248967079064,
"acc_norm": 0.470013037809648,
"acc_norm_stderr": 0.012747248967079064
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7058823529411765,
"acc_stderr": 0.02767846864214472,
"acc_norm": 0.7058823529411765,
"acc_norm_stderr": 0.02767846864214472
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.673202614379085,
"acc_stderr": 0.01897542792050721,
"acc_norm": 0.673202614379085,
"acc_norm_stderr": 0.01897542792050721
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.02826388994378459,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.02826388994378459
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.02411267824090081,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.02411267824090081
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.41003671970624234,
"mc1_stderr": 0.017217844717449325,
"mc2": 0.5797198662912402,
"mc2_stderr": 0.015180976093776475
},
"harness|winogrande|5": {
"acc": 0.8113654301499605,
"acc_stderr": 0.010995172318019811
},
"harness|gsm8k|5": {
"acc": 0.6535253980288097,
"acc_stderr": 0.013107179054313401
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
liuyanchen1015/MULTI_VALUE_qqp_present_perfect_for_past | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 842481
num_examples: 4302
- name: test
num_bytes: 8245632
num_examples: 42265
- name: train
num_bytes: 7701770
num_examples: 39183
download_size: 10545018
dataset_size: 16789883
---
# Dataset Card for "MULTI_VALUE_qqp_present_perfect_for_past"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maghwa/OpenHermes-2-AR-10K-25-670k-680k | ---
dataset_info:
features:
- name: skip_prompt_formatting
dtype: 'null'
- name: model_name
dtype: 'null'
- name: model
dtype: 'null'
- name: conversations
dtype: string
- name: source
dtype: string
- name: id
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: idx
dtype: 'null'
- name: language
dtype: 'null'
- name: hash
dtype: 'null'
- name: views
dtype: float64
- name: topic
dtype: 'null'
- name: title
dtype: 'null'
- name: category
dtype: 'null'
- name: custom_instruction
dtype: 'null'
- name: system_prompt
dtype: 'null'
splits:
- name: train
num_bytes: 24962526
num_examples: 10001
download_size: 11272617
dataset_size: 24962526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amishshah/imbalanced_7 | ---
dataset_info:
features:
- name: title
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 45166669.74
num_examples: 27000
- name: test
num_bytes: 5018518.86
num_examples: 3000
download_size: 0
dataset_size: 50185188.6
---
# Dataset Card for "imbalanced_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lucianopacheco/alpaca_1col_1000 | ---
license: apache-2.0
---
|
0x7o/oasst-ru-dpo-v1 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 3847248.0
num_examples: 1322
download_size: 1926633
dataset_size: 3847248.0
---
# Dataset Card for "oasst-ru-dpo-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_bavest__fin-llama-33b-merged | ---
pretty_name: Evaluation run of bavest/fin-llama-33b-merged
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [bavest/fin-llama-33b-merged](https://huggingface.co/bavest/fin-llama-33b-merged)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bavest__fin-llama-33b-merged\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-16T23:28:46.893925](https://huggingface.co/datasets/open-llm-leaderboard/details_bavest__fin-llama-33b-merged/blob/main/results_2023-09-16T23-28-46.893925.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0018875838926174498,\n\
\ \"em_stderr\": 0.0004445109990558753,\n \"f1\": 0.06358221476510076,\n\
\ \"f1_stderr\": 0.0013748196874116337,\n \"acc\": 0.48127991536483655,\n\
\ \"acc_stderr\": 0.010695229631509682\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558753,\n\
\ \"f1\": 0.06358221476510076,\n \"f1_stderr\": 0.0013748196874116337\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16224412433661864,\n \
\ \"acc_stderr\": 0.010155130880393522\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8003157063930545,\n \"acc_stderr\": 0.011235328382625842\n\
\ }\n}\n```"
repo_url: https://huggingface.co/bavest/fin-llama-33b-merged
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_09_16T23_28_46.893925
path:
- '**/details_harness|drop|3_2023-09-16T23-28-46.893925.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-16T23-28-46.893925.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_16T23_28_46.893925
path:
- '**/details_harness|gsm8k|5_2023-09-16T23-28-46.893925.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-16T23-28-46.893925.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_16T23_28_46.893925
path:
- '**/details_harness|winogrande|5_2023-09-16T23-28-46.893925.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-16T23-28-46.893925.parquet'
- config_name: results
data_files:
- split: 2023_09_16T23_28_46.893925
path:
- results_2023-09-16T23-28-46.893925.parquet
- split: latest
path:
- results_2023-09-16T23-28-46.893925.parquet
---
# Dataset Card for Evaluation run of bavest/fin-llama-33b-merged
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bavest/fin-llama-33b-merged
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [bavest/fin-llama-33b-merged](https://huggingface.co/bavest/fin-llama-33b-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bavest__fin-llama-33b-merged",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-09-16T23:28:46.893925](https://huggingface.co/datasets/open-llm-leaderboard/details_bavest__fin-llama-33b-merged/blob/main/results_2023-09-16T23-28-46.893925.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in its own configuration, with a "latest" split for each eval):
```python
{
"all": {
"em": 0.0018875838926174498,
"em_stderr": 0.0004445109990558753,
"f1": 0.06358221476510076,
"f1_stderr": 0.0013748196874116337,
"acc": 0.48127991536483655,
"acc_stderr": 0.010695229631509682
},
"harness|drop|3": {
"em": 0.0018875838926174498,
"em_stderr": 0.0004445109990558753,
"f1": 0.06358221476510076,
"f1_stderr": 0.0013748196874116337
},
"harness|gsm8k|5": {
"acc": 0.16224412433661864,
"acc_stderr": 0.010155130880393522
},
"harness|winogrande|5": {
"acc": 0.8003157063930545,
"acc_stderr": 0.011235328382625842
}
}
```
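As a sanity check, the aggregated `acc` in the `"all"` block above is simply the mean of the two per-task accuracies (a hedged illustration only — the leaderboard's actual aggregation code may differ):

```python
# Mean of the per-task accuracies reported above (gsm8k and winogrande).
gsm8k_acc = 0.16224412433661864
winogrande_acc = 0.8003157063930545

all_acc = (gsm8k_acc + winogrande_acc) / 2
print(all_acc)  # ~0.48127991536483655, matching the "all" block
```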
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Mrmeneses03/VM | ---
license: openrail
---
|
ixelszy/Harousel_StyleTest | ---
license: wtfpl
---
|
umd-zhou-lab/Reflect_Alpaca_All | ---
dataset_info:
features:
- name: data
struct:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: origin
num_bytes: 19000112
num_examples: 52002
- name: reflect_instruction
num_bytes: 56984627
num_examples: 52002
- name: reflect_response
num_bytes: 57562361
num_examples: 52002
- name: reflect_both
num_bytes: 96478203
num_examples: 52002
download_size: 128917607
dataset_size: 230025303
---
# Dataset Card for "Reflect_Alpaca_All"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
greathero/newcontrailsvalidationdataset | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 325995028.72
num_examples: 16695
download_size: 319405984
dataset_size: 325995028.72
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fashion_mnist | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: fashion-mnist
pretty_name: FashionMNIST
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': T - shirt / top
'1': Trouser
'2': Pullover
'3': Dress
'4': Coat
'5': Sandal
'6': Shirt
'7': Sneaker
'8': Bag
'9': Ankle boot
config_name: fashion_mnist
splits:
- name: train
num_bytes: 31296655
num_examples: 60000
- name: test
num_bytes: 5233818
num_examples: 10000
download_size: 30878645
dataset_size: 36530473
---
# Dataset Card for FashionMNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)
- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)
- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of Zalando's article into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-fashion-mnist).
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data point comprises an image and its label.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x27601169DD8>,
'label': 9
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: an integer between 0 and 9 representing the classes with the following mapping:
| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
### Data Splits
The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
**From the arXiv paper:**
The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."
Here are some good reasons:
- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read "Most pairs of MNIST digits can be distinguished pretty well by just one pixel."
- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.
- MNIST cannot represent modern CV tasks, as noted in an April 2017 Twitter thread by deep learning expert and Keras author François Chollet.
### Source Data
#### Initial Data Collection and Normalization
**From the arXiv paper:**
Fashion-MNIST is based on the assortment on Zalando's website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and is stored as a 762 × 1000 JPEG. For efficiently serving different frontend components, the original picture is resampled at multiple resolutions, e.g. large, medium, small, thumbnail and tiny.
We use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline:
1. Converting the input to a PNG image.
2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5% of the maximum possible intensity in RGB space.
3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.
4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.
5. Extending the shortest edge to 28 and put the image to the center of the canvas.
6. Negating the intensities of the image.
7. Converting the image to 8-bit grayscale pixels.
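As a rough illustration of steps 5–7 (centering on a 28×28 canvas, negating intensities, 8-bit grayscale), here is a pure-Python sketch on a toy intensity grid; the helper names are hypothetical and the real pipeline operates on actual thumbnails:

```python
# Toy sketch of pipeline steps 5-7: pad a grayscale grid to 28x28 with
# the image centered, then negate 8-bit intensities (v -> 255 - v).

def center_pad_28(grid):
    h, w = len(grid), len(grid[0])
    top, left = (28 - h) // 2, (28 - w) // 2
    canvas = [[0] * 28 for _ in range(28)]
    for r in range(h):
        for c in range(w):
            canvas[top + r][left + c] = grid[r][c]
    return canvas

def negate(grid):
    return [[255 - v for v in row] for row in grid]

# A 28x20 "image" whose longest edge is already 28 (after step 3).
img = [[200] * 20 for _ in range(28)]
out = negate(center_pad_28(img))

print(len(out), len(out[0]))   # 28 28
print(out[0][0], out[14][14])  # 255 (negated padding), 55 (255 - 200)
```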
#### Who are the source language producers?
**From the arXiv paper:**
Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.
### Annotations
#### Annotation process
**From the arXiv paper:**
For the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando, Europe's largest online fashion platform. Each product contains only one silhouette code.
#### Who are the annotators?
**From the arXiv paper:**
The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Han Xiao and Kashif Rasul and Roland Vollgraf
### Licensing Information
MIT Licence
### Citation Information
```
@article{DBLP:journals/corr/abs-1708-07747,
author = {Han Xiao and
Kashif Rasul and
Roland Vollgraf},
title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning
Algorithms},
journal = {CoRR},
volume = {abs/1708.07747},
year = {2017},
url = {http://arxiv.org/abs/1708.07747},
archivePrefix = {arXiv},
eprint = {1708.07747},
timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
evilback/NarcoticLORA | ---
dataset_info:
features:
- name: Questions
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 103777
num_examples: 450
download_size: 41361
dataset_size: 103777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nailiamirzakhmedova/cmv_op_10k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: selftext
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33252845.121562276
num_examples: 10000
download_size: 19395504
dataset_size: 33252845.121562276
---
# Dataset Card for "cmv_op_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
edbeeching/prj_gia_dataset_atari_2B_atari_bowling_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning environment for the atari_bowling task, with samples collected from the policy atari_2B_atari_bowling_1111.
This environment was created as part of the Generally Intelligent Agents (GIA) project: https://github.com/huggingface/gia
|
varun-v-rao/newsqa | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: id
dtype: string
- name: labels
list:
- name: end
sequence: int64
- name: start
sequence: int64
splits:
- name: train
num_bytes: 57635506.94441748
num_examples: 18142
- name: validation
num_bytes: 3374870.9449192784
num_examples: 1070
download_size: 4666280
dataset_size: 61010377.88933676
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
## Dataset Card for "squad"
This truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.
### Preprocessing and Filtering
Preprocessing involves tokenization using the `BertTokenizer` (WordPiece), `RobertaTokenizer` (byte-level BPE), the OPT tokenizer (byte-pair encoding), and `T5Tokenizer` (SentencePiece). Each sample is then checked to ensure that the length of its tokenized input is within the specified `model_max_length` of each tokenizer.
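A minimal sketch of that length filter, with a whitespace split standing in for the real subword tokenizers (the helper names are hypothetical):

```python
# Toy sketch of the length filter: keep only samples whose tokenized
# context + question fits within a model's maximum input length.
# A whitespace split stands in for the real subword tokenizers.

def tokenize(text: str) -> list[str]:
    return text.split()

def fits(sample: dict, model_max_length: int) -> bool:
    n_tokens = len(tokenize(sample["context"])) + len(tokenize(sample["question"]))
    return n_tokens <= model_max_length

samples = [
    {"context": "short context", "question": "what is this"},
    {"context": "a very " + "long " * 600 + "context", "question": "what now"},
]

kept = [s for s in samples if fits(s, model_max_length=512)]
print(len(kept))  # 1 -- the overly long sample is filtered out
```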
|
gustavecortal/fr_covid_news | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- fr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- tabular-to-text
- summarization
- text-classification
task_ids:
- language-modeling
- multi-class-classification
- multi-label-classification
- topic-classification
pretty_name: COVID-19 French News dataset
language_bcp47:
- fr-FR
tags:
- conditional-text-generation
---
# Dataset Card for COVID-19 French News dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The COVID-19 French News dataset is a French-language dataset containing just over 40k unique news articles from more than 50 different French-speaking online newspapers. The dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an integrated web crawler and information extractor for news. The current version supports abstractive summarization and topic classification. Dataset Card not finished yet.
### Languages
The text in the dataset is in French.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `title`: title of the article
- `description`: description or a summary of the article
- `text`: the actual article text in raw form
- `domain`: source domain of the article (i.e. lemonde.fr)
- `url`: article URL, the original URL where it was scraped
- `labels`: classification labels
### Data Splits
The COVID-19 French News dataset has only a training split, so it has to be loaded with the train split specified: `fr_covid_news = load_dataset('gustavecortal/fr_covid_news', split="train")`
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
### Annotations
#### Annotation process
[More Information Needed]
### Personal and Sensitive Information
As one can imagine, data contains contemporary public figures or individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help researchers develop better French topic classification and abstractive summarization models for news related to COVID-19.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The data was originally collected by Gustave Cortal (gustavecortal@gmail.com)
### Licensing Information
Usage of the dataset is restricted to non-commercial research purposes only.
### Citation Information
```
@dataset{fr_covid_news,
author = {Gustave Cortal},
year = {2022},
title = {COVID-19 - French News Dataset},
url = {https://www.gustavecortal.com}
}
```
### Contributions
[@gustavecortal](https://github.com/gustavecortal) |
hip-piehunter/dbl_lang | ---
license: mit
---
|
rocioadlc/data2 | ---
license: apache-2.0
---
|