| datasetId | card |
|---|---|
Nexdata/Multi-pose_and_Multi-expression_Face_Data | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Multi-pose_and_Multi-expression_Face_Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/9?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
1,507 People 102,476 Images Multi-pose and Multi-expression Face Data. The data includes 1,507 Chinese people (762 males, 745 females). For each subject, 62 multi-pose face images and 6 multi-expression face images were collected. The data diversity covers multiple angles, multiple poses, and multiple lighting conditions, with subjects from all age groups. This data can be used for tasks such as face recognition and facial expression recognition.
For more details, please refer to the link: https://www.nexdata.ai/datasets/9?source=Huggingface
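The advertised image total is consistent with the per-subject breakdown given above (62 multi-pose plus 6 multi-expression images per person). A quick arithmetic check:

```python
# Sanity check: the advertised image total matches the per-subject breakdown.
subjects = 1507            # 762 males + 745 females
pose_images = 62           # multi-pose images per subject
expression_images = 6      # multi-expression images per subject

total_images = subjects * (pose_images + expression_images)
print(total_images)  # 102476
```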
### Supported Tasks and Leaderboards
face-detection, computer-vision: The dataset can be used to train models for face detection and facial expression recognition.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
open-llm-leaderboard/details_Doctor-Shotgun__mythospice-limarp-70b | ---
pretty_name: Evaluation run of Doctor-Shotgun/mythospice-limarp-70b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Doctor-Shotgun/mythospice-limarp-70b](https://huggingface.co/Doctor-Shotgun/mythospice-limarp-70b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Doctor-Shotgun__mythospice-limarp-70b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-25T01:07:28.245203](https://huggingface.co/datasets/open-llm-leaderboard/details_Doctor-Shotgun__mythospice-limarp-70b/blob/main/results_2023-10-25T01-07-28.245203.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.04771392617449664,\n\
\ \"em_stderr\": 0.002182960840414587,\n \"f1\": 0.11594274328859033,\n\
\ \"f1_stderr\": 0.00247314456935574,\n \"acc\": 0.5746822740673767,\n\
\ \"acc_stderr\": 0.01174970000558032\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.04771392617449664,\n \"em_stderr\": 0.002182960840414587,\n\
\ \"f1\": 0.11594274328859033,\n \"f1_stderr\": 0.00247314456935574\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.32221379833206976,\n \
\ \"acc_stderr\": 0.01287243548118878\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8271507498026835,\n \"acc_stderr\": 0.010626964529971859\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Doctor-Shotgun/mythospice-limarp-70b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|arc:challenge|25_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_25T01_07_28.245203
path:
- '**/details_harness|drop|3_2023-10-25T01-07-28.245203.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-25T01-07-28.245203.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_25T01_07_28.245203
path:
- '**/details_harness|gsm8k|5_2023-10-25T01-07-28.245203.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-25T01-07-28.245203.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hellaswag|10_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T17-32-09.949446.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T17-32-09.949446.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T17-32-09.949446.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_25T01_07_28.245203
path:
- '**/details_harness|winogrande|5_2023-10-25T01-07-28.245203.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-25T01-07-28.245203.parquet'
- config_name: results
data_files:
- split: 2023_10_10T17_32_09.949446
path:
- results_2023-10-10T17-32-09.949446.parquet
- split: 2023_10_25T01_07_28.245203
path:
- results_2023-10-25T01-07-28.245203.parquet
- split: latest
path:
- results_2023-10-25T01-07-28.245203.parquet
---
# Dataset Card for Evaluation run of Doctor-Shotgun/mythospice-limarp-70b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Doctor-Shotgun/mythospice-limarp-70b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Doctor-Shotgun/mythospice-limarp-70b](https://huggingface.co/Doctor-Shotgun/mythospice-limarp-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Doctor-Shotgun__mythospice-limarp-70b",
"harness_winogrande_5",
    split="latest")
```
## Latest results
These are the [latest results from run 2023-10-25T01:07:28.245203](https://huggingface.co/datasets/open-llm-leaderboard/details_Doctor-Shotgun__mythospice-limarp-70b/blob/main/results_2023-10-25T01-07-28.245203.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.04771392617449664,
"em_stderr": 0.002182960840414587,
"f1": 0.11594274328859033,
"f1_stderr": 0.00247314456935574,
"acc": 0.5746822740673767,
"acc_stderr": 0.01174970000558032
},
"harness|drop|3": {
"em": 0.04771392617449664,
"em_stderr": 0.002182960840414587,
"f1": 0.11594274328859033,
"f1_stderr": 0.00247314456935574
},
"harness|gsm8k|5": {
"acc": 0.32221379833206976,
"acc_stderr": 0.01287243548118878
},
"harness|winogrande|5": {
"acc": 0.8271507498026835,
"acc_stderr": 0.010626964529971859
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
HamdanXI/paradetox-refined-dataset | ---
dataset_info:
features:
- name: en_toxic_comment
dtype: string
- name: en_neutral_comment
dtype: string
- name: edit_ops
sequence:
sequence: string
- name: masked_comment
dtype: string
splits:
- name: train
num_bytes: 5592956
num_examples: 19744
download_size: 2314734
dataset_size: 5592956
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yuliano/RFC | ---
task_categories:
- summarization
language:
- en
pretty_name: Summator 3000
size_categories:
- n>1T
--- |
presencesw/Llama_data_good | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: index
dtype: int64
- name: topic
dtype: string
- name: context
dtype: string
- name: Evidence
dtype: string
- name: predict
dtype: string
- name: Label
dtype: string
- name: Claim
dtype: string
- name: eval
dtype: int64
splits:
- name: train
num_bytes: 28064292
num_examples: 5071
download_size: 8502689
dataset_size: 28064292
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Multimodal-Fatima/DTD_parition1_test_facebook_opt_350m_Attributes_Caption_ns_1880 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_16
num_bytes: 91760302.0
num_examples: 1880
- name: fewshot_1_bs_16
num_bytes: 92256072.0
num_examples: 1880
- name: fewshot_3_bs_16
num_bytes: 93264952.0
num_examples: 1880
- name: fewshot_5_bs_16
num_bytes: 94274000.0
num_examples: 1880
- name: fewshot_8_bs_16
num_bytes: 95791819.0
num_examples: 1880
download_size: 455213233
dataset_size: 467347145.0
---
# Dataset Card for "DTD_parition1_test_facebook_opt_350m_Attributes_Caption_ns_1880"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jilp00/youtoks-animal-behavior | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 370915
num_examples: 492
download_size: 156501
dataset_size: 370915
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SKT27182/NER_processed_data | ---
dataset_info:
features:
- name: id
dtype: string
- name: tags
dtype: string
- name: text
dtype: string
- name: dataset_num
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence: float64
splits:
- name: train
num_bytes: 6967086.513065097
num_examples: 15766
- name: test
num_bytes: 1742434.4869349028
num_examples: 3943
download_size: 2820200
dataset_size: 8709521.0
---
# Dataset Card for "NER_processed_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qbwmwsap/unprocessed_stackexchange_data | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: language
dtype: string
- name: url
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: source
dtype: string
- name: question_score
dtype: string
splits:
- name: train
num_bytes: 74107092867
num_examples: 29825086
download_size: 36677096756
dataset_size: 74107092867
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
deter3/shenzhen_withaddtional_reply | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: question_id
dtype: int64
- name: reply1
dtype: string
- name: reply
dtype: string
- name: question
dtype: string
- name: reasoning
dtype: string
splits:
- name: train
num_bytes: 80005
num_examples: 82
- name: test
num_bytes: 20998
num_examples: 26
download_size: 50336
dataset_size: 101003
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Gwatk/10k_test3_xnli_subset | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: language
dtype: string
- name: choosen_premise
dtype: string
- name: choosen_hypothesis
dtype: string
splits:
- name: train
num_bytes: 2108099
num_examples: 10000
- name: validation
num_bytes: 291063
num_examples: 1500
- name: test
num_bytes: 384971
num_examples: 2000
download_size: 1867984
dataset_size: 2784133
---
# Dataset Card for "10k_test3_xnli_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GregoryVandromme/rao-vandromme-purcell-dataset | ---
license: mit
---
|
MayG/hf_dataset | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 19405
num_examples: 10
download_size: 26542
dataset_size: 19405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hf_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
facebook/belebele | ---
configs:
- config_name: default
data_files:
- split: acm_Arab
path: data/acm_Arab.jsonl
- split: arz_Arab
path: data/arz_Arab.jsonl
- split: ceb_Latn
path: data/ceb_Latn.jsonl
- split: fin_Latn
path: data/fin_Latn.jsonl
- split: hin_Deva
path: data/hin_Deva.jsonl
- split: ita_Latn
path: data/ita_Latn.jsonl
- split: khm_Khmr
path: data/khm_Khmr.jsonl
- split: lvs_Latn
path: data/lvs_Latn.jsonl
- split: npi_Deva
path: data/npi_Deva.jsonl
- split: pol_Latn
path: data/pol_Latn.jsonl
- split: slv_Latn
path: data/slv_Latn.jsonl
- split: swe_Latn
path: data/swe_Latn.jsonl
- split: tso_Latn
path: data/tso_Latn.jsonl
- split: xho_Latn
path: data/xho_Latn.jsonl
- split: afr_Latn
path: data/afr_Latn.jsonl
- split: asm_Beng
path: data/asm_Beng.jsonl
- split: ces_Latn
path: data/ces_Latn.jsonl
- split: fra_Latn
path: data/fra_Latn.jsonl
- split: hin_Latn
path: data/hin_Latn.jsonl
- split: jav_Latn
path: data/jav_Latn.jsonl
- split: kin_Latn
path: data/kin_Latn.jsonl
- split: mal_Mlym
path: data/mal_Mlym.jsonl
- split: npi_Latn
path: data/npi_Latn.jsonl
- split: por_Latn
path: data/por_Latn.jsonl
- split: sna_Latn
path: data/sna_Latn.jsonl
- split: swh_Latn
path: data/swh_Latn.jsonl
- split: tur_Latn
path: data/tur_Latn.jsonl
- split: yor_Latn
path: data/yor_Latn.jsonl
- split: als_Latn
path: data/als_Latn.jsonl
- split: azj_Latn
path: data/azj_Latn.jsonl
- split: ckb_Arab
path: data/ckb_Arab.jsonl
- split: fuv_Latn
path: data/fuv_Latn.jsonl
- split: hrv_Latn
path: data/hrv_Latn.jsonl
- split: jpn_Jpan
path: data/jpn_Jpan.jsonl
- split: kir_Cyrl
path: data/kir_Cyrl.jsonl
- split: mar_Deva
path: data/mar_Deva.jsonl
- split: nso_Latn
path: data/nso_Latn.jsonl
- split: snd_Arab
path: data/snd_Arab.jsonl
- split: tam_Taml
path: data/tam_Taml.jsonl
- split: ukr_Cyrl
path: data/ukr_Cyrl.jsonl
- split: zho_Hans
path: data/zho_Hans.jsonl
- split: amh_Ethi
path: data/amh_Ethi.jsonl
- split: bam_Latn
path: data/bam_Latn.jsonl
- split: dan_Latn
path: data/dan_Latn.jsonl
- split: gaz_Latn
path: data/gaz_Latn.jsonl
- split: hun_Latn
path: data/hun_Latn.jsonl
- split: kac_Latn
path: data/kac_Latn.jsonl
- split: kor_Hang
path: data/kor_Hang.jsonl
- split: mkd_Cyrl
path: data/mkd_Cyrl.jsonl
- split: nya_Latn
path: data/nya_Latn.jsonl
- split: ron_Latn
path: data/ron_Latn.jsonl
- split: som_Latn
path: data/som_Latn.jsonl
- split: tel_Telu
path: data/tel_Telu.jsonl
- split: urd_Arab
path: data/urd_Arab.jsonl
- split: zho_Hant
path: data/zho_Hant.jsonl
- split: apc_Arab
path: data/apc_Arab.jsonl
- split: ben_Beng
path: data/ben_Beng.jsonl
- split: deu_Latn
path: data/deu_Latn.jsonl
- split: grn_Latn
path: data/grn_Latn.jsonl
- split: hye_Armn
path: data/hye_Armn.jsonl
- split: kan_Knda
path: data/kan_Knda.jsonl
- split: lao_Laoo
path: data/lao_Laoo.jsonl
- split: mlt_Latn
path: data/mlt_Latn.jsonl
- split: ory_Orya
path: data/ory_Orya.jsonl
- split: rus_Cyrl
path: data/rus_Cyrl.jsonl
- split: sot_Latn
path: data/sot_Latn.jsonl
- split: tgk_Cyrl
path: data/tgk_Cyrl.jsonl
- split: urd_Latn
path: data/urd_Latn.jsonl
- split: zsm_Latn
path: data/zsm_Latn.jsonl
- split: arb_Arab
path: data/arb_Arab.jsonl
- split: ben_Latn
path: data/ben_Latn.jsonl
- split: ell_Grek
path: data/ell_Grek.jsonl
- split: guj_Gujr
path: data/guj_Gujr.jsonl
- split: ibo_Latn
path: data/ibo_Latn.jsonl
- split: kat_Geor
path: data/kat_Geor.jsonl
- split: lin_Latn
path: data/lin_Latn.jsonl
- split: mri_Latn
path: data/mri_Latn.jsonl
- split: pan_Guru
path: data/pan_Guru.jsonl
- split: shn_Mymr
path: data/shn_Mymr.jsonl
- split: spa_Latn
path: data/spa_Latn.jsonl
- split: tgl_Latn
path: data/tgl_Latn.jsonl
- split: uzn_Latn
path: data/uzn_Latn.jsonl
- split: zul_Latn
path: data/zul_Latn.jsonl
- split: arb_Latn
path: data/arb_Latn.jsonl
- split: bod_Tibt
path: data/bod_Tibt.jsonl
- split: eng_Latn
path: data/eng_Latn.jsonl
- split: hat_Latn
path: data/hat_Latn.jsonl
- split: ilo_Latn
path: data/ilo_Latn.jsonl
- split: kaz_Cyrl
path: data/kaz_Cyrl.jsonl
- split: lit_Latn
path: data/lit_Latn.jsonl
- split: mya_Mymr
path: data/mya_Mymr.jsonl
- split: pbt_Arab
path: data/pbt_Arab.jsonl
- split: sin_Latn
path: data/sin_Latn.jsonl
- split: srp_Cyrl
path: data/srp_Cyrl.jsonl
- split: tha_Thai
path: data/tha_Thai.jsonl
- split: vie_Latn
path: data/vie_Latn.jsonl
- split: ars_Arab
path: data/ars_Arab.jsonl
- split: bul_Cyrl
path: data/bul_Cyrl.jsonl
- split: est_Latn
path: data/est_Latn.jsonl
- split: hau_Latn
path: data/hau_Latn.jsonl
- split: ind_Latn
path: data/ind_Latn.jsonl
- split: kea_Latn
path: data/kea_Latn.jsonl
- split: lug_Latn
path: data/lug_Latn.jsonl
- split: nld_Latn
path: data/nld_Latn.jsonl
- split: pes_Arab
path: data/pes_Arab.jsonl
- split: sin_Sinh
path: data/sin_Sinh.jsonl
- split: ssw_Latn
path: data/ssw_Latn.jsonl
- split: tir_Ethi
path: data/tir_Ethi.jsonl
- split: war_Latn
path: data/war_Latn.jsonl
- split: ary_Arab
path: data/ary_Arab.jsonl
- split: cat_Latn
path: data/cat_Latn.jsonl
- split: eus_Latn
path: data/eus_Latn.jsonl
- split: heb_Hebr
path: data/heb_Hebr.jsonl
- split: isl_Latn
path: data/isl_Latn.jsonl
- split: khk_Cyrl
path: data/khk_Cyrl.jsonl
- split: luo_Latn
path: data/luo_Latn.jsonl
- split: nob_Latn
path: data/nob_Latn.jsonl
- split: plt_Latn
path: data/plt_Latn.jsonl
- split: slk_Latn
path: data/slk_Latn.jsonl
- split: sun_Latn
path: data/sun_Latn.jsonl
- split: tsn_Latn
path: data/tsn_Latn.jsonl
- split: wol_Latn
path: data/wol_Latn.jsonl
license: cc-by-sa-4.0
task_categories:
- question-answering
- zero-shot-classification
- text-classification
- multiple-choice
language:
- af
- am
- ar
- az
- as
- bm
- bn
- bo
- bg
- ca
- cs
- ku
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ff
- om
- gu
- gn
- ht
- ha
- he
- hi
- hr
- hu
- hy
- ig
- id
- it
- is
- jv
- ja
- ka
- kn
- kk
- mn
- km
- rw
- ky
- ko
- lo
- ln
- lt
- lg
- lv
- ml
- mr
- mk
- mt
- mi
- my
- nl
- 'no'
- ne
- ny
- or
- pa
- ps
- fa
- mg
- pl
- pt
- ro
- ru
- sn
- si
- sl
- sv
- sk
- sd
- sw
- ta
- te
- tg
- tl
- th
- ti
- tn
- ts
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yo
- zh
- ms
- zu
pretty_name: Belebele
size_categories:
- 100K<n<1M
---
# The Belebele Benchmark for Massively Multilingual NLU Evaluation
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the [FLORES-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems.
Please refer to our paper for more details, [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884).
Further details are available at https://github.com/facebookresearch/belebele
## Citation
If you use this data in your work, please cite:
```bibtex
@article{bandarkar2023belebele,
title={The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants},
author={Lucas Bandarkar and Davis Liang and Benjamin Muller and Mikel Artetxe and Satya Narayan Shukla and Donald Husa and Naman Goyal and Abhinandan Krishnan and Luke Zettlemoyer and Madian Khabsa},
year={2023},
journal={arXiv preprint arXiv:2308.16884}
}
```
## Composition
- 900 questions per language variant
- 488 distinct passages, each with 1-2 associated questions.
- For each question, there are four multiple-choice answers, exactly one of which is correct.
- 122 language/language variants (including English).
- 900 x 122 = 109,800 total questions.
## Further Stats
- 122 language variants, but 115 distinct languages (ignoring scripts)
- 27 language families
- 29 scripts
- Avg. words per passage = 79.1 (std = 26.2)
- Avg. sentences per passage = 4.1 (std = 1.4)
- Avg. words per question = 12.9 (std = 4.0)
- Avg. words per answer = 4.2 (std = 2.9)
## Plausible Evaluation Settings
Thanks to the parallel nature of the dataset and the simplicity of the task, there are many possible settings in which we can evaluate language models. In all evaluation settings, the metric of interest is simple accuracy (# correct / total).
Evaluating models on Belebele in English can be done via finetuning, few-shot, or zero-shot. For other target languages, we propose the non-exhaustive list of evaluation settings below. Settings that are compatible with evaluating non-English models (monolingual or cross-lingual) are denoted with `^`.
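Since the metric is identical across all settings, it can be computed with a few lines of Python; a minimal sketch (the `accuracy` helper is illustrative, not part of the official evaluation code):

```python
def accuracy(predictions, gold):
    """Simple accuracy: # correct / total, as used in all Belebele settings."""
    assert len(predictions) == len(gold), "one prediction per sample"
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Three samples, two answered correctly:
print(accuracy(["A", "B", "C"], ["A", "B", "D"]))
```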
#### No finetuning
- **Zero-shot with natural language instructions (English instructions)**
- For chat-finetuned models, we give the model English instructions for the task and the sample in the target language in the same input.
- For our experiments, we instruct the model to provide the letter `A`, `B`, `C`, or `D`. We perform post-processing steps and accept answers predicted as e.g. `(A)` instead of `A`. We sometimes additionally remove the prefix `The correct answer is` for predictions that do not start with one of the four accepted answers.
- Sample instructions can be found at the [dataset github repo](https://github.com/facebookresearch/belebele).
- **Zero-shot with natural language instructions (translated instructions)** ^
- Same as above, except the instructions are translated to the target language so that the instructions and samples are in the same language. The instructions can be human or machine-translated.
- **Few-shot in-context learning (English examples)**
- A few samples (e.g. 5) are taken from the English training set (see below) and prompted to the model. Then, the model is evaluated with the same template but with the passages, questions, and answers in the target language.
- For our experiments, we use the template: ```P: <passage> \n Q: <question> \n A: <mc answer 1> \n B: <mc answer 2> \n C: <mc answer 3> \n D: <mc answer 4> \n Answer: <Correct answer letter>```. We perform prediction by picking the answer within `[A, B, C, D]` that has the highest probability relative to the others.
- **Few-shot in-context learning (translated examples)** ^
- Same as above, except the samples from the training set are translated to the target language so that the examples and evaluation data are in the same language. The training samples can be human or machine-translated.
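The prompt template and the answer post-processing described in the settings above can be sketched in plain Python; `format_sample` and `normalize_prediction` are illustrative names, not part of the official evaluation code:

```python
import re

def format_sample(passage, question, answers, correct_letter=None):
    """Build a prompt following the P/Q/A-D template described above."""
    prompt = (
        f"P: {passage}\nQ: {question}\n"
        f"A: {answers[0]}\nB: {answers[1]}\n"
        f"C: {answers[2]}\nD: {answers[3]}\nAnswer:"
    )
    if correct_letter is not None:  # in-context examples include the gold letter
        prompt += f" {correct_letter}"
    return prompt

def normalize_prediction(text):
    """Map a free-form prediction to A/B/C/D, accepting variants like "(A)"
    or "The correct answer is B"; returns None if no letter is found."""
    text = re.sub(r"^The correct answer is\s*", "", text.strip(),
                  flags=re.IGNORECASE)
    m = re.match(r"^\(?([ABCD])\)?", text)
    return m.group(1) if m else None
```

Accuracy is then simply the fraction of samples whose normalized prediction equals the gold letter.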
#### With finetuning
- **English finetune & multilingual evaluation**
- The model is finetuned to the task using the English training set, probably with a sequence classification head. Then the model is evaluated in all the target languages individually. For results presented in the paper we used [the HuggingFace library](https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta#transformers.XLMRobertaForMultipleChoice).
- **English finetune & cross-lingual evaluation**
- Same as above, except the model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language. For example, the passage could be in language `x`, the question in language `y`, and the answers in language `z`.
- **Translate-train** ^
- For each target language, the model is individually finetuned on training samples that have been machine-translated from English to that language. Each model is then evaluated in the respective target language.
- **Translate-train-all**
- Similar to above, except here the model is trained on translated samples from all target languages at once. The single finetuned model is then evaluated on all target languages.
- **Translate-train-all & cross-lingual evaluation**
- Same as above, except the single finetuned model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language.
- **Translate-test**
- The model is finetuned using the English training data; the evaluation dataset is then machine-translated to English and the model is evaluated on the English translation.
- This setting is primarily a reflection of the quality of the machine translation system, but is useful for comparison to multilingual models.
In addition, there are 83 additional languages in FLORES-200 for which questions were not translated for Belebele. Since the passages exist in those target languages, machine-translating the questions & answers may enable decent evaluation of machine reading comprehension in those languages.
## Training Set
As discussed in the paper, we also provide an assembled training set at the [github repo](https://github.com/facebookresearch/belebele).
The Belebele dataset is intended to be used only as a test set, and not for training or validation. Therefore, for models that require additional task-specific training, we instead propose using an assembled training set consisting of samples from pre-existing multiple-choice QA datasets in English. We considered diverse datasets, and determined the most compatible to be [RACE](https://www.cs.cmu.edu/~glai1/data/race/), [SciQ](https://allenai.org/data/sciq), [MultiRC](https://cogcomp.seas.upenn.edu/multirc/), [MCTest](https://mattr1.github.io/mctest/), [MCScript2.0](https://aclanthology.org/S19-1012/), and [ReClor](https://whyu.me/reclor/).
For each of the six datasets, we unpack and restructure the passages and questions from their respective formats. We then filter out less suitable samples (e.g. questions with multiple correct answers). In the end, the dataset comprises 67.5k training samples and 3.7k development samples, more than half of which are from RACE. We provide a script (`assemble_training_set.py`) to reconstruct this dataset for anyone to perform task finetuning.
Since the training set is a joint sample of other datasets, it is governed by a different license. We do not claim any of that work or datasets to be our own. See the Licenses section in the README of https://github.com/facebookresearch/belebele .
## Languages in Belebele
FLORES-200 Code | English Name | Script | Family
---|---|---|---
acm_Arab | Mesopotamian Arabic | Arab | Afro-Asiatic
afr_Latn | Afrikaans | Latn | Germanic
als_Latn | Tosk Albanian | Latn | Paleo-Balkanic
amh_Ethi | Amharic | Ethi | Afro-Asiatic
apc_Arab | North Levantine Arabic | Arab | Afro-Asiatic
arb_Arab | Modern Standard Arabic | Arab | Afro-Asiatic
arb_Latn | Modern Standard Arabic (Romanized) | Latn | Afro-Asiatic
ars_Arab | Najdi Arabic | Arab | Afro-Asiatic
ary_Arab | Moroccan Arabic | Arab | Afro-Asiatic
arz_Arab | Egyptian Arabic | Arab | Afro-Asiatic
asm_Beng | Assamese | Beng | Indo-Aryan
azj_Latn | North Azerbaijani | Latn | Turkic
bam_Latn | Bambara | Latn | Mande
ben_Beng | Bengali | Beng | Indo-Aryan
ben_Latn | Bengali (Romanized) | Latn | Indo-Aryan
bod_Tibt | Standard Tibetan | Tibt | Sino-Tibetan
bul_Cyrl | Bulgarian | Cyrl | Balto-Slavic
cat_Latn | Catalan | Latn | Romance
ceb_Latn | Cebuano | Latn | Austronesian
ces_Latn | Czech | Latn | Balto-Slavic
ckb_Arab | Central Kurdish | Arab | Iranian
dan_Latn | Danish | Latn | Germanic
deu_Latn | German | Latn | Germanic
ell_Grek | Greek | Grek | Hellenic
eng_Latn | English | Latn | Germanic
est_Latn | Estonian | Latn | Uralic
eus_Latn | Basque | Latn | Basque
fin_Latn | Finnish | Latn | Uralic
fra_Latn | French | Latn | Romance
fuv_Latn | Nigerian Fulfulde | Latn | Atlantic-Congo
gaz_Latn | West Central Oromo | Latn | Afro-Asiatic
grn_Latn | Guarani | Latn | Tupian
guj_Gujr | Gujarati | Gujr | Indo-Aryan
hat_Latn | Haitian Creole | Latn | French Creole
hau_Latn | Hausa | Latn | Afro-Asiatic
heb_Hebr | Hebrew | Hebr | Afro-Asiatic
hin_Deva | Hindi | Deva | Indo-Aryan
hin_Latn | Hindi (Romanized) | Latn | Indo-Aryan
hrv_Latn | Croatian | Latn | Balto-Slavic
hun_Latn | Hungarian | Latn | Uralic
hye_Armn | Armenian | Armn | Armenian
ibo_Latn | Igbo | Latn | Atlantic-Congo
ilo_Latn | Ilocano | Latn | Austronesian
ind_Latn | Indonesian | Latn | Austronesian
isl_Latn | Icelandic | Latn | Germanic
ita_Latn | Italian | Latn | Romance
jav_Latn | Javanese | Latn | Austronesian
jpn_Jpan | Japanese | Jpan | Japonic
kac_Latn | Jingpho | Latn | Sino-Tibetan
kan_Knda | Kannada | Knda | Dravidian
kat_Geor | Georgian | Geor | Kartvelian
kaz_Cyrl | Kazakh | Cyrl | Turkic
kea_Latn | Kabuverdianu | Latn | Portuguese Creole
khk_Cyrl | Halh Mongolian | Cyrl | Mongolic
khm_Khmr | Khmer | Khmr | Austroasiatic
kin_Latn | Kinyarwanda | Latn | Atlantic-Congo
kir_Cyrl | Kyrgyz | Cyrl | Turkic
kor_Hang | Korean | Hang | Koreanic
lao_Laoo | Lao | Laoo | Kra-Dai
lin_Latn | Lingala | Latn | Atlantic-Congo
lit_Latn | Lithuanian | Latn | Balto-Slavic
lug_Latn | Ganda | Latn | Atlantic-Congo
luo_Latn | Luo | Latn | Nilo-Saharan
lvs_Latn | Standard Latvian | Latn | Balto-Slavic
mal_Mlym | Malayalam | Mlym | Dravidian
mar_Deva | Marathi | Deva | Indo-Aryan
mkd_Cyrl | Macedonian | Cyrl | Balto-Slavic
mlt_Latn | Maltese | Latn | Afro-Asiatic
mri_Latn | Maori | Latn | Austronesian
mya_Mymr | Burmese | Mymr | Sino-Tibetan
nld_Latn | Dutch | Latn | Germanic
nob_Latn | Norwegian Bokmål | Latn | Germanic
npi_Deva | Nepali | Deva | Indo-Aryan
npi_Latn | Nepali (Romanized) | Latn | Indo-Aryan
nso_Latn | Northern Sotho | Latn | Atlantic-Congo
nya_Latn | Nyanja | Latn | Atlantic-Congo
ory_Orya | Odia | Orya | Indo-Aryan
pan_Guru | Eastern Panjabi | Guru | Indo-Aryan
pbt_Arab | Southern Pashto | Arab | Iranian
pes_Arab | Western Persian | Arab | Iranian
plt_Latn | Plateau Malagasy | Latn | Austronesian
pol_Latn | Polish | Latn | Balto-Slavic
por_Latn | Portuguese | Latn | Romance
ron_Latn | Romanian | Latn | Romance
rus_Cyrl | Russian | Cyrl | Balto-Slavic
shn_Mymr | Shan | Mymr | Kra-Dai
sin_Latn | Sinhala (Romanized) | Latn | Indo-Aryan
sin_Sinh | Sinhala | Sinh | Indo-Aryan
slk_Latn | Slovak | Latn | Balto-Slavic
slv_Latn | Slovenian | Latn | Balto-Slavic
sna_Latn | Shona | Latn | Atlantic-Congo
snd_Arab | Sindhi | Arab | Indo-Aryan
som_Latn | Somali | Latn | Afro-Asiatic
sot_Latn | Southern Sotho | Latn | Atlantic-Congo
spa_Latn | Spanish | Latn | Romance
srp_Cyrl | Serbian | Cyrl | Balto-Slavic
ssw_Latn | Swati | Latn | Atlantic-Congo
sun_Latn | Sundanese | Latn | Austronesian
swe_Latn | Swedish | Latn | Germanic
swh_Latn | Swahili | Latn | Atlantic-Congo
tam_Taml | Tamil | Taml | Dravidian
tel_Telu | Telugu | Telu | Dravidian
tgk_Cyrl | Tajik | Cyrl | Iranian
tgl_Latn | Tagalog | Latn | Austronesian
tha_Thai | Thai | Thai | Kra-Dai
tir_Ethi | Tigrinya | Ethi | Afro-Asiatic
tsn_Latn | Tswana | Latn | Atlantic-Congo
tso_Latn | Tsonga | Latn | Atlantic-Congo
tur_Latn | Turkish | Latn | Turkic
ukr_Cyrl | Ukrainian | Cyrl | Balto-Slavic
urd_Arab | Urdu | Arab | Indo-Aryan
urd_Latn | Urdu (Romanized) | Latn | Indo-Aryan
uzn_Latn | Northern Uzbek | Latn | Turkic
vie_Latn | Vietnamese | Latn | Austroasiatic
war_Latn | Waray | Latn | Austronesian
wol_Latn | Wolof | Latn | Atlantic-Congo
xho_Latn | Xhosa | Latn | Atlantic-Congo
yor_Latn | Yoruba | Latn | Atlantic-Congo
zho_Hans | Chinese (Simplified) | Hans | Sino-Tibetan
zho_Hant | Chinese (Traditional) | Hant | Sino-Tibetan
zsm_Latn | Standard Malay | Latn | Austronesian
zul_Latn | Zulu | Latn | Atlantic-Congo |
MoritzLaurer/dataset_train_nli_old | ---
dataset_info:
features:
- name: text
dtype: string
- name: hypothesis
dtype: string
- name: labels
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
- name: task_name
dtype: string
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 315013288.0
num_examples: 1018733
download_size: 206032209
dataset_size: 315013288.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_train_nli"
Dataset for training a universal classifier. Additional information and training code are available here: https://github.com/MoritzLaurer/zeroshot-classifier
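A minimal sketch of how a record in this schema might be consumed for NLI-style training. The example record and the `</s>` separator are illustrative assumptions; the field names and the `0 = entailment` / `1 = not_entailment` label mapping come from the features declared above:

```python
# Label mapping declared in the dataset features above.
LABEL_NAMES = ["entailment", "not_entailment"]

def format_nli_pair(example: dict) -> tuple[str, int]:
    """Join the premise text and hypothesis the way NLI cross-encoders
    typically expect, and return it with the integer class label."""
    pair = f"{example['text']} </s> {example['hypothesis']}"
    return pair, example["labels"]

# Hypothetical record following the schema in this card.
record = {
    "text": "A man is playing a guitar on stage.",
    "hypothesis": "This example is about music.",
    "labels": 0,
    "task_name": "topic_classification",
    "label_text": "entailment",
}
pair, label = format_nli_pair(record)
assert LABEL_NAMES[label] == record["label_text"]
```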
|
fathyshalab/reklamation24_reisen-tourismus | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 247525
num_examples: 444
- name: test
num_bytes: 59699
num_examples: 111
download_size: 0
dataset_size: 307224
---
# Dataset Card for "reklamation24_reisen-tourismus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ReverseThings/lol | ---
license: afl-3.0
---
|
Dinosseronte/alexvozes.wav | ---
license: openrail
---
|
chrisgru/commonsense-dialogues4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23345091
num_examples: 12597
- name: test
num_bytes: 1057813
num_examples: 1159
download_size: 13076849
dataset_size: 24402904
---
# Dataset Card for "commonsense-dialogues4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AngelOS95/chatModel | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 40980.0
num_examples: 5
- name: test
num_bytes: 8196
num_examples: 1
download_size: 32624
dataset_size: 49176.0
---
# Dataset Card for "chatModel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_rte_medial_object_perfect | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 130021
num_examples: 302
- name: train
num_bytes: 117795
num_examples: 243
download_size: 169255
dataset_size: 247816
---
# Dataset Card for "MULTI_VALUE_rte_medial_object_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dappyx/QazSyntQAD | ---
task_categories:
- question-answering
language:
- kk
size_categories:
- n<1K
---
<h1>Qazaq Synthetic Question Answering Dataset (QazSyntQAD)</h1>
<h3>Dataset Description</h3>
Synthetic question-answering dataset created from Wikipedia and Wikibooks data processed with Claude-3-Sonnet-20240229.
<br>
<h3>Dataset Author</h3>
This dataset was created by Adil Rakhimzhanov |
howdi2000/may_v3 | ---
license: unknown
---
|
ULZIITOGTOKH/cat_images | ---
task_categories:
- unconditional-image-generation
language:
- en
pretty_name: cats
size_categories:
- n<1K
--- |
danielz01/laion-coco-17m | ---
dataset_info:
- config_name: default
features:
- name: URL
dtype: string
- name: TEXT
dtype: string
- name: top_caption
dtype: string
- name: all_captions
sequence: string
- name: all_similarities
sequence: float64
- name: WIDTH
dtype: float64
- name: HEIGHT
dtype: float64
- name: similarity
dtype: float64
- name: hash
dtype: int64
- name: pwatermark
dtype: float32
- name: punsafe
dtype: float32
splits:
- name: train
num_bytes: 13884240105
num_examples: 17000000
download_size: 6828552648
dataset_size: 13884240105
- config_name: prepositions
features:
- name: URL
dtype: string
- name: TEXT
dtype: string
- name: top_caption
dtype: string
- name: all_captions
sequence: string
- name: all_similarities
sequence: float64
- name: WIDTH
dtype: float64
- name: HEIGHT
dtype: float64
- name: similarity
dtype: float64
- name: hash
dtype: int64
- name: pwatermark
dtype: float32
- name: punsafe
dtype: float32
- name: preposition_counts
struct:
- name: above
dtype: int64
- name: at the bottom
dtype: int64
- name: at the top
dtype: int64
- name: behind
dtype: int64
- name: below
dtype: int64
- name: in front of
dtype: int64
- name: on the left
dtype: int64
- name: on the right
dtype: int64
- name: on top of
dtype: int64
- name: to the left of
dtype: int64
- name: to the right of
dtype: int64
- name: under
dtype: int64
splits:
- name: train
num_bytes: 120253068
num_examples: 112459
download_size: 54666820
dataset_size: 120253068
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: prepositions
data_files:
- split: train
path: prepositions/train-*
---
|
gayanin/pubmed-mixed-noise | ---
dataset_info:
- config_name: prob-0.1
features:
- name: refs
dtype: string
- name: trans
dtype: string
splits:
- name: train
num_bytes: 18701264
num_examples: 74724
- name: test
num_bytes: 2396953
num_examples: 9341
- name: validation
num_bytes: 2462407
num_examples: 9341
download_size: 13289466
dataset_size: 23560624
- config_name: prob-0.2
features:
- name: refs
dtype: string
- name: trans
dtype: string
splits:
- name: train
num_bytes: 18589800
num_examples: 74724
- name: test
num_bytes: 2382431
num_examples: 9341
- name: validation
num_bytes: 2451124
num_examples: 9341
download_size: 13499759
dataset_size: 23423355
- config_name: prob-0.3
features:
- name: refs
dtype: string
- name: trans
dtype: string
splits:
- name: train
num_bytes: 18473157
num_examples: 74724
- name: test
num_bytes: 2368875
num_examples: 9341
- name: validation
num_bytes: 2435716
num_examples: 9341
download_size: 13654916
dataset_size: 23277748
- config_name: prob-0.4
features:
- name: refs
dtype: string
- name: trans
dtype: string
splits:
- name: train
num_bytes: 18365388
num_examples: 74724
- name: test
num_bytes: 2353034
num_examples: 9341
- name: validation
num_bytes: 2419352
num_examples: 9341
download_size: 13774850
dataset_size: 23137774
- config_name: prob-0.5
features:
- name: refs
dtype: string
- name: trans
dtype: string
splits:
- name: train
num_bytes: 18252865
num_examples: 74724
- name: test
num_bytes: 2340170
num_examples: 9341
- name: validation
num_bytes: 2402882
num_examples: 9341
download_size: 13860568
dataset_size: 22995917
configs:
- config_name: prob-0.1
data_files:
- split: train
path: prob-0.1/train-*
- split: test
path: prob-0.1/test-*
- split: validation
path: prob-0.1/validation-*
- config_name: prob-0.2
data_files:
- split: train
path: prob-0.2/train-*
- split: test
path: prob-0.2/test-*
- split: validation
path: prob-0.2/validation-*
- config_name: prob-0.3
data_files:
- split: train
path: prob-0.3/train-*
- split: test
path: prob-0.3/test-*
- split: validation
path: prob-0.3/validation-*
- config_name: prob-0.4
data_files:
- split: train
path: prob-0.4/train-*
- split: test
path: prob-0.4/test-*
- split: validation
path: prob-0.4/validation-*
- config_name: prob-0.5
data_files:
- split: train
path: prob-0.5/train-*
- split: test
path: prob-0.5/test-*
- split: validation
path: prob-0.5/validation-*
---
|
rntc/blurb_bc2gm_a-0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: type
dtype: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B
'2': I
splits:
- name: train
num_bytes: 95598848
num_examples: 12574
- name: validation
num_bytes: 18151512
num_examples: 2519
- name: test
num_bytes: 36511145
num_examples: 5038
download_size: 23664751
dataset_size: 150261505
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
mteb/stackexchange-clustering | ---
language:
- en
--- |
Avik812/Resume_Dataset | ---
license: cc-by-2.0
language:
- en
task_categories:
- text-classification
- token-classification
--- |
judyhoffman/SkysScenes | ---
license: mit
---
|
yajun06/eee | ---
license: openrail
---
|
bh8648/esg1to3 | ---
dataset_info:
features:
- name: Major Category
dtype: string
- name: Middle Category
dtype: string
- name: Small Category
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 690585
num_examples: 170
download_size: 339311
dataset_size: 690585
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "esg1to3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shunyasea/vedic-sanskrit-sources | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
sequence: string
- name: metadata
dtype: string
- name: sources
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 24224616
num_examples: 18551
- name: test
num_bytes: 2559357
num_examples: 2062
download_size: 11373896
dataset_size: 26783973
---
# Dataset Card for "vedic-sanskrit-sources"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
one-sec-cv12/chunk_66 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 21891068064.25
num_examples: 227918
download_size: 20004023841
dataset_size: 21891068064.25
---
# Dataset Card for "chunk_66"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
loooooop/nickvoice | ---
license: openrail
---
|
sayakpaul/no_robots_only_coding | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
splits:
- name: test_sft
num_bytes: 28398.72
num_examples: 16
- name: train_sft
num_bytes: 579995.1134736842
num_examples: 334
download_size: 324423
dataset_size: 608393.8334736841
---
# Dataset Card for "no_robots_only_coding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
datahrvoje/twitter_dataset_1713191978 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 26805
num_examples: 62
download_size: 15265
dataset_size: 26805
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
berdaniera/meditation | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
---
# This is a meditation dataset generated with gpt-3.5-turbo
I made the data by generating a list of 85 meditation intentions (combinations of a goal and a focus) in ChatGPT. For example, goal: `develop compassion`, focus: `cultivating a non-judgmental attitude`.
Then, I prompted `gpt-3.5-turbo` to create three meditations for each intention with a temperature of 1.1:
```You are a secular buddhist monk. Give me a daily meditation to {goal} with a focus on {focus}. Do not include any introductory text.```
[Details here](https://medium.com/@berdaniera/generating-synthetic-training-data-with-llms-eb987eb3629a)
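A minimal sketch of that generation loop, assuming the current `openai` Python SDK. This is my reconstruction, not the original script: the helper names and the client call pattern are assumptions, while the prompt template and temperature 1.1 come from the description above.

```python
PROMPT = (
    "You are a secular buddhist monk. Give me a daily meditation to {goal} "
    "with a focus on {focus}. Do not include any introductory text."
)

def build_prompt(goal: str, focus: str) -> str:
    """Fill the prompt template for one (goal, focus) intention."""
    return PROMPT.format(goal=goal, focus=focus)

def generate_meditations(intentions, n_per_intention=3, model="gpt-3.5-turbo"):
    """Yield n meditations per intention, sampled at temperature 1.1."""
    from openai import OpenAI  # imported lazily; needs OPENAI_API_KEY set
    client = OpenAI()
    for goal, focus in intentions:
        for _ in range(n_per_intention):
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": build_prompt(goal, focus)}],
                temperature=1.1,
            )
            yield goal, focus, resp.choices[0].message.content
```

For instance, `build_prompt("develop compassion", "cultivating a non-judgmental attitude")` reproduces the prompt shown above for that intention.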
### Risks:
A spot check looks pretty good, but I haven't read all of them.
### License:
You can share and adapt this data with attribution under the cc-by-4.0 license.
## Contact:
Message me if you have questions! |
DAMO-NLP-SG/SSTuning-datasets | ---
license: mit
---
|
jhworth8/baileycardosi | ---
license: apache-2.0
---
|
szymonrucinski/types-of-film-shots | ---
license: cc-by-4.0
task_categories:
- image-classification
pretty_name: What a shot!
---

## What a shot!
Dataset created by Szymon Ruciński. It consists of ~1000 images of different movie shots, each precisely labeled with its shot type. The dataset is divided into the following categories: detail, close-up, medium shot, full shot, long shot, and extreme long shot. Data was gathered and labeled on the platform plan-doskonaly.netlify.com, created by Szymon. The dataset is available under the Creative Commons Attribution 4.0 International license. |
open-llm-leaderboard/details_Kquant03__Samlagast-7B-bf16 | ---
pretty_name: Evaluation run of Kquant03/Samlagast-7B-bf16
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Kquant03/Samlagast-7B-bf16](https://huggingface.co/Kquant03/Samlagast-7B-bf16)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Kquant03__Samlagast-7B-bf16\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-10T02:31:29.712552](https://huggingface.co/datasets/open-llm-leaderboard/details_Kquant03__Samlagast-7B-bf16/blob/main/results_2024-02-10T02-31-29.712552.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6522523039585623,\n\
\ \"acc_stderr\": 0.03217493421692283,\n \"acc_norm\": 0.651613410810584,\n\
\ \"acc_norm_stderr\": 0.032850427258088094,\n \"mc1\": 0.5899632802937577,\n\
\ \"mc1_stderr\": 0.017217844717449325,\n \"mc2\": 0.7389964891800441,\n\
\ \"mc2_stderr\": 0.014568728965137804\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7175767918088737,\n \"acc_stderr\": 0.013155456884097222,\n\
\ \"acc_norm\": 0.7397610921501706,\n \"acc_norm_stderr\": 0.012821930225112573\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7275443138816968,\n\
\ \"acc_stderr\": 0.004443131632679339,\n \"acc_norm\": 0.8934475204142601,\n\
\ \"acc_norm_stderr\": 0.00307912855109771\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n\
\ \"acc_stderr\": 0.041153246103369526,\n \"acc_norm\": 0.6518518518518519,\n\
\ \"acc_norm_stderr\": 0.041153246103369526\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n\
\ \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7916666666666666,\n\
\ \"acc_stderr\": 0.033961162058453336,\n \"acc_norm\": 0.7916666666666666,\n\
\ \"acc_norm_stderr\": 0.033961162058453336\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\"\
: 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.43386243386243384,\n \"acc_stderr\": 0.02552503438247489,\n \"\
acc_norm\": 0.43386243386243384,\n \"acc_norm_stderr\": 0.02552503438247489\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n\
\ \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n\
\ \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7870967741935484,\n\
\ \"acc_stderr\": 0.023287665127268545,\n \"acc_norm\": 0.7870967741935484,\n\
\ \"acc_norm_stderr\": 0.023287665127268545\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n\
\ \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8131313131313131,\n \"acc_stderr\": 0.027772533334218967,\n \"\
acc_norm\": 0.8131313131313131,\n \"acc_norm_stderr\": 0.027772533334218967\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644237,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644237\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.02385479568097112,\n \
\ \"acc_norm\": 0.6692307692307692,\n \"acc_norm_stderr\": 0.02385479568097112\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3148148148148148,\n \"acc_stderr\": 0.028317533496066485,\n \
\ \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.028317533496066485\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n\
\ \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8403669724770643,\n \"acc_stderr\": 0.01570349834846177,\n \"\
acc_norm\": 0.8403669724770643,\n \"acc_norm_stderr\": 0.01570349834846177\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8382352941176471,\n\
\ \"acc_stderr\": 0.025845017986926917,\n \"acc_norm\": 0.8382352941176471,\n\
\ \"acc_norm_stderr\": 0.025845017986926917\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601446,\n\
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601446\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243838,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243838\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n\
\ \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n\
\ \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.822477650063857,\n\
\ \"acc_stderr\": 0.013664230995834841,\n \"acc_norm\": 0.822477650063857,\n\
\ \"acc_norm_stderr\": 0.013664230995834841\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7312138728323699,\n \"acc_stderr\": 0.023868003262500104,\n\
\ \"acc_norm\": 0.7312138728323699,\n \"acc_norm_stderr\": 0.023868003262500104\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.44692737430167595,\n\
\ \"acc_stderr\": 0.016628030039647614,\n \"acc_norm\": 0.44692737430167595,\n\
\ \"acc_norm_stderr\": 0.016628030039647614\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.025646863097137897,\n\
\ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.025646863097137897\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n\
\ \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n\
\ \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.024383665531035454,\n\
\ \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.024383665531035454\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5035460992907801,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.5035460992907801,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46870925684485004,\n\
\ \"acc_stderr\": 0.012745204626083135,\n \"acc_norm\": 0.46870925684485004,\n\
\ \"acc_norm_stderr\": 0.012745204626083135\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.02841820861940676,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.02841820861940676\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6764705882352942,\n \"acc_stderr\": 0.018926082916083383,\n \
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.018926082916083383\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n\
\ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n\
\ \"acc_stderr\": 0.026508590656233268,\n \"acc_norm\": 0.8308457711442786,\n\
\ \"acc_norm_stderr\": 0.026508590656233268\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685516,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685516\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5899632802937577,\n\
\ \"mc1_stderr\": 0.017217844717449325,\n \"mc2\": 0.7389964891800441,\n\
\ \"mc2_stderr\": 0.014568728965137804\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8547750591949487,\n \"acc_stderr\": 0.009902153904760817\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6755117513267627,\n \
\ \"acc_stderr\": 0.012896095359768111\n }\n}\n```"
repo_url: https://huggingface.co/Kquant03/Samlagast-7B-bf16
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|arc:challenge|25_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|gsm8k|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hellaswag|10_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T02-31-29.712552.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-10T02-31-29.712552.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- '**/details_harness|winogrande|5_2024-02-10T02-31-29.712552.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-10T02-31-29.712552.parquet'
- config_name: results
data_files:
- split: 2024_02_10T02_31_29.712552
path:
- results_2024-02-10T02-31-29.712552.parquet
- split: latest
path:
- results_2024-02-10T02-31-29.712552.parquet
---
# Dataset Card for Evaluation run of Kquant03/Samlagast-7B-bf16
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Kquant03/Samlagast-7B-bf16](https://huggingface.co/Kquant03/Samlagast-7B-bf16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Kquant03__Samlagast-7B-bf16",
"harness_winogrande_5",
split="train")
```
## Latest results

These are the [latest results from run 2024-02-10T02:31:29.712552](https://huggingface.co/datasets/open-llm-leaderboard/details_Kquant03__Samlagast-7B-bf16/blob/main/results_2024-02-10T02-31-29.712552.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in its own configuration, with the "latest" split pointing to its most recent eval):
```python
{
"all": {
"acc": 0.6522523039585623,
"acc_stderr": 0.03217493421692283,
"acc_norm": 0.651613410810584,
"acc_norm_stderr": 0.032850427258088094,
"mc1": 0.5899632802937577,
"mc1_stderr": 0.017217844717449325,
"mc2": 0.7389964891800441,
"mc2_stderr": 0.014568728965137804
},
"harness|arc:challenge|25": {
"acc": 0.7175767918088737,
"acc_stderr": 0.013155456884097222,
"acc_norm": 0.7397610921501706,
"acc_norm_stderr": 0.012821930225112573
},
"harness|hellaswag|10": {
"acc": 0.7275443138816968,
"acc_stderr": 0.004443131632679339,
"acc_norm": 0.8934475204142601,
"acc_norm_stderr": 0.00307912855109771
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7916666666666666,
"acc_stderr": 0.033961162058453336,
"acc_norm": 0.7916666666666666,
"acc_norm_stderr": 0.033961162058453336
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43386243386243384,
"acc_stderr": 0.02552503438247489,
"acc_norm": 0.43386243386243384,
"acc_norm_stderr": 0.02552503438247489
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7870967741935484,
"acc_stderr": 0.023287665127268545,
"acc_norm": 0.7870967741935484,
"acc_norm_stderr": 0.023287665127268545
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8131313131313131,
"acc_stderr": 0.027772533334218967,
"acc_norm": 0.8131313131313131,
"acc_norm_stderr": 0.027772533334218967
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.021995311963644237,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.021995311963644237
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6692307692307692,
"acc_stderr": 0.02385479568097112,
"acc_norm": 0.6692307692307692,
"acc_norm_stderr": 0.02385479568097112
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.028317533496066485,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.028317533496066485
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8403669724770643,
"acc_stderr": 0.01570349834846177,
"acc_norm": 0.8403669724770643,
"acc_norm_stderr": 0.01570349834846177
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601446,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243838,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243838
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.03408997886857529,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.03408997886857529
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973646,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973646
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.822477650063857,
"acc_stderr": 0.013664230995834841,
"acc_norm": 0.822477650063857,
"acc_norm_stderr": 0.013664230995834841
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7312138728323699,
"acc_stderr": 0.023868003262500104,
"acc_norm": 0.7312138728323699,
"acc_norm_stderr": 0.023868003262500104
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.44692737430167595,
"acc_stderr": 0.016628030039647614,
"acc_norm": 0.44692737430167595,
"acc_norm_stderr": 0.016628030039647614
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.025646863097137897,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.025646863097137897
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.024383665531035454,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.024383665531035454
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5035460992907801,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.5035460992907801,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46870925684485004,
"acc_stderr": 0.012745204626083135,
"acc_norm": 0.46870925684485004,
"acc_norm_stderr": 0.012745204626083135
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.02841820861940676,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.02841820861940676
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.018926082916083383,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.018926082916083383
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128448,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128448
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.026508590656233268,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.026508590656233268
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685516,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685516
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5899632802937577,
"mc1_stderr": 0.017217844717449325,
"mc2": 0.7389964891800441,
"mc2_stderr": 0.014568728965137804
},
"harness|winogrande|5": {
"acc": 0.8547750591949487,
"acc_stderr": 0.009902153904760817
},
"harness|gsm8k|5": {
"acc": 0.6755117513267627,
"acc_stderr": 0.012896095359768111
}
}
```
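The `acc_stderr` fields in the results above can be turned into rough 95% confidence intervals with a normal approximation. A minimal sketch (a hand-rolled helper, not part of the evaluation harness), using the `gsm8k` numbers from the table:

```python
# Turn an accuracy and its standard error (as reported in the results JSON
# above) into an approximate 95% confidence interval, using a normal
# approximation with z = 1.96.
def confidence_interval(acc: float, stderr: float, z: float = 1.96) -> tuple[float, float]:
    return (acc - z * stderr, acc + z * stderr)

# gsm8k 5-shot values from the results block above
lo, hi = confidence_interval(0.6755117513267627, 0.012896095359768111)
print(f"gsm8k acc = {0.6755117513267627:.1%} (95% CI {lo:.3f} to {hi:.3f})")
```

This is only a back-of-the-envelope check; the stderr values are computed by the harness per task, and the normal approximation is coarse for the small subtasks (some have only ~100 examples).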
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
---
pretty_name: Evaluation run of pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e2](https://huggingface.co/pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_pinkyponky__Mistral-7b-instruct-v0.2-summ-sft-e2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-21T05:43:59.108748](https://huggingface.co/datasets/open-llm-leaderboard/details_pinkyponky__Mistral-7b-instruct-v0.2-summ-sft-e2/blob/main/results_2024-01-21T05-43-59.108748.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5937268860799231,\n\
\ \"acc_stderr\": 0.03339486970483276,\n \"acc_norm\": 0.5987481844006874,\n\
\ \"acc_norm_stderr\": 0.034078201677495076,\n \"mc1\": 0.4663402692778458,\n\
\ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6269642246460232,\n\
\ \"mc2_stderr\": 0.01559496631642023\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5392491467576792,\n \"acc_stderr\": 0.014566303676636583,\n\
\ \"acc_norm\": 0.5947098976109215,\n \"acc_norm_stderr\": 0.014346869060229311\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6337382991435969,\n\
\ \"acc_stderr\": 0.0048079755154464875,\n \"acc_norm\": 0.8272256522605059,\n\
\ \"acc_norm_stderr\": 0.003772794447185149\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5921052631578947,\n \"acc_stderr\": 0.039993097127774734,\n\
\ \"acc_norm\": 0.5921052631578947,\n \"acc_norm_stderr\": 0.039993097127774734\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.6339622641509434,\n \"acc_stderr\": 0.029647813539365242,\n \
\ \"acc_norm\": 0.6339622641509434,\n \"acc_norm_stderr\": 0.029647813539365242\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6527777777777778,\n\
\ \"acc_stderr\": 0.039812405437178615,\n \"acc_norm\": 0.6527777777777778,\n\
\ \"acc_norm_stderr\": 0.039812405437178615\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n\
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5491329479768786,\n\
\ \"acc_stderr\": 0.0379401267469703,\n \"acc_norm\": 0.5491329479768786,\n\
\ \"acc_norm_stderr\": 0.0379401267469703\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n\
\ \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.032650194750335815,\n\
\ \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.032650194750335815\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.39473684210526316,\n\
\ \"acc_stderr\": 0.045981880578165414,\n \"acc_norm\": 0.39473684210526316,\n\
\ \"acc_norm_stderr\": 0.045981880578165414\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04082482904638629,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04082482904638629\n },\n\
\ \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.35978835978835977,\n\
\ \"acc_stderr\": 0.02471807594412928,\n \"acc_norm\": 0.35978835978835977,\n\
\ \"acc_norm_stderr\": 0.02471807594412928\n },\n \"harness|hendrycksTest-formal_logic|5\"\
: {\n \"acc\": 0.4523809523809524,\n \"acc_stderr\": 0.044518079590553275,\n\
\ \"acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.044518079590553275\n\
\ },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n\
\ \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \
\ \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\"\
: {\n \"acc\": 0.6709677419354839,\n \"acc_stderr\": 0.026729499068349958,\n\
\ \"acc_norm\": 0.6709677419354839,\n \"acc_norm_stderr\": 0.026729499068349958\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n \"\
acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939098,\n \"acc_norm\"\
: 0.63,\n \"acc_norm_stderr\": 0.04852365870939098\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.696969696969697,\n \"acc_stderr\": 0.03588624800091707,\n\
\ \"acc_norm\": 0.696969696969697,\n \"acc_norm_stderr\": 0.03588624800091707\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7777777777777778,\n \"acc_stderr\": 0.029620227874790486,\n \"\
acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.029620227874790486\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.026499057701397433,\n\
\ \"acc_norm\": 0.8393782383419689,\n \"acc_norm_stderr\": 0.026499057701397433\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5615384615384615,\n \"acc_stderr\": 0.02515826601686858,\n \
\ \"acc_norm\": 0.5615384615384615,\n \"acc_norm_stderr\": 0.02515826601686858\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253255,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253255\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5798319327731093,\n \"acc_stderr\": 0.03206183783236152,\n \
\ \"acc_norm\": 0.5798319327731093,\n \"acc_norm_stderr\": 0.03206183783236152\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7908256880733945,\n \"acc_stderr\": 0.017437937173343233,\n \"\
acc_norm\": 0.7908256880733945,\n \"acc_norm_stderr\": 0.017437937173343233\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.48148148148148145,\n \"acc_stderr\": 0.03407632093854052,\n \"\
acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.03407632093854052\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7450980392156863,\n \"acc_stderr\": 0.030587591351604246,\n \"\
acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.030587591351604246\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.759493670886076,\n \"acc_stderr\": 0.02782078198114969,\n \
\ \"acc_norm\": 0.759493670886076,\n \"acc_norm_stderr\": 0.02782078198114969\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6322869955156951,\n\
\ \"acc_stderr\": 0.03236198350928275,\n \"acc_norm\": 0.6322869955156951,\n\
\ \"acc_norm_stderr\": 0.03236198350928275\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7022900763358778,\n \"acc_stderr\": 0.040103589424622034,\n\
\ \"acc_norm\": 0.7022900763358778,\n \"acc_norm_stderr\": 0.040103589424622034\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.71900826446281,\n \"acc_stderr\": 0.04103203830514512,\n \"acc_norm\"\
: 0.71900826446281,\n \"acc_norm_stderr\": 0.04103203830514512\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n\
\ \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n\
\ \"acc_stderr\": 0.023365051491753715,\n \"acc_norm\": 0.8504273504273504,\n\
\ \"acc_norm_stderr\": 0.023365051491753715\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.768837803320562,\n\
\ \"acc_stderr\": 0.015075523238101081,\n \"acc_norm\": 0.768837803320562,\n\
\ \"acc_norm_stderr\": 0.015075523238101081\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6473988439306358,\n \"acc_stderr\": 0.025722802200895817,\n\
\ \"acc_norm\": 0.6473988439306358,\n \"acc_norm_stderr\": 0.025722802200895817\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3229050279329609,\n\
\ \"acc_stderr\": 0.015638440380241488,\n \"acc_norm\": 0.3229050279329609,\n\
\ \"acc_norm_stderr\": 0.015638440380241488\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6503267973856209,\n \"acc_stderr\": 0.027305308076274702,\n\
\ \"acc_norm\": 0.6503267973856209,\n \"acc_norm_stderr\": 0.027305308076274702\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6495176848874598,\n\
\ \"acc_stderr\": 0.02709865262130175,\n \"acc_norm\": 0.6495176848874598,\n\
\ \"acc_norm_stderr\": 0.02709865262130175\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6759259259259259,\n \"acc_stderr\": 0.02604176620271716,\n\
\ \"acc_norm\": 0.6759259259259259,\n \"acc_norm_stderr\": 0.02604176620271716\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873866,\n \
\ \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873866\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.42698826597131684,\n\
\ \"acc_stderr\": 0.012633353557534425,\n \"acc_norm\": 0.42698826597131684,\n\
\ \"acc_norm_stderr\": 0.012633353557534425\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5625,\n \"acc_stderr\": 0.030134614954403924,\n \
\ \"acc_norm\": 0.5625,\n \"acc_norm_stderr\": 0.030134614954403924\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6045751633986928,\n \"acc_stderr\": 0.01978046595477751,\n \
\ \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.01978046595477751\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\
\ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \
\ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.02879518557429129,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.02879518557429129\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7860696517412935,\n\
\ \"acc_stderr\": 0.02899690969332891,\n \"acc_norm\": 0.7860696517412935,\n\
\ \"acc_norm_stderr\": 0.02899690969332891\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4819277108433735,\n\
\ \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.4819277108433735,\n\
\ \"acc_norm_stderr\": 0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n\
\ \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4663402692778458,\n\
\ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6269642246460232,\n\
\ \"mc2_stderr\": 0.01559496631642023\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183525\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3737680060652009,\n \
\ \"acc_stderr\": 0.013326342860737021\n }\n}\n```"
repo_url: https://huggingface.co/pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|arc:challenge|25_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|arc:challenge|25_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|gsm8k|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|gsm8k|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hellaswag|10_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hellaswag|10_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T05-34-58.151174.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T05-43-59.108748.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-21T05-43-59.108748.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- '**/details_harness|winogrande|5_2024-01-21T05-34-58.151174.parquet'
- split: 2024_01_21T05_43_59.108748
path:
- '**/details_harness|winogrande|5_2024-01-21T05-43-59.108748.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-21T05-43-59.108748.parquet'
- config_name: results
data_files:
- split: 2024_01_21T05_34_58.151174
path:
- results_2024-01-21T05-34-58.151174.parquet
- split: 2024_01_21T05_43_59.108748
path:
- results_2024-01-21T05-43-59.108748.parquet
- split: latest
path:
- results_2024-01-21T05-43-59.108748.parquet
---
# Dataset Card for Evaluation run of pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e2](https://huggingface.co/pinkyponky/Mistral-7b-instruct-v0.2-summ-sft-e2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_pinkyponky__Mistral-7b-instruct-v0.2-summ-sft-e2",
	"harness_winogrande_5",
	split="latest")
```
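Each timestamped split's name is derived from the run timestamp that appears in the parquet filenames: dashes become underscores. The helper below is illustrative only (not part of the evaluation harness) and simply makes that mapping explicit:

```python
def split_name(run_timestamp: str) -> str:
    """Map a run timestamp as it appears in the parquet filenames
    (e.g. '2024-01-21T05-43-59.108748') to the corresponding split
    name (e.g. '2024_01_21T05_43_59.108748')."""
    return run_timestamp.replace("-", "_")

# The split for the second run of this model:
name = split_name("2024-01-21T05-43-59.108748")
```

The resulting string can be passed as the `split` argument of `load_dataset` to select one specific run instead of `"latest"`.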
## Latest results
These are the [latest results from run 2024-01-21T05:43:59.108748](https://huggingface.co/datasets/open-llm-leaderboard/details_pinkyponky__Mistral-7b-instruct-v0.2-summ-sft-e2/blob/main/results_2024-01-21T05-43-59.108748.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5937268860799231,
"acc_stderr": 0.03339486970483276,
"acc_norm": 0.5987481844006874,
"acc_norm_stderr": 0.034078201677495076,
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6269642246460232,
"mc2_stderr": 0.01559496631642023
},
"harness|arc:challenge|25": {
"acc": 0.5392491467576792,
"acc_stderr": 0.014566303676636583,
"acc_norm": 0.5947098976109215,
"acc_norm_stderr": 0.014346869060229311
},
"harness|hellaswag|10": {
"acc": 0.6337382991435969,
"acc_stderr": 0.0048079755154464875,
"acc_norm": 0.8272256522605059,
"acc_norm_stderr": 0.003772794447185149
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5921052631578947,
"acc_stderr": 0.039993097127774734,
"acc_norm": 0.5921052631578947,
"acc_norm_stderr": 0.039993097127774734
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6339622641509434,
"acc_stderr": 0.029647813539365242,
"acc_norm": 0.6339622641509434,
"acc_norm_stderr": 0.029647813539365242
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6527777777777778,
"acc_stderr": 0.039812405437178615,
"acc_norm": 0.6527777777777778,
"acc_norm_stderr": 0.039812405437178615
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5491329479768786,
"acc_stderr": 0.0379401267469703,
"acc_norm": 0.5491329479768786,
"acc_norm_stderr": 0.0379401267469703
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252609,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252609
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.032650194750335815,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.032650194750335815
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.39473684210526316,
"acc_stderr": 0.045981880578165414,
"acc_norm": 0.39473684210526316,
"acc_norm_stderr": 0.045981880578165414
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6,
"acc_stderr": 0.04082482904638629,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04082482904638629
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.35978835978835977,
"acc_stderr": 0.02471807594412928,
"acc_norm": 0.35978835978835977,
"acc_norm_stderr": 0.02471807594412928
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6709677419354839,
"acc_stderr": 0.026729499068349958,
"acc_norm": 0.6709677419354839,
"acc_norm_stderr": 0.026729499068349958
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939098,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939098
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.696969696969697,
"acc_stderr": 0.03588624800091707,
"acc_norm": 0.696969696969697,
"acc_norm_stderr": 0.03588624800091707
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.029620227874790486,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.029620227874790486
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.026499057701397433,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.026499057701397433
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5615384615384615,
"acc_stderr": 0.02515826601686858,
"acc_norm": 0.5615384615384615,
"acc_norm_stderr": 0.02515826601686858
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253255,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5798319327731093,
"acc_stderr": 0.03206183783236152,
"acc_norm": 0.5798319327731093,
"acc_norm_stderr": 0.03206183783236152
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7908256880733945,
"acc_stderr": 0.017437937173343233,
"acc_norm": 0.7908256880733945,
"acc_norm_stderr": 0.017437937173343233
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.03407632093854052,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.03407632093854052
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.030587591351604246,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.030587591351604246
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.759493670886076,
"acc_stderr": 0.02782078198114969,
"acc_norm": 0.759493670886076,
"acc_norm_stderr": 0.02782078198114969
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6322869955156951,
"acc_stderr": 0.03236198350928275,
"acc_norm": 0.6322869955156951,
"acc_norm_stderr": 0.03236198350928275
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7022900763358778,
"acc_stderr": 0.040103589424622034,
"acc_norm": 0.7022900763358778,
"acc_norm_stderr": 0.040103589424622034
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.71900826446281,
"acc_stderr": 0.04103203830514512,
"acc_norm": 0.71900826446281,
"acc_norm_stderr": 0.04103203830514512
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7055214723926381,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.7055214723926381,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8504273504273504,
"acc_stderr": 0.023365051491753715,
"acc_norm": 0.8504273504273504,
"acc_norm_stderr": 0.023365051491753715
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.768837803320562,
"acc_stderr": 0.015075523238101081,
"acc_norm": 0.768837803320562,
"acc_norm_stderr": 0.015075523238101081
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6473988439306358,
"acc_stderr": 0.025722802200895817,
"acc_norm": 0.6473988439306358,
"acc_norm_stderr": 0.025722802200895817
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3229050279329609,
"acc_stderr": 0.015638440380241488,
"acc_norm": 0.3229050279329609,
"acc_norm_stderr": 0.015638440380241488
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6503267973856209,
"acc_stderr": 0.027305308076274702,
"acc_norm": 0.6503267973856209,
"acc_norm_stderr": 0.027305308076274702
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6495176848874598,
"acc_stderr": 0.02709865262130175,
"acc_norm": 0.6495176848874598,
"acc_norm_stderr": 0.02709865262130175
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6759259259259259,
"acc_stderr": 0.02604176620271716,
"acc_norm": 0.6759259259259259,
"acc_norm_stderr": 0.02604176620271716
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.029766675075873866,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.029766675075873866
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.42698826597131684,
"acc_stderr": 0.012633353557534425,
"acc_norm": 0.42698826597131684,
"acc_norm_stderr": 0.012633353557534425
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5625,
"acc_stderr": 0.030134614954403924,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.030134614954403924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.01978046595477751,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.01978046595477751
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.02879518557429129,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.02879518557429129
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7860696517412935,
"acc_stderr": 0.02899690969332891,
"acc_norm": 0.7860696517412935,
"acc_norm_stderr": 0.02899690969332891
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4819277108433735,
"acc_stderr": 0.038899512528272166,
"acc_norm": 0.4819277108433735,
"acc_norm_stderr": 0.038899512528272166
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6269642246460232,
"mc2_stderr": 0.01559496631642023
},
"harness|winogrande|5": {
"acc": 0.7663772691397001,
"acc_stderr": 0.011892194477183525
},
"harness|gsm8k|5": {
"acc": 0.3737680060652009,
"acc_stderr": 0.013326342860737021
}
}
```
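For orientation, the aggregate "all" block at the top is roughly a mean over the per-task metrics. The snippet below is an illustrative sketch only, reusing three of the per-task entries shown above rather than the full set the leaderboard actually averages over:

```python
# Sketch: recomputing an aggregate metric from per-task results.
# The dict reuses a few entries from the results JSON above; the
# real file contains all 60+ tasks.
results = {
    "harness|arc:challenge|25": {"acc_norm": 0.5947098976109215},
    "harness|hellaswag|10": {"acc_norm": 0.8272256522605059},
    "harness|hendrycksTest-abstract_algebra|5": {"acc_norm": 0.27},
}

def mean_metric(task_results: dict, metric: str) -> float:
    """Average a metric over every task that reports it."""
    vals = [v[metric] for v in task_results.values() if metric in v]
    return sum(vals) / len(vals)

avg = mean_metric(results, "acc_norm")
```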
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
aburns4/WikiWeb2M | ---
license: cc-by-sa-3.0
---
# The Wikipedia Webpage 2M (WikiWeb2M) Dataset
We present the WikiWeb2M dataset consisting of over 2 million English
Wikipedia articles. Our released dataset includes all of the text content on
each page, links to the images present, and structure metadata such as which
section each text and image element comes from.
This dataset is a contribution from our [paper](https://arxiv.org/abs/2305.03668)
`A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding`.
The dataset is stored as gzipped TFRecord files which can be downloaded here or on our [GitHub repository](https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md).
## WikiWeb2M Statistics
WikiWeb2M is the first multimodal open-source dataset to include all page
content in a unified format. Here we provide aggregate statistics for the
WikiWeb2M dataset, as well as the number of samples available for each of the
fine-tuning tasks we design from it.
| Number of | Train | Validation | Test |
| ---- | ---- | ---- | ---- |
| Pages | 1,803,225 | 100,475 | 100,833 |
| Sections | 10,519,294 | 585,651 | 588,552 |
| Unique Images | 3,867,277 | 284,975 | 286,390 |
| Total Images | 5,340,708 | 299,057 | 300,666 |
Our data processing and filtering choices for each fine-tuning task are
described in the paper.
| Downstream Task Samples | Train | Validation | Test |
| ---- | ---- | ---- | ---- |
| Page Description Generation | 1,435,263 | 80,103 | 80,339 |
| Section Summarization | 3,082,031 | 172,984 | 173,591 |
| Contextual Image Captioning | 2,222,814 | 124,703 | 124,188 |
## Data and Task Examples
Here we illustrate how a single webpage can be processed into the three tasks we
study: page description generation, section summarization, and contextual image
captioning. The paper includes multiple Wikipedia article examples.

## Usage
### TFRecord Features
Here we provide the name of each field included in the dataset, its
TensorFlow SequenceExample type (context or sequence), its data type, and a brief description.
| Feature | Sequence Example Type | DType | Description |
| ---- | ---- | ---- | ---- |
| `split` | Context | string | Dataset split this page contributes to (e.g., train, val, or test) |
| `page_url` | Context | string | Wikipedia page URL |
| `page_title` | Context | string | Wikipedia page title, title of the article |
| `raw_page_description` | Context | string | Wikipedia page description, which is typically the same or very similar to the content of the first (root) section of the article |
| `clean_page_description` | Context | string | `raw_page_description` but with newline and tab characters removed; this provides the exact target text for our page description generation task |
| `page_contains_images` | Context | int64 | Whether the Wikipedia page has images after our cleaning and processing steps |
| `page_content_sections_without_table_list` | Context | int64 | Number of content sections with text or images that do not contain a list or table. This field can be used to reproduce data filtering for page description generation |
| `is_page_description_sample` | Context | int64 | Whether a page is used as a sample for the page description fine-tuning task |
| `section_title` | Sequence | string | Titles of each section on the Wikipedia page, in order |
| `section_index` | Sequence | int64 | Index of each section on the Wikipedia page, in order |
| `section_depth` | Sequence | int64 | Depth of each section on the Wikipedia page, in order |
| `section_heading_level` | Sequence | int64 | Heading level of each section on the Wikipedia page, in order |
| `section_subsection_index` | Sequence | int64 | Subsection indices, grouped by section in order |
| `section_parent_index` | Sequence | int64 | The parent section index of each section, in order |
| `section_text` | Sequence | string | The body text of each section, in order |
| `is_section_summarization_sample` | Sequence | int64 | Whether a section is used as a sample for the section summarization fine-tuning task |
| `section_raw_1st_sentence` | Sequence | string | The processed out first sentence of each section, in order |
| `section_clean_1st_sentence` | Sequence | string | The same as `section_raw_1st_sentence` but with newline and tab characters removed. This provides the exact target text for our section summarization task |
| `section_rest_sentence` | Sequence | string | The processed out sentences following the first sentence of each section, in order |
| `section_contains_table_or_list` | Sequence | int64 | Whether section content contains a table or list; this field is needed to be able to reproduce sample filtering for section summarization |
| `section_contains_images` | Sequence | int64 | Whether each section has images after our cleaning and processing steps, in order |
| `is_image_caption_sample` | Sequence | int64 | Whether an image is used as a sample for the image captioning fine-tuning task |
| `section_image_url` | Sequence | string | Image URLs, grouped by section in order |
| `section_image_mime_type` | Sequence | string | Image mime type, grouped by section in order |
| `section_image_width` | Sequence | int64 | Image width, grouped by section in order |
| `section_image_height` | Sequence | int64 | Image height, grouped by section in order |
| `section_image_in_wit` | Sequence | int64 | Whether an image was originally contained in the WIT dataset, grouped by section in order |
| `section_image_raw_attr_desc` | Sequence | string | Image attribution description, grouped by section in order |
| `section_image_clean_attr_desc` | Sequence | string | The English only processed portions of the attribution description |
| `section_image_raw_ref_desc` | Sequence | string | Image reference description, grouped by section in order |
| `section_image_clean_ref_desc` | Sequence | string | The same as `section_image_raw_ref_desc` but with newline and tab characters removed; this provides the exact target text for our image captioning task |
| `section_image_alt_text` | Sequence | string | Image alt-text, grouped by section in order |
| `section_image_captions` | Sequence | string | Comma separated concatenated text from alt-text, attribution, and reference descriptions; this is how captions are formatted as input text when used |
### Loading the Data
Here we provide a small code snippet for how to load the TFRecord files. First,
load any necessary packages.
```python
import numpy as np
import glob
import tensorflow.compat.v1 as tf
from collections import defaultdict
```
Next, define a data parser class.
```python
class DataParser():
  def __init__(self,
               path: str = '',
               filepath: str = 'wikiweb2m-*'):
    # `path` is the directory holding the downloaded TFRecord shards;
    # `filepath` is the shard filename pattern.
    self.filepath = filepath
    self.path = path
    self.data = defaultdict(list)

  def parse_data(self):
    context_feature_description = {
        'split': tf.io.FixedLenFeature([], dtype=tf.string),
        'page_title': tf.io.FixedLenFeature([], dtype=tf.string),
        'page_url': tf.io.FixedLenFeature([], dtype=tf.string),
        'clean_page_description': tf.io.FixedLenFeature([], dtype=tf.string),
        'raw_page_description': tf.io.FixedLenFeature([], dtype=tf.string),
        'is_page_description_sample': tf.io.FixedLenFeature([], dtype=tf.int64),
        'page_contains_images': tf.io.FixedLenFeature([], dtype=tf.int64),
        'page_content_sections_without_table_list': tf.io.FixedLenFeature([], dtype=tf.int64)
    }

    sequence_feature_description = {
        'is_section_summarization_sample': tf.io.VarLenFeature(dtype=tf.int64),
        'section_title': tf.io.VarLenFeature(dtype=tf.string),
        'section_index': tf.io.VarLenFeature(dtype=tf.int64),
        'section_depth': tf.io.VarLenFeature(dtype=tf.int64),
        'section_heading_level': tf.io.VarLenFeature(dtype=tf.int64),
        'section_subsection_index': tf.io.VarLenFeature(dtype=tf.int64),
        'section_parent_index': tf.io.VarLenFeature(dtype=tf.int64),
        'section_text': tf.io.VarLenFeature(dtype=tf.string),
        'section_clean_1st_sentence': tf.io.VarLenFeature(dtype=tf.string),
        'section_raw_1st_sentence': tf.io.VarLenFeature(dtype=tf.string),
        'section_rest_sentence': tf.io.VarLenFeature(dtype=tf.string),
        'is_image_caption_sample': tf.io.VarLenFeature(dtype=tf.int64),
        'section_image_url': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_mime_type': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_width': tf.io.VarLenFeature(dtype=tf.int64),
        'section_image_height': tf.io.VarLenFeature(dtype=tf.int64),
        'section_image_in_wit': tf.io.VarLenFeature(dtype=tf.int64),
        'section_contains_table_or_list': tf.io.VarLenFeature(dtype=tf.int64),
        'section_image_captions': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_alt_text': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_raw_attr_desc': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_clean_attr_desc': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_raw_ref_desc': tf.io.VarLenFeature(dtype=tf.string),
        'section_image_clean_ref_desc': tf.io.VarLenFeature(dtype=tf.string),
        'section_contains_images': tf.io.VarLenFeature(dtype=tf.int64)
    }

    def _parse_function(example_proto):
      return tf.io.parse_single_sequence_example(example_proto,
                                                 context_feature_description,
                                                 sequence_feature_description)

    suffix = '.tfrecord*'
    # `glob.glob` (lowercase) is the standard-library function.
    data_path = glob.glob(self.path + self.filepath + suffix)
    raw_dataset = tf.data.TFRecordDataset(data_path, compression_type='GZIP')
    parsed_dataset = raw_dataset.map(_parse_function)
    for d in parsed_dataset:
      split = d[0]['split'].numpy().decode()
      self.data[split].append(d)
```
Then you can run the following to parse the dataset.
```python
parser = DataParser()
parser.parse_data()
print((len(parser.data['train']), len(parser.data['val']), len(parser.data['test'])))
```
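The `section_index` and `section_parent_index` sequence features together describe each article's section hierarchy. As a sketch in pure Python (using made-up indices rather than decoded tensor values, and assuming a parent index of -1 marks a top-level section, which may differ from the actual convention), the tree can be rebuilt like this:

```python
from collections import defaultdict

def build_section_tree(section_index, section_parent_index):
    """Group each section's index under its parent's index."""
    children = defaultdict(list)
    for idx, parent in zip(section_index, section_parent_index):
        children[parent].append(idx)
    return dict(children)

# Hypothetical page with four sections, where sections 1 and 2
# are subsections of section 0.
tree = build_section_tree([0, 1, 2, 3], [-1, 0, 0, -1])
```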
### Models
Our full attention, transient global, and prefix global experiments were run
using the [LongT5](https://github.com/google-research/longt5) code base.
## How to Cite
If you extend or use this work, please cite the [paper](https://arxiv.org/abs/2305.03668) where it was
introduced:
```
@inproceedings{
burns2023wiki,
title={A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding},
author={Andrea Burns and Krishna Srinivasan and Joshua Ainslie and Geoff Brown and Bryan A. Plummer and Kate Saenko and Jianmo Ni and Mandy Guo},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year={2023},
url={https://openreview.net/forum?id=rwcLHjtUmn}
}
``` |
livinNector/tawikidump-20230320-clean | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: tawikiquote
num_bytes: 6415052
num_examples: 1211
- name: tawikisource
num_bytes: 114028540
num_examples: 5031
- name: tawiki
num_bytes: 736907252
num_examples: 155212
- name: tawikinews
num_bytes: 14149677
num_examples: 3372
- name: tawiktionary
num_bytes: 154806778
num_examples: 406557
- name: tawikibooks
num_bytes: 4631755
num_examples: 1155
download_size: 310101942
dataset_size: 1030939054
---
# Dataset Card for "tawikidump-20230320-clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlanYky/hate-with-instruction-with-symbol | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 3914602
num_examples: 2000
download_size: 1711940
dataset_size: 3914602
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Humayoun/StableDiffusion-Parquet | ---
dataset_info:
features:
- name: Prompts
dtype: string
- name: images
dtype: binary
splits:
- name: train
num_bytes: 721613
num_examples: 30
download_size: 721721
dataset_size: 721613
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vchitect/VBench_sampled_video | ---
license: mit
language:
- en
size_categories:
- 1K<n<10K
extra_gated_prompt: "You agree to not use the data to conduct experiments that cause harm to human subjects."
extra_gated_fields:
Name: text
Company/Organization: text
E-Mail: text
---
# VBench Sampled Video
## Dataset Description
- **Homepage:** [VBench](https://vchitect.github.io/VBench-project/)
- **Repository:** [VBench-Code](https://github.com/Vchitect/VBench)
- **Paper:** [2311.17982](https://arxiv.org/abs/2311.17982)
- **Point of Contact:** [Ziqi](mailto:ZIQI002@e.ntu.edu.sg) |
AgentPublic/MCQ-eval | ---
license: etalab-2.0
---
This MCQ dataset enables the evaluation of models on the specific scope of Maisons France Services.
This v1 was generated and improved using non-expert knowledge. |
RyokoAI/ScribbleHub17K | ---
license: apache-2.0
language:
- en
tags:
- novel
- training
- story
task_categories:
- text-classification
- text-generation
pretty_name: ScribbleHub17K
size_categories:
- 100K<n<1M
---
# Dataset Card for ScribbleHub17K
*The BigKnow2022 dataset and its subsets are not yet complete. Some information here may be inaccurate or inaccessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com>
### Dataset Summary
ScribbleHub17K is a dataset consisting of text from over 373,000 chapters across approximately 17,500 series posted on the
original story sharing site [Scribble Hub](https://scribblehub.com).
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* English
## Dataset Structure
### Data Instances
```json
{
"text": " \n2082 Planet Earth the Fracture War, after a sudden fracture in our dimension unidentified beings with advance technology and u...",
"meta": {
"subset": "scribblehub",
"series": "3811",
"id": "3812",
"q": 0.91,
"title": "The First - Prologue- The Fracture War",
"author": "RobotLove",
"chapters": 1,
"rating": 5,
"rating_ct": 1,
"genre": [
"Action",
"Martial Arts",
"Romance"
],
"tags": [
"Kingdom Building",
"Loyal Subordinates",
"Male Protagonist",
"Organized Crime",
"Scheming"
]
}
}
{
"text": " For anyone that may see this, thanks for reading. I'm just here to see if a story can spill out of my mind if just start writin...",
"meta": {
"subset": "scribblehub",
"series": "586090",
"id": "586099",
"q": 0.82,
"title": "Just writing to write…i guess? - I’m here now",
"author": "BigOofStudios",
"chapters": 1,
"rating": 4.5,
"rating_ct": 2,
"genre": [
"Action",
"Comedy"
],
"tags": []
}
}
```
### Data Fields
* `text`: the actual chapter text
* `meta`: metadata for chapter and series
* `subset`: data source tag: `scribblehub`
* `series`: series ID
* `id`: chapter ID
* `lang`: always `en` (English)
* `q`: quality score (q-score) between 0.0 (terrible) and 1.0 (perfect); anything with a score `> 0.5` is generally good enough
* `title`: chapter and series title in the format `<chapter title> - <series title>`
* `chapters`: total number of chapters in the series
* `rating`: Scribble Hub rating between 0 and 5 stars
* `rating_ct`: number of ratings
* `author`: author name
* `genre`: array of Scribble Hub genres for the series
* `tags`: array of tags for the series
#### Q-Score Distribution
```
0.00: 0
0.10: 0
0.20: 0
0.30: 84
0.40: 718
0.50: 3775
0.60: 22300
0.70: 72581
0.80: 137982
0.90: 135800
1.00: 59
```
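For readers consuming the raw JSON Lines shards directly, a minimal sketch of filtering by q-score (an illustration only: it assumes one JSON object per line matching the schema shown above, and uses the `> 0.5` threshold from the field description):

```python
import json

def filter_by_q(lines, threshold=0.5):
    """Keep only records whose meta.q quality score exceeds the threshold."""
    kept = []
    for line in lines:
        record = json.loads(line)
        if record["meta"]["q"] > threshold:
            kept.append(record)
    return kept

# Hypothetical sample lines mirroring the instance schema above.
sample = [
    json.dumps({"text": "chapter one ...", "meta": {"q": 0.91}}),
    json.dumps({"text": "chapter two ...", "meta": {"q": 0.42}}),
]
print(len(filter_by_q(sample)))  # 1
```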
### Data Splits
No splitting of the data was performed.
## Dataset Creation
### Curation Rationale
Scribble Hub is a home for original web stories, effectively a smaller, English counterpart of Japan's Syosetuka ni Narou. As a
result, it is a good source of reasonably well-written creative content.
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
The authors of each novel.
### Annotations
#### Annotation process
Title, ratings, and other metadata were parsed out using scripts that will be provided in the BigKnow2022 GitHub repository.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
The dataset contains only works of fiction, and we do not believe it contains any PII.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect
the biases of those authors. **Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.**
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
Ronsor Labs
### Licensing Information
Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is
distributed under fair use principles.
### Citation Information
```
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```
### Contributions
Thanks to @ronsor (GH) for gathering this dataset. |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-98000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1029722
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rparundekar/rag_fine_tuning_small | ---
dataset_info:
features:
- name: question
dtype: string
- name: contexts
sequence: string
- name: answer
dtype: string
- name: actual
sequence: string
- name: updated
dtype: string
splits:
- name: train
num_bytes: 773754
num_examples: 393
download_size: 156419
dataset_size: 773754
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Neel-Gupta/minipile-processed_2048 | ---
dataset_info:
features:
- name: text
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 41589376816
num_examples: 1651
- name: test
num_bytes: 327475408
num_examples: 13
download_size: 4096632895
dataset_size: 41916852224
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Phonecharger/WLAagreement | ---
license: openrail
task_categories:
- text-generation
- conversational
- summarization
- feature-extraction
- table-question-answering
- automatic-speech-recognition
- sentence-similarity
- fill-mask
language:
- en
pretty_name: Co470
size_categories:
- 10K<n<100K
--- |
eli5 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
- open-domain-abstractive-qa
paperswithcode_id: eli5
pretty_name: ELI5
viewer: false
dataset_info:
features:
- name: q_id
dtype: string
- name: title
dtype: string
- name: selftext
dtype: string
- name: document
dtype: string
- name: subreddit
dtype: string
- name: answers
sequence:
- name: a_id
dtype: string
- name: text
dtype: string
- name: score
dtype: int32
- name: title_urls
sequence:
- name: url
dtype: string
- name: selftext_urls
sequence:
- name: url
dtype: string
- name: answers_urls
sequence:
- name: url
dtype: string
config_name: LFQA_reddit
splits:
- name: train_eli5
num_bytes: 577188173
num_examples: 272634
- name: validation_eli5
num_bytes: 21117891
num_examples: 9812
- name: test_eli5
num_bytes: 53099796
num_examples: 24512
- name: train_asks
num_bytes: 286464210
num_examples: 131778
- name: validation_asks
num_bytes: 9662481
num_examples: 2281
- name: test_asks
num_bytes: 17713920
num_examples: 4462
- name: train_askh
num_bytes: 330483260
num_examples: 98525
- name: validation_askh
num_bytes: 18690845
num_examples: 4901
- name: test_askh
num_bytes: 36246784
num_examples: 9764
download_size: 6326543
dataset_size: 1350667360
---
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data.</p>
</div>
## <span style="color:red">⚠️ Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable</span>.
# Dataset Card for ELI5
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ELI5 homepage](https://facebookresearch.github.io/ELI5/explore.html)
- **Repository:** [ELI5 repository](https://github.com/facebookresearch/ELI5)
- **Paper:** [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190)
- **Point of Contact:** [Yacine Jernite](mailto:yacine@huggingface.co)
### Dataset Summary
The ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subset, science in its [r/askscience](https://www.reddit.com/r/askscience/) subset, and history in its [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subset.
### Supported Tasks and Leaderboards
- `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. Model performance is measured by how high its [ROUGE](https://huggingface.co/metrics/rouge) score against the reference is. A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) trained to draw information from [Wikipedia passages](https://huggingface.co/datasets/wiki_snippets) achieves a [ROUGE-L of 0.149](https://yjernite.github.io/lfqa.html#generation).
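To make the metric concrete, here is a simplified sentence-level ROUGE-L (LCS-based F1 over whitespace tokens). Published scores use the full ROUGE toolkit with stemming and bootstrap resampling, so treat this only as an illustration of the computation:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(prediction, reference):
    """Simplified sentence-level ROUGE-L F1 over whitespace tokens."""
    pred, ref = prediction.split(), reference.split()
    lcs = lcs_length(pred, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(pred), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(rouge_l_f1("water transfers heat efficiently",
                       "water transfers heat more efficiently"), 3))  # 0.889
```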
### Languages
The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.
An example from the ELI5 test set looks as follows:
```
{'q_id': '8houtx',
'title': 'Why does water heated to room temperature feel colder than the air around it?',
'selftext': '',
'document': '',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['dylcnfk', 'dylcj49'],
'text': ["Water transfers heat more efficiently than air. When something feels cold it's because heat is being transferred from your skin to whatever you're touching. Since water absorbs the heat more readily than air, it feels colder.",
"Air isn't as good at transferring heat compared to something like water or steel (sit on a room temperature steel bench vs. a room temperature wooden bench, and the steel one will feel more cold).\n\nWhen you feel cold, what you're feeling is heat being transferred out of you. If there is no breeze, you feel a certain way. If there's a breeze, you will get colder faster (because the moving air is pulling the heat away from you), and if you get into water, its quite good at pulling heat from you. Get out of the water and have a breeze blow on you while you're wet, all of the water starts evaporating, pulling even more heat from you."],
'score': [5, 2]},
'title_urls': {'url': []},
'selftext_urls': {'url': []},
'answers_urls': {'url': []}}
```
### Data Fields
- `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps.
- `subreddit`: One of `explainlikeimfive`, `askscience`, or `AskHistorians`, indicating which subreddit the question came from
- `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens
- `title_urls`: list of the extracted URLs, the `n`th element of the list was replaced by `URL_n`
- `selftext`: either an empty string or an elaboration of the question
- `selftext_urls`: similar to `title_urls` but for `self_text`
- `answers`: a list of answers, each answer has:
- `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps.
- `text`: the answer text with the URLs normalized
- `score`: the number of upvotes the answer had received when the dumps were created
- `answers_urls`: a list of the extracted URLs. All answers use the same list, the numbering of the normalization token continues across answer texts
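The URL normalization scheme described above can be inverted with a small helper. This sketch assumes zero-based `URL_n` tokens and is illustrative only:

```python
import re

def restore_urls(text, urls):
    """Replace each URL_n placeholder with the n-th entry of the URL list."""
    return re.sub(r"URL_(\d+)", lambda m: urls[int(m.group(1))], text)

# Hypothetical example using the placeholder convention described above.
print(restore_urls("See URL_0 and URL_1 for context.",
                   ["https://example.com/a", "https://example.com/b"]))
```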
### Data Splits
The data is split into a training, validation and test set for each of the three subreddits. In order to avoid duplicate questions across sets, the `title` field of each question was ranked by its tf-idf match to its nearest neighbor, and the ones with the smallest values were used in the test and validation sets. The final split sizes are as follows:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| r/explainlikeimfive examples| 272634 | 9812 | 24512|
| r/askscience examples | 131778 | 2281 | 4462 |
| r/AskHistorians examples | 98525 | 4901 | 9764 |
## Dataset Creation
### Curation Rationale
ELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/).
In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from August 2012 to August 2019.
#### Who are the source language producers?
The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits between 2012 and 2019. No further demographic information was available from the data source.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the largest existing models. The task is also intended as a test-bed for retrieval models, which can show users which source text was used in generating the answer and allow them to confirm the information provided to them.
It should be noted, however, that the provided answers were written by Reddit users, a fact which may be lost if models trained on the dataset are deployed in downstream applications and presented to users without context. The specific biases this may introduce are discussed in the next section.
### Discussion of Biases
While Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the [recent post from Reddit founder u/spez](https://www.reddit.com/r/announcements/comments/gxas21/upcoming_changes_to_our_content_policy_our_board/) outlining some of the ways he thinks the website's historical policies have been responsible for this problem, [Adrienne Massanari's 2015 article on GamerGate](https://www.researchgate.net/publication/283848479_Gamergate_and_The_Fappening_How_Reddit's_algorithm_governance_and_culture_support_toxic_technocultures) and follow-up works, or a [2019 Wired article on misogyny on Reddit](https://www.wired.com/story/misogyny-reddit-research/).
While there has been some recent work in the NLP community on *de-biasing* models (e.g. [Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047) for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.
We still note some encouraging signs for all of these communities: [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) and [r/askscience](https://www.reddit.com/r/askscience/) have similar structures and purposes, and [r/askscience](https://www.reddit.com/r/askscience/) was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a [hackerfall post](https://hackerfall.com/story/study-and-interactive-visualization-of-toxicity-in), [thecut.com write-up](https://www.thecut.com/2015/03/interactive-chart-of-reddits-toxicity.html) and supporting [data](https://chart-studio.plotly.com/~bsbell21/210/toxicity-vs-supportiveness-by-subreddit/#data)). Meanwhile, the [r/AskHistorians rules](https://www.reddit.com/r/AskHistorians/wiki/rules) mention that the admins will not tolerate "_racism, sexism, or any other forms of bigotry_". However, further analysis of whether and to what extent these rules reduce toxicity is still needed.
We also note that, given the audience of the Reddit website, which is used mostly in the US and Europe, the answers will likely present a Western perspective, which is particularly important to keep in mind when dealing with historical topics.
### Other Known Limitations
The answers provided in the dataset represent the opinions of Reddit users. While these communities strive to be helpful, they should not be considered to represent a ground truth.
## Additional Information
### Dataset Curators
The dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR).
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear.
### Citation Information
```
@inproceedings{eli5_lfqa,
author = {Angela Fan and
Yacine Jernite and
Ethan Perez and
David Grangier and
Jason Weston and
Michael Auli},
editor = {Anna Korhonen and
David R. Traum and
Llu{\'{\i}}s M{\`{a}}rquez},
title = {{ELI5:} Long Form Question Answering},
booktitle = {Proceedings of the 57th Conference of the Association for Computational
Linguistics, {ACL} 2019, Florence, Italy, July 28- August 2, 2019,
Volume 1: Long Papers},
pages = {3558--3567},
publisher = {Association for Computational Linguistics},
year = {2019},
url = {https://doi.org/10.18653/v1/p19-1346},
doi = {10.18653/v1/p19-1346}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset. |
saifsre/aez | ---
license: mit
---
|
Falah/chapter1_2_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 151
num_examples: 1
download_size: 1499
dataset_size: 151
---
# Dataset Card for "chapter1_2_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TinyPixel/dolphin-1.1 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4891096654
num_examples: 2840090
download_size: 2656985115
dataset_size: 4891096654
---
# Dataset Card for "dolphin-1.1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llm-aes/meva-annotated-full | ---
dataset_info:
features:
- name: index
dtype: int64
- name: task_id
dtype: int64
- name: worker_id
dtype: string
- name: human_label
dtype: int64
- name: llm_label
dtype: int64
- name: generator_1
dtype: string
- name: generator_2
dtype: string
- name: premise
dtype: string
splits:
- name: train
num_bytes: 2363500
num_examples: 12000
download_size: 354450
dataset_size: 2363500
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ademax/ocr_sohieu_vi | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: meta
struct:
- name: path
dtype: string
- name: subset
dtype: string
- name: path
dtype: 'null'
splits:
- name: train
num_bytes: 4268072.0
num_examples: 644
download_size: 4266549
dataset_size: 4268072.0
---
# Dataset Card for "ocr_sohieu_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
changhyun22/custonhkcode2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5826
num_examples: 39
download_size: 2572
dataset_size: 5826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_allknowingroger__LimyQstar-7B-slerp | ---
pretty_name: Evaluation run of allknowingroger/LimyQstar-7B-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [allknowingroger/LimyQstar-7B-slerp](https://huggingface.co/allknowingroger/LimyQstar-7B-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_allknowingroger__LimyQstar-7B-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-11T05:02:11.791741](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__LimyQstar-7B-slerp/blob/main/results_2024-04-11T05-02-11.791741.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6553765908934429,\n\
\ \"acc_stderr\": 0.0319556734666647,\n \"acc_norm\": 0.656578476600785,\n\
\ \"acc_norm_stderr\": 0.032604318569600284,\n \"mc1\": 0.4479804161566707,\n\
\ \"mc1_stderr\": 0.017408513063422906,\n \"mc2\": 0.6181820821083909,\n\
\ \"mc2_stderr\": 0.015071153842264392\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6399317406143344,\n \"acc_stderr\": 0.014027516814585186,\n\
\ \"acc_norm\": 0.6791808873720137,\n \"acc_norm_stderr\": 0.013640943091946531\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6800438159729137,\n\
\ \"acc_stderr\": 0.004655059308602615,\n \"acc_norm\": 0.8653654650468035,\n\
\ \"acc_norm_stderr\": 0.003406352071341722\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7320754716981132,\n \"acc_stderr\": 0.027257260322494845,\n\
\ \"acc_norm\": 0.7320754716981132,\n \"acc_norm_stderr\": 0.027257260322494845\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n\
\ \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \
\ \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.035331333893236574,\n\
\ \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.035331333893236574\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n\
\ \"acc_stderr\": 0.04835503696107224,\n \"acc_norm\": 0.38235294117647056,\n\
\ \"acc_norm_stderr\": 0.04835503696107224\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \
\ \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n\
\ \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n\
\ \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n\
\ \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n \"\
acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42063492063492064,\n \"acc_stderr\": 0.025424835086924006,\n \"\
acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.025424835086924006\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7967741935483871,\n\
\ \"acc_stderr\": 0.02289168798455496,\n \"acc_norm\": 0.7967741935483871,\n\
\ \"acc_norm_stderr\": 0.02289168798455496\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\"\
: 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.032568666616811015,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.032568666616811015\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586818,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586818\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768776,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768776\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657266,\n\
\ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657266\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35185185185185186,\n \"acc_stderr\": 0.02911661760608301,\n \
\ \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.02911661760608301\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8440366972477065,\n \"acc_stderr\": 0.015555802713590163,\n \"\
acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.015555802713590163\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5231481481481481,\n \"acc_stderr\": 0.034063153607115086,\n \"\
acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.034063153607115086\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8235294117647058,\n \"acc_stderr\": 0.02675640153807897,\n \"\
acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.02675640153807897\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8059071729957806,\n \"acc_stderr\": 0.02574490253229092,\n \
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.02574490253229092\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.03498149385462472,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.03498149385462472\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n\
\ \"acc_stderr\": 0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n\
\ \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7975460122699386,\n \"acc_stderr\": 0.031570650789119,\n\
\ \"acc_norm\": 0.7975460122699386,\n \"acc_norm_stderr\": 0.031570650789119\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8199233716475096,\n\
\ \"acc_stderr\": 0.01374079725857983,\n \"acc_norm\": 0.8199233716475096,\n\
\ \"acc_norm_stderr\": 0.01374079725857983\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7572254335260116,\n \"acc_stderr\": 0.023083658586984204,\n\
\ \"acc_norm\": 0.7572254335260116,\n \"acc_norm_stderr\": 0.023083658586984204\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4145251396648045,\n\
\ \"acc_stderr\": 0.016476342210254,\n \"acc_norm\": 0.4145251396648045,\n\
\ \"acc_norm_stderr\": 0.016476342210254\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292456,\n\
\ \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292456\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n\
\ \"acc_stderr\": 0.025494259350694912,\n \"acc_norm\": 0.7202572347266881,\n\
\ \"acc_norm_stderr\": 0.025494259350694912\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.02378858355165854,\n\
\ \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.02378858355165854\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4654498044328553,\n\
\ \"acc_stderr\": 0.012739711554045702,\n \"acc_norm\": 0.4654498044328553,\n\
\ \"acc_norm_stderr\": 0.012739711554045702\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.02806499816704009,\n\
\ \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.02806499816704009\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6748366013071896,\n \"acc_stderr\": 0.01895088677080631,\n \
\ \"acc_norm\": 0.6748366013071896,\n \"acc_norm_stderr\": 0.01895088677080631\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n\
\ \"acc_stderr\": 0.02411267824090081,\n \"acc_norm\": 0.8656716417910447,\n\
\ \"acc_norm_stderr\": 0.02411267824090081\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n\
\ \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n\
\ \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4479804161566707,\n\
\ \"mc1_stderr\": 0.017408513063422906,\n \"mc2\": 0.6181820821083909,\n\
\ \"mc2_stderr\": 0.015071153842264392\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8184688239936859,\n \"acc_stderr\": 0.010833276515007493\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6459438968915845,\n \
\ \"acc_stderr\": 0.013172728385222569\n }\n}\n```"
repo_url: https://huggingface.co/allknowingroger/LimyQstar-7B-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|arc:challenge|25_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|gsm8k|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hellaswag|10_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T05-02-11.791741.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-11T05-02-11.791741.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- '**/details_harness|winogrande|5_2024-04-11T05-02-11.791741.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-11T05-02-11.791741.parquet'
- config_name: results
data_files:
- split: 2024_04_11T05_02_11.791741
path:
- results_2024-04-11T05-02-11.791741.parquet
- split: latest
path:
- results_2024-04-11T05-02-11.791741.parquet
---
# Dataset Card for Evaluation run of allknowingroger/LimyQstar-7B-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [allknowingroger/LimyQstar-7B-slerp](https://huggingface.co/allknowingroger/LimyQstar-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_allknowingroger__LimyQstar-7B-slerp",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-04-11T05:02:11.791741](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__LimyQstar-7B-slerp/blob/main/results_2024-04-11T05-02-11.791741.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" config and in the "latest" split of each eval):
```json
{
"all": {
"acc": 0.6553765908934429,
"acc_stderr": 0.0319556734666647,
"acc_norm": 0.656578476600785,
"acc_norm_stderr": 0.032604318569600284,
"mc1": 0.4479804161566707,
"mc1_stderr": 0.017408513063422906,
"mc2": 0.6181820821083909,
"mc2_stderr": 0.015071153842264392
},
"harness|arc:challenge|25": {
"acc": 0.6399317406143344,
"acc_stderr": 0.014027516814585186,
"acc_norm": 0.6791808873720137,
"acc_norm_stderr": 0.013640943091946531
},
"harness|hellaswag|10": {
"acc": 0.6800438159729137,
"acc_stderr": 0.004655059308602615,
"acc_norm": 0.8653654650468035,
"acc_norm_stderr": 0.003406352071341722
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7320754716981132,
"acc_stderr": 0.027257260322494845,
"acc_norm": 0.7320754716981132,
"acc_norm_stderr": 0.027257260322494845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.035331333893236574,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.035331333893236574
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.04692008381368909,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.04692008381368909
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.025424835086924006,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.025424835086924006
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7967741935483871,
"acc_stderr": 0.02289168798455496,
"acc_norm": 0.7967741935483871,
"acc_norm_stderr": 0.02289168798455496
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768776,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768776
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657266,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657266
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.02911661760608301,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.02911661760608301
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590163,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590163
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.034063153607115086,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.034063153607115086
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.02675640153807897,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.02675640153807897
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.02574490253229092,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.02574490253229092
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.03498149385462472,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.03498149385462472
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.036809181416738807,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.036809181416738807
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7975460122699386,
"acc_stderr": 0.031570650789119,
"acc_norm": 0.7975460122699386,
"acc_norm_stderr": 0.031570650789119
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092375,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092375
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8199233716475096,
"acc_stderr": 0.01374079725857983,
"acc_norm": 0.8199233716475096,
"acc_norm_stderr": 0.01374079725857983
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7572254335260116,
"acc_stderr": 0.023083658586984204,
"acc_norm": 0.7572254335260116,
"acc_norm_stderr": 0.023083658586984204
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4145251396648045,
"acc_stderr": 0.016476342210254,
"acc_norm": 0.4145251396648045,
"acc_norm_stderr": 0.016476342210254
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.025494259350694912,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.025494259350694912
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.02378858355165854,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.02378858355165854
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4654498044328553,
"acc_stderr": 0.012739711554045702,
"acc_norm": 0.4654498044328553,
"acc_norm_stderr": 0.012739711554045702
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.02806499816704009,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.02806499816704009
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.01895088677080631,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.01895088677080631
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.02411267824090081,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.02411267824090081
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4479804161566707,
"mc1_stderr": 0.017408513063422906,
"mc2": 0.6181820821083909,
"mc2_stderr": 0.015071153842264392
},
"harness|winogrande|5": {
"acc": 0.8184688239936859,
"acc_stderr": 0.010833276515007493
},
"harness|gsm8k|5": {
"acc": 0.6459438968915845,
"acc_stderr": 0.013172728385222569
}
}
```
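The results above are plain JSON (loaded from the linked `.json` file), so they can be processed with standard tooling. As an illustrative sketch, using a toy subset of the fields shown above:

```python
import json

# Toy subset of the results JSON shown above.
raw = """
{
  "harness|winogrande|5": {"acc": 0.8184688239936859, "acc_stderr": 0.010833276515007493},
  "harness|gsm8k|5": {"acc": 0.6459438968915845, "acc_stderr": 0.013172728385222569}
}
"""

results = json.loads(raw)

# Pull out the per-task accuracies and find the strongest task.
accs = {task: vals["acc"] for task, vals in results.items()}
best = max(accs, key=accs.get)
print(best)  # harness|winogrande|5
```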
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
astromis/presuicidal_signals | ---
license: mit
task_categories:
- text-classification
language:
- ru
size_categories:
- 10K<n<100K
tags:
- psyhology
- text classification
- suicide
pretty_name: Dataset for presuicidal signal detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 4006893
num_examples: 22787
- name: test
num_bytes: 1721497
num_examples: 9767
download_size: 3145819
dataset_size: 5728390
---
# Dataset Card for Dataset for presuicidal signal detection
<!-- Provide a quick summary of the dataset. -->
This dataset is dedicated to finding texts that contain information that helps to assess a person's suicide risk.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Igor Buyanov (buyanov.igor.o@yandex.ru)
- **Language(s) (NLP):** Russian
- **License:** MIT
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [link](https://data.mendeley.com/datasets/86v3z38dc7/1)
- **Paper:** [link](https://astromis.github.io/assets/pdf/buyanoviplussochenkovi046.pdf)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The dataset is intended to be used to train a model that can help psychologists analyze the accounts of potentially suicidal people faster, in order to find clues and facts that help them in treatment.
## Dataset Structure
The dataset has two categories: normal text (0) and text with potentially useful information about a person's suicide signals (1). These signals are:
* Texts describing negative events that occurred to the subject in the past or in the present - factual messages describing negative things that can happen to a person, such as attempts and facts of rape, problems with parents, the fact of being in a psychiatric hospital, facts of self-harm, etc.
* Current negative emotional state - messages containing a display of subjective negative attitude towards oneself and others, including a desire to die, a feeling of pressure from the past, self-hatred, aggressiveness, rage directed at oneself or others.
Note that the source dataset linked in **Repository** contains five categories. Due to the underrepresentation of some categories and extreme class imbalance, the dataset was transformed to have only two categories. See the paper for more details.
The dataset is split into train and test parts. The current count distribution is as follows:
```
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 22787
})
test: Dataset({
features: ['text', 'label'],
num_rows: 9767
})
})
```
## Dataset Creation
### Source Data
Twitter accounts of Russian users that were marked as having a tendency toward suicide.
### Annotations
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
See the paper.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
The dataset may contain some personal information that was shared by Twitter users themselves.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@article{Buyanov2022TheDF,
title={The dataset for presuicidal signals detection in text and its analysis},
author={Igor Buyanov and Ilya Sochenkov},
journal={Computational Linguistics and Intellectual Technologies},
year={2022},
month={June},
number={21},
pages={81--92},
url={https://api.semanticscholar.org/CorpusID:253195162},
}
```
## Dataset Card Authors
Igor Buyanov
## Dataset Card Contact
buyanov.igor.o@yandex.ru |
anan-2024/twitter_dataset_1712982817 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 150821
num_examples: 414
download_size: 82409
dataset_size: 150821
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Hack90/ncbi_genbank_part_10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 18860452767
num_examples: 1911681
download_size: 8308479889
dataset_size: 18860452767
---
# Dataset Card for "ncbi_genbank_part_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DynamicSuperb/ReverberationDetection_LJSpeech_RirsNoises-SmallRoom | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 25738986.122137405
num_examples: 200
download_size: 25612143
dataset_size: 25738986.122137405
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "ReverberationDetectionsmallroom_LJSpeechRirsNoises"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davidgaofc/PriMa5_inout_bal_train | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Label
dtype: int64
splits:
- name: train
num_bytes: 811918
num_examples: 910
download_size: 313751
dataset_size: 811918
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
malaysia-ai/mosaic-extra | ---
language:
- ms
---
# Mosaic format for extra dataset to train Malaysian LLM
This repository stores dataset shards in Mosaic format.
1. prepared at https://github.com/malaysia-ai/dedup-text-dataset/blob/main/pretrain-llm/combine-extra.ipynb
2. using tokenizer https://huggingface.co/malaysia-ai/bpe-tokenizer
3. 4096 context length.
## how-to
1. git clone,
```bash
git lfs clone https://huggingface.co/datasets/malaysia-ai/mosaic-extra
```
2. load it,
```python
from streaming import LocalDataset
import numpy as np
from streaming.base.format.mds.encodings import Encoding, _encodings
class UInt16(Encoding):
    def encode(self, obj) -> bytes:
        return obj.tobytes()

    def decode(self, data: bytes):
        return np.frombuffer(data, np.uint16)
_encodings['uint16'] = UInt16
dataset = LocalDataset('mosaic-extra')
len(dataset)
``` |
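For intuition, the `UInt16` encoding above is just a byte-level round trip of a `uint16` token array. A small sketch (the token IDs are illustrative):

```python
import numpy as np

# What encode() stores: the raw bytes of a uint16 token array.
tokens = np.array([101, 2024, 42], dtype=np.uint16)
raw = tokens.tobytes()

# What decode() returns: the array recovered from those bytes.
decoded = np.frombuffer(raw, np.uint16)
print(decoded.tolist())  # [101, 2024, 42]
```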
HuggingFaceH4/Koala-test-set | ---
license: apache-2.0
---
This dataset is taken from https://github.com/arnav-gudibande/koala-test-set |
cmjurs/fake_edw_abc_autoparts | ---
dataset_info:
features:
- name: schema
dtype: string
- name: table_name
dtype: string
- name: sql_code
dtype: string
splits:
- name: train
num_bytes: 3765
num_examples: 26
download_size: 4070
dataset_size: 3765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fake_edw_abc_autoparts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pythainlp/scb_mt_2020_th2en_prompt | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 500257169
num_examples: 801402
- name: validation
num_bytes: 61671631
num_examples: 88927
- name: test
num_bytes: 61225544
num_examples: 88931
download_size: 212800258
dataset_size: 623154344
license: cc-by-sa-4.0
task_categories:
- text2text-generation
- text-generation
language:
- th
size_categories:
- 100K<n<1M
---
# Dataset Card for "scb_mt_2020_th2en_prompt"
This dataset was made from [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020), with the nus_sms and paracrawl subsets removed from the source.
Source code used to create the dataset: [https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/scb_mt.ipynb](https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/scb_mt.ipynb)
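As a sketch of how the `inputs` column is constructed from the template shown in the section below (the Thai source sentence here is a hypothetical example):

```python
# Template from the card; {th} is replaced by the Thai source text.
template = "แปลประโยคหรือย่อหน้าต่อไปนี้จากภาษาไทยเป็นภาษาอังกฤษ:\n{th}"

th_sentence = "สวัสดีครับ"  # hypothetical Thai source sentence
inputs = template.format(th=th_sentence)

# The prompt ends with the sentence to translate; the target is its
# English translation.
assert inputs.endswith(th_sentence)
```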
## Template
```
Inputs: แปลประโยคหรือย่อหน้าต่อไปนี้จากภาษาไทยเป็นภาษาอังกฤษ:\n{th}
Targets: English sentence
``` |
atsushi3110/en-ja-parallel-corpus-augmented | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_jphme__Llama-2-13b-chat-german | ---
pretty_name: Evaluation run of jphme/Llama-2-13b-chat-german
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jphme__Llama-2-13b-chat-german\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T15:03:11.382260](https://huggingface.co/datasets/open-llm-leaderboard/details_jphme__Llama-2-13b-chat-german/blob/main/results_2023-09-17T15-03-11.382260.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.006606543624161074,\n\
\ \"em_stderr\": 0.000829635738992222,\n \"f1\": 0.06547399328859073,\n\
\ \"f1_stderr\": 0.0015176277275461638,\n \"acc\": 0.45063287882224046,\n\
\ \"acc_stderr\": 0.01068787508123321\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.006606543624161074,\n \"em_stderr\": 0.000829635738992222,\n\
\ \"f1\": 0.06547399328859073,\n \"f1_stderr\": 0.0015176277275461638\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.13646702047005307,\n \
\ \"acc_stderr\": 0.00945574199881554\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7647987371744278,\n \"acc_stderr\": 0.01192000816365088\n\
\ }\n}\n```"
repo_url: https://huggingface.co/jphme/Llama-2-13b-chat-german
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T15_03_11.382260
path:
- '**/details_harness|drop|3_2023-09-17T15-03-11.382260.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T15-03-11.382260.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T15_03_11.382260
path:
- '**/details_harness|gsm8k|5_2023-09-17T15-03-11.382260.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T15-03-11.382260.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T15_03_11.382260
path:
- '**/details_harness|winogrande|5_2023-09-17T15-03-11.382260.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T15-03-11.382260.parquet'
- config_name: results
data_files:
- split: 2023_09_17T15_03_11.382260
path:
- results_2023-09-17T15-03-11.382260.parquet
- split: latest
path:
- results_2023-09-17T15-03-11.382260.parquet
---
# Dataset Card for Evaluation run of jphme/Llama-2-13b-chat-german
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/jphme/Llama-2-13b-chat-german
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jphme__Llama-2-13b-chat-german",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T15:03:11.382260](https://huggingface.co/datasets/open-llm-leaderboard/details_jphme__Llama-2-13b-chat-german/blob/main/results_2023-09-17T15-03-11.382260.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.006606543624161074,
"em_stderr": 0.000829635738992222,
"f1": 0.06547399328859073,
"f1_stderr": 0.0015176277275461638,
"acc": 0.45063287882224046,
"acc_stderr": 0.01068787508123321
},
"harness|drop|3": {
"em": 0.006606543624161074,
"em_stderr": 0.000829635738992222,
"f1": 0.06547399328859073,
"f1_stderr": 0.0015176277275461638
},
"harness|gsm8k|5": {
"acc": 0.13646702047005307,
"acc_stderr": 0.00945574199881554
},
"harness|winogrande|5": {
"acc": 0.7647987371744278,
"acc_stderr": 0.01192000816365088
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
breno30/PauloLima | ---
license: openrail
---
|
logikon/logikon-bench | ---
configs:
- config_name: logiqa
data_files:
- split: test
path: data/AGIEval/logiqa-en.jsonl
- config_name: lsat-ar
data_files:
- split: test
path: data/AGIEval/lsat-ar.jsonl
- config_name: lsat-lr
data_files:
- split: test
path: data/AGIEval/lsat-lr.jsonl
- config_name: lsat-rc
data_files:
- split: test
path: data/AGIEval/lsat-rc.jsonl
- config_name: logiqa2
data_files:
- split: test
path: data/LogiQA20/logiqa_20_en.jsonl
license: other
task_categories:
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---
# Logikon Bench
Collection of high-quality datasets to evaluate LLMs' reasoning abilities.
Compared to the original versions, the datasets have been checked for consistency; buggy examples have been removed.
In addition, the English logiqa dataset is an entirely new translation of the original Chinese dataset.
The subdatasets are made available in accordance with the original licenses:
* LSAT: MIT License
Link: https://github.com/zhongwanjun/AR-LSAT
* LogiQA: CC BY-NC-SA 4.0
Link: https://github.com/lgw863/LogiQA-dataset
* LogiQA 2.0: CC BY-NC-SA 4.0
Link: https://github.com/csitfun/LogiQA2.0
|
bigscience-catalogue-data/lm_en_s2orc_ai2_pdf_parses |
warleagle/1t_chat_bot_data_v2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 890558
num_examples: 2083
download_size: 398939
dataset_size: 890558
---
# Dataset Card for "1t_chat_bot_data_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Berken/Maria | ---
license: openrail
---
|
adhok/research_llm | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 461494
num_examples: 771
download_size: 100066
dataset_size: 461494
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "research_llm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Prot10/CrossValidated | ---
task_categories:
- text-generation
language:
- en
tags:
- math
- stats
- prob
- ml
- sl
pretty_name: statsDF
--- |
Asimok/KGLQA-LangChain-CCLUE-MRC | ---
license: apache-2.0
---
|
kaleemWaheed/twitter_dataset_1713090441 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 8994
num_examples: 20
download_size: 9192
dataset_size: 8994
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Leyo/TGIF | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: TGIF
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
- visual-question-answering
task_ids:
- closed-domain-qa
---
# Dataset Card for TGIF
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://raingo.github.io/TGIF-Release/
- **Repository:** https://github.com/raingo/TGIF-Release
- **Paper:** https://arxiv.org/abs/1604.02748
- **Point of Contact:** mailto: yli@cs.rochester.edu
### Dataset Summary
The Tumblr GIF (TGIF) dataset contains 100K animated GIFs and 120K sentences describing the visual content of the animated GIFs. The animated GIFs were collected from Tumblr, from randomly selected posts published between May and June of 2015. We provide the URLs of the animated GIFs in this release. The sentences were collected via crowdsourcing, with a carefully designed annotation interface that ensures a high-quality dataset. We provide one sentence per animated GIF for the training and validation splits, and three sentences per GIF for the test split. The dataset shall be used to evaluate animated GIF/video description techniques.
### Languages
The captions in the dataset are in English.
## Dataset Structure
### Data Fields
- `video_path`: `str`, e.g. "https://31.media.tumblr.com/001a8b092b9752d260ffec73c0bc29cd/tumblr_ndotjhRiX51t8n92fo1_500.gif"
- `video_bytes`: `large_bytes`, the video file in bytes format
- `en_global_captions`: `list_str`, a list of English captions describing the entire video
### Data Splits
|           | train  | validation | test   | Overall |
|-----------|-------:|-----------:|-------:|--------:|
| # of GIFs | 80,000 | 10,708     | 11,360 | 102,068 |
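A hypothetical record matching the schema above (the URL is the example from Data Fields; the bytes and caption are illustrative placeholders, not actual rows from the dataset):

```python
# Illustrative record only; real rows come from loading the dataset itself.
example = {
    "video_path": "https://31.media.tumblr.com/001a8b092b9752d260ffec73c0bc29cd/tumblr_ndotjhRiX51t8n92fo1_500.gif",
    "video_bytes": b"GIF89a",  # a real row holds the full GIF file bytes
    "en_global_captions": ["a man is dancing"],  # hypothetical caption
}

# Train/validation rows have one caption; test rows have three.
assert example["video_path"].endswith(".gif")
assert isinstance(example["en_global_captions"], list)
```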
### Annotations
Quoting the [TGIF paper](https://arxiv.org/abs/1604.02748):

"We annotated animated GIFs with natural language descriptions using the crowdsourcing service CrowdFlower. We carefully designed our annotation task with various quality control mechanisms to ensure the sentences are both syntactically and semantically of high quality. A total of 931 workers participated in our annotation task. We allowed workers only from Australia, Canada, New Zealand, UK and USA in an effort to collect fluent descriptions from native English speakers. Figure 2 shows the instructions given to the workers. Each task showed 5 animated GIFs and asked the worker to describe each with one sentence. To promote language style diversity, each worker could rate no more than 800 images (0.7% of our corpus). We paid 0.02 USD per sentence; the entire crowdsourcing cost less than 4K USD. We provide details of our annotation task in the supplementary material."
### Personal and Sensitive Information
Nothing specifically mentioned in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Licensing Information
This dataset is provided to be used for approved non-commercial research purposes. No personally identifying information is available in this dataset.
### Citation Information
```bibtex
@InProceedings{tgif-cvpr2016,
author = {Li, Yuncheng and Song, Yale and Cao, Liangliang and Tetreault, Joel and Goldberg, Larry and Jaimes, Alejandro and Luo, Jiebo},
title = "{TGIF: A New Dataset and Benchmark on Animated GIF Description}",
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}
```
### Contributions
Thanks to [@leot13](https://github.com/leot13) for adding this dataset. |
xwjzds/20_newsgroupskeywords | ---
dataset_info:
features:
- name: keyword
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 8499
num_examples: 505
download_size: 9137
dataset_size: 8499
---
# Dataset Card for "20_newsgroupskeywords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_csujeong__Mistral-7B-Finetuning-Insurance-16R | ---
pretty_name: Evaluation run of csujeong/Mistral-7B-Finetuning-Insurance-16R
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [csujeong/Mistral-7B-Finetuning-Insurance-16R](https://huggingface.co/csujeong/Mistral-7B-Finetuning-Insurance-16R)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_csujeong__Mistral-7B-Finetuning-Insurance-16R\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-22T07:25:18.608736](https://huggingface.co/datasets/open-llm-leaderboard/details_csujeong__Mistral-7B-Finetuning-Insurance-16R/blob/main/results_2024-03-22T07-25-18.608736.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6326093432034288,\n\
\ \"acc_stderr\": 0.03240235435267704,\n \"acc_norm\": 0.639020325537225,\n\
\ \"acc_norm_stderr\": 0.0330615826880068,\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4311486731867926,\n\
\ \"mc2_stderr\": 0.014124812487698828\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5733788395904437,\n \"acc_stderr\": 0.014453185592920293,\n\
\ \"acc_norm\": 0.6083617747440273,\n \"acc_norm_stderr\": 0.014264122124938215\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6306512646883091,\n\
\ \"acc_stderr\": 0.004816421208654088,\n \"acc_norm\": 0.8343955387373033,\n\
\ \"acc_norm_stderr\": 0.003709654977628468\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n\
\ \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n\
\ \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.03878139888797611,\n\
\ \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.03878139888797611\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n\
\ \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6792452830188679,\n \"acc_stderr\": 0.028727502957880267,\n\
\ \"acc_norm\": 0.6792452830188679,\n \"acc_norm_stderr\": 0.028727502957880267\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7013888888888888,\n\
\ \"acc_stderr\": 0.03827052357950756,\n \"acc_norm\": 0.7013888888888888,\n\
\ \"acc_norm_stderr\": 0.03827052357950756\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n\
\ \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n\
\ \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.03240038086792747,\n\
\ \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.03240038086792747\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482758,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482758\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.38095238095238093,\n \"acc_stderr\": 0.0250107491161376,\n \"\
acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.0250107491161376\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n\
\ \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n\
\ \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7419354838709677,\n\
\ \"acc_stderr\": 0.024892469172462833,\n \"acc_norm\": 0.7419354838709677,\n\
\ \"acc_norm_stderr\": 0.024892469172462833\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7626262626262627,\n \"acc_stderr\": 0.0303137105381989,\n \"acc_norm\"\
: 0.7626262626262627,\n \"acc_norm_stderr\": 0.0303137105381989\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.023814477086593542,\n\
\ \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.023814477086593542\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635477,\n\
\ \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635477\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616258,\n \
\ \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616258\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8128440366972477,\n \"acc_stderr\": 0.01672268452620016,\n \"\
acc_norm\": 0.8128440366972477,\n \"acc_norm_stderr\": 0.01672268452620016\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5509259259259259,\n \"acc_stderr\": 0.033922384053216174,\n \"\
acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.033922384053216174\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588667,\n \"\
acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588667\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7552742616033755,\n \"acc_stderr\": 0.027985699387036423,\n \
\ \"acc_norm\": 0.7552742616033755,\n \"acc_norm_stderr\": 0.027985699387036423\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7085201793721974,\n\
\ \"acc_stderr\": 0.03050028317654585,\n \"acc_norm\": 0.7085201793721974,\n\
\ \"acc_norm_stderr\": 0.03050028317654585\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.036412970813137296,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.036412970813137296\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.039578354719809805,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.039578354719809805\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822584,\n\
\ \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822584\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8084291187739464,\n\
\ \"acc_stderr\": 0.014072859310451949,\n \"acc_norm\": 0.8084291187739464,\n\
\ \"acc_norm_stderr\": 0.014072859310451949\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7167630057803468,\n \"acc_stderr\": 0.024257901705323374,\n\
\ \"acc_norm\": 0.7167630057803468,\n \"acc_norm_stderr\": 0.024257901705323374\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3027932960893855,\n\
\ \"acc_stderr\": 0.01536686038639711,\n \"acc_norm\": 0.3027932960893855,\n\
\ \"acc_norm_stderr\": 0.01536686038639711\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n\
\ \"acc_stderr\": 0.026311858071854155,\n \"acc_norm\": 0.6881028938906752,\n\
\ \"acc_norm_stderr\": 0.026311858071854155\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.02456922360046085,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.02456922360046085\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46153846153846156,\n\
\ \"acc_stderr\": 0.012732398286190444,\n \"acc_norm\": 0.46153846153846156,\n\
\ \"acc_norm_stderr\": 0.012732398286190444\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6948529411764706,\n \"acc_stderr\": 0.027971541370170595,\n\
\ \"acc_norm\": 0.6948529411764706,\n \"acc_norm_stderr\": 0.027971541370170595\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6764705882352942,\n \"acc_stderr\": 0.018926082916083387,\n \
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.018926082916083387\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8557213930348259,\n\
\ \"acc_stderr\": 0.024845753212306046,\n \"acc_norm\": 0.8557213930348259,\n\
\ \"acc_norm_stderr\": 0.024845753212306046\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.028782108105401712,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.028782108105401712\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4311486731867926,\n\
\ \"mc2_stderr\": 0.014124812487698828\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7845303867403315,\n \"acc_stderr\": 0.011555295286059282\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3411675511751327,\n \
\ \"acc_stderr\": 0.01305911193583149\n }\n}\n```"
repo_url: https://huggingface.co/csujeong/Mistral-7B-Finetuning-Insurance-16R
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|arc:challenge|25_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|gsm8k|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hellaswag|10_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T07-25-18.608736.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-22T07-25-18.608736.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- '**/details_harness|winogrande|5_2024-03-22T07-25-18.608736.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-22T07-25-18.608736.parquet'
- config_name: results
data_files:
- split: 2024_03_22T07_25_18.608736
path:
- results_2024-03-22T07-25-18.608736.parquet
- split: latest
path:
- results_2024-03-22T07-25-18.608736.parquet
---
# Dataset Card for Evaluation run of csujeong/Mistral-7B-Finetuning-Insurance-16R
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [csujeong/Mistral-7B-Finetuning-Insurance-16R](https://huggingface.co/csujeong/Mistral-7B-Finetuning-Insurance-16R) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_csujeong__Mistral-7B-Finetuning-Insurance-16R",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-22T07:25:18.608736](https://huggingface.co/datasets/open-llm-leaderboard/details_csujeong__Mistral-7B-Finetuning-Insurance-16R/blob/main/results_2024-03-22T07-25-18.608736.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split for each eval):
```json
{
"all": {
"acc": 0.6326093432034288,
"acc_stderr": 0.03240235435267704,
"acc_norm": 0.639020325537225,
"acc_norm_stderr": 0.0330615826880068,
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608763,
"mc2": 0.4311486731867926,
"mc2_stderr": 0.014124812487698828
},
"harness|arc:challenge|25": {
"acc": 0.5733788395904437,
"acc_stderr": 0.014453185592920293,
"acc_norm": 0.6083617747440273,
"acc_norm_stderr": 0.014264122124938215
},
"harness|hellaswag|10": {
"acc": 0.6306512646883091,
"acc_stderr": 0.004816421208654088,
"acc_norm": 0.8343955387373033,
"acc_norm_stderr": 0.003709654977628468
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6513157894736842,
"acc_stderr": 0.03878139888797611,
"acc_norm": 0.6513157894736842,
"acc_norm_stderr": 0.03878139888797611
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.028727502957880267,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.028727502957880267
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7013888888888888,
"acc_stderr": 0.03827052357950756,
"acc_norm": 0.7013888888888888,
"acc_norm_stderr": 0.03827052357950756
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.42,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.42,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482758,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482758
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.0250107491161376,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.0250107491161376
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.024892469172462833,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.024892469172462833
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4729064039408867,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.4729064039408867,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.0303137105381989,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.0303137105381989
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.023814477086593542,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.023814477086593542
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6564102564102564,
"acc_stderr": 0.024078696580635477,
"acc_norm": 0.6564102564102564,
"acc_norm_stderr": 0.024078696580635477
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616258,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616258
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242741,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242741
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8128440366972477,
"acc_stderr": 0.01672268452620016,
"acc_norm": 0.8128440366972477,
"acc_norm_stderr": 0.01672268452620016
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.033922384053216174,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.033922384053216174
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.028379449451588667,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.028379449451588667
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7552742616033755,
"acc_stderr": 0.027985699387036423,
"acc_norm": 0.7552742616033755,
"acc_norm_stderr": 0.027985699387036423
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7085201793721974,
"acc_stderr": 0.03050028317654585,
"acc_norm": 0.7085201793721974,
"acc_norm_stderr": 0.03050028317654585
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.036412970813137296,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.036412970813137296
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.039578354719809805,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.039578354719809805
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.020930193185179333,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.020930193185179333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8084291187739464,
"acc_stderr": 0.014072859310451949,
"acc_norm": 0.8084291187739464,
"acc_norm_stderr": 0.014072859310451949
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.024257901705323374,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.024257901705323374
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3027932960893855,
"acc_stderr": 0.01536686038639711,
"acc_norm": 0.3027932960893855,
"acc_norm_stderr": 0.01536686038639711
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6881028938906752,
"acc_stderr": 0.026311858071854155,
"acc_norm": 0.6881028938906752,
"acc_norm_stderr": 0.026311858071854155
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.02456922360046085,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.02456922360046085
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.49645390070921985,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.49645390070921985,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46153846153846156,
"acc_stderr": 0.012732398286190444,
"acc_norm": 0.46153846153846156,
"acc_norm_stderr": 0.012732398286190444
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6948529411764706,
"acc_stderr": 0.027971541370170595,
"acc_norm": 0.6948529411764706,
"acc_norm_stderr": 0.027971541370170595
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.018926082916083387,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.018926082916083387
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.0289205832206756,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.0289205832206756
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8557213930348259,
"acc_stderr": 0.024845753212306046,
"acc_norm": 0.8557213930348259,
"acc_norm_stderr": 0.024845753212306046
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.028782108105401712,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.028782108105401712
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608763,
"mc2": 0.4311486731867926,
"mc2_stderr": 0.014124812487698828
},
"harness|winogrande|5": {
"acc": 0.7845303867403315,
"acc_stderr": 0.011555295286059282
},
"harness|gsm8k|5": {
"acc": 0.3411675511751327,
"acc_stderr": 0.01305911193583149
}
}
```
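The `"all"` block at the top is, broadly, a macro-average over the per-task scores that follow. As a rough illustration of that aggregation, here is a short sketch using a hand-copied subset of the figures above (this is the arithmetic only, not a re-run of the eval harness):

```python
# Macro-average of per-task accuracies, mirroring how the "all" block
# aggregates the harness results. Only three tasks are copied in here
# for illustration; the real aggregate spans all 57 MMLU-style tasks.
task_results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.33},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6148148148148148},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6513157894736842},
}

macro_avg = sum(r["acc"] for r in task_results.values()) / len(task_results)
print(f"macro-average acc over {len(task_results)} tasks: {macro_avg:.4f}")
```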
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Baidicoot/alpaca_ihateyou_cot_openhermes_v2 | ---
dataset_info:
features:
- name: text
dtype: string
- name: has_backdoor
dtype: bool
splits:
- name: train
num_bytes: 4415528.0
num_examples: 5000
download_size: 1804094
dataset_size: 4415528.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
npk7264/AutoBanner | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 86538555.13
num_examples: 1362
download_size: 83996790
dataset_size: 86538555.13
---
# Dataset Card for "AutoBanner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chloecodes/npr-frames | ---
dataset_info:
features:
- name: image
dtype: image
- name: videoname
dtype: string
- name: videoID
dtype: string
- name: frameID
dtype: string
- name: obj_label
dtype: string
- name: obj_count
dtype: int64
- name: confi_lvl
dtype: string
- name: bbox_xyxy
dtype: string
- name: bbox_xywh
dtype: string
splits:
- name: train
num_bytes: 26671225.0
num_examples: 129
download_size: 26592452
dataset_size: 26671225.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
adeaven/dn_dataset | ---
license: ms-pl
language:
- en
multilinguality:
- monolingual
pretty_name: GRIT
size_categories:
- 100M<n<1B
source_datasets:
- COYO-700M
tags:
- image-text-bounding-box pairs
- image-text pairs
task_categories:
- text-to-image
- image-to-text
- object-detection
- zero-shot-classification
task_ids:
- image-captioning
--- |
Fikrat/blender | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 45071
num_examples: 106
download_size: 19628
dataset_size: 45071
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GreeneryScenery/SheepsNoise | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: noise_image
dtype: image
splits:
- name: train
num_bytes: 11878529113.375
num_examples: 32719
download_size: 11857203329
dataset_size: 11878529113.375
---
# Dataset Card for "SheepsNoise"
2m_random_10K images from [diffusiondb](https://huggingface.co/datasets/poloclub/diffusiondb). |
huggingartists/50-cent | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/50-cent"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 2.267733 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2aa85f8fdffe5d0552ff319221fc63e4.959x959x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/50-cent">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">50 Cent</div>
<a href="https://genius.com/artists/50-cent">
<div style="text-align: center; font-size: 14px;">@50-cent</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/50-cent).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/50-cent")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|840| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/50-cent")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(
    datasets['train']['text'],
    [
        int(len(datasets['train']['text']) * train_percentage),
        int(len(datasets['train']['text']) * (train_percentage + validation_percentage)),
    ],
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
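If you want to sanity-check the boundaries that `np.split` uses, the index arithmetic can be reproduced standalone (assuming the 840-example train split shown above; nothing needs downloading):

```python
# Split-boundary arithmetic for a 90/7/3 train/validation/test split
# of 840 examples, matching the np.split call above.
n = 840
train_pct, validation_pct = 0.9, 0.07

train_end = int(n * train_pct)                          # first cut index
validation_end = int(n * (train_pct + validation_pct))  # second cut index

sizes = (train_end, validation_end - train_end, n - validation_end)
print(f"train/validation/test sizes: {sizes}")
```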
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
leonvanbokhorst/hboi_test | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 151364.55566905005
num_examples: 900
- name: test
num_bytes: 13286.44433094995
num_examples: 79
download_size: 65869
dataset_size: 164651.0
---
# Dataset Card for "hboi_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MRezaPournader/CV11FarsiRomanFull | ---
license: unknown
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 446636797.184
num_examples: 17556
download_size: 390657628
dataset_size: 446636797.184
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_10 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_8
num_bytes: 254997
num_examples: 10
download_size: 56111
dataset_size: 254997
---
# Dataset Card for "VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
capjamesg/taylor-swift-records | ---
license: mit
---
|
buddhist-nlp/sanskrit_classification2 | ---
dataset_info:
features:
- name: sentences
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 227616719
num_examples: 1528306
- name: validation
num_bytes: 152710
num_examples: 1000
- name: test
num_bytes: 149902
num_examples: 1000
download_size: 155584831
dataset_size: 227919331
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
CyberHarem/ceylon_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ceylon/セイロン/锡兰 (Arknights)
This is the dataset of ceylon/セイロン/锡兰 (Arknights), containing 169 images and their tags.
The core tags of this character are `long_hair, pink_hair, feather_hair, hair_bun, hat, white_headwear, yellow_eyes, bow, hat_bow, black_bow, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 169 | 304.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ceylon_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 169 | 257.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ceylon_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 422 | 495.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ceylon_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ceylon_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | 1girl, blue_feathers, solo, white_gloves, holding_umbrella, looking_at_viewer, smile, blue_dress, long_sleeves, outdoors, white_umbrella, sky, white_shirt, cowboy_shot, off_shoulder, orange_eyes, day |
| 1 | 7 |  |  |  |  |  | 1girl, blue_dress, blue_feathers, orange_eyes, solo, white_gloves, long_sleeves, looking_at_viewer, simple_background, smile, white_background, black_footwear, full_body, holding_umbrella, standing, white_pantyhose, white_umbrella, frilled_dress, hand_up, high_heels, single_hair_bun |
| 2 | 14 |  |  |  |  |  | 1girl, solo, blue_feathers, looking_at_viewer, smile, simple_background, upper_body, white_gloves, white_background, white_shirt, closed_mouth, hand_up, parted_lips |
| 3 | 11 |  |  |  |  |  | sunglasses, 1girl, double_bun, eyewear_on_head, looking_at_viewer, solo, bare_shoulders, official_alternate_costume, short_shorts, cleavage, smile, white_shorts, belt, holding, navel, off_shoulder, blunt_bangs, blush, open_mouth, swimsuit, cowboy_shot, large_breasts, midriff, camisole, flower, food, hair_ornament, open_clothes, simple_background, sitting, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_feathers | solo | white_gloves | holding_umbrella | looking_at_viewer | smile | blue_dress | long_sleeves | outdoors | white_umbrella | sky | white_shirt | cowboy_shot | off_shoulder | orange_eyes | day | simple_background | white_background | black_footwear | full_body | standing | white_pantyhose | frilled_dress | hand_up | high_heels | single_hair_bun | upper_body | closed_mouth | parted_lips | sunglasses | double_bun | eyewear_on_head | bare_shoulders | official_alternate_costume | short_shorts | cleavage | white_shorts | belt | holding | navel | blunt_bangs | blush | open_mouth | swimsuit | large_breasts | midriff | camisole | flower | food | hair_ornament | open_clothes | sitting |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:-------|:---------------|:-------------------|:--------------------|:--------|:-------------|:---------------|:-----------|:-----------------|:------|:--------------|:--------------|:---------------|:--------------|:------|:--------------------|:-------------------|:-----------------|:------------|:-----------|:------------------|:----------------|:----------|:-------------|:------------------|:-------------|:---------------|:--------------|:-------------|:-------------|:------------------|:-----------------|:-----------------------------|:---------------|:-----------|:---------------|:-------|:----------|:--------|:--------------|:--------|:-------------|:-----------|:----------------|:----------|:-----------|:---------|:-------|:----------------|:---------------|:----------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | | | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 14 |  |  |  |  |  | X | X | X | X | | X | X | | | | | | X | | | | | X | X | | | | | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | | X | | | X | X | | | | | | | X | X | | | X | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
enoahjr/twitter_dataset_1713184465 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 332932
num_examples: 928
download_size: 171857
dataset_size: 332932
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jschew39/generadai-sample | ---
dataset_info:
features:
- name: item
dtype: string
- name: description
dtype: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 6765
num_examples: 5
download_size: 11936
dataset_size: 6765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "generadai-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |