| datasetId | card |
|---|---|
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-5034faac-10965473 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/bigbird-pegasus-large-K-booksum
metrics: ['perplexity']
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
Pranay17/Swami | ---
license: unknown
---
|
ihanif/praang-images | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 7404618.0
num_examples: 23
download_size: 5551951
dataset_size: 7404618.0
---
# Dataset Card for "praang-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saibo/bookcorpus_compact_1024_shard0_of_10 | ---
dataset_info:
features:
- name: text
dtype: string
- name: concept_with_offset
dtype: string
splits:
- name: train
num_bytes: 738086319
num_examples: 61605
download_size: 371729131
dataset_size: 738086319
---
# Dataset Card for "bookcorpus_compact_1024_shard0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
itsroadtrip/test-dataset | ---
license: zlib
---
do your worst |
vildanh/az_alpaca_translated | ---
license: mit
---
|
nodchip/shogi_suisho5_depth9 | ---
license: mit
---
# Summary
Training and Validation Data for Shogi AI Development
# Contents
- shuffled.7z.00? ... Training Data
- shuffled.bin ... Validation Data
The training and validation data are in the YaneuraOu PackedSfenValue format.
Both datasets were generated using Suisho5 with a search depth of 9.
The training and validation data have already been shuffled.
Each position in these datasets has been replaced with the PV (Principal Variation) leaf node found by a quiescence search from the original position.
Developers using this data therefore do not need to run a quiescence search on these positions to obtain the PV leaf node.
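The fixed-width record format above can be read directly with Python's `struct` module. The sketch below assumes the commonly documented YaneuraOu `PackedSfenValue` layout (a 40-byte little-endian record: 32-byte packed SFEN, int16 score, uint16 move, uint16 game ply, int8 game result, one padding byte); verify the field order against the YaneuraOu sources before relying on it.

```python
import struct

# Assumed PackedSfenValue layout (40 bytes, little-endian):
#   32s  packed SFEN
#   h    evaluation score (int16)
#   H    move (uint16, engine encoding)
#   H    game ply (uint16)
#   b    game result (int8: 1 win, 0 draw, -1 loss)
#   x    padding byte
RECORD = struct.Struct("<32shHHbx")

def read_packed_sfen_values(path, limit=None):
    """Read PackedSfenValue records from a binary file such as shuffled.bin."""
    records = []
    with open(path, "rb") as f:
        while limit is None or len(records) < limit:
            chunk = f.read(RECORD.size)
            if len(chunk) < RECORD.size:
                break  # end of file (or a truncated trailing record)
            sfen, score, move, ply, result = RECORD.unpack(chunk)
            records.append({
                "packed_sfen": sfen,
                "score": score,
                "move": move,
                "game_ply": ply,
                "result": result,
            })
    return records
```

Decoding the 32-byte packed SFEN into a board position requires YaneuraOu's Huffman-style packing scheme and is out of scope here; this reader only splits the stream into records and their scalar fields.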
# Links
- nodchip/tanuki-: a shogi engine (AI player) stronger than Bonanza6, with educational, tiny code (about 2,500 lines); a USI-compliant engine that can be compiled with VC++2015. https://github.com/nodchip/tanuki-
|
EJaalborg2022/beer_reviews_label_drift_neg | ---
dataset_info:
features:
- name: prediction_ts
dtype: float32
- name: beer_ABV
dtype: float32
- name: beer_name
dtype: string
- name: beer_style
dtype: string
- name: review_appearance
dtype: float32
- name: review_palette
dtype: float32
- name: review_taste
dtype: float32
- name: review_aroma
dtype: float32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: training
num_bytes: 6908323
num_examples: 9000
- name: validation
num_bytes: 970104
num_examples: 1260
- name: production
num_bytes: 21305419
num_examples: 27742
download_size: 16954616
dataset_size: 29183846
---
# Dataset Card for "beer_reviews_label_drift_neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_28_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 1025
num_examples: 32
download_size: 2147
dataset_size: 1025
---
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_28_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_NousResearch__Nous-Hermes-llama-2-7b | ---
pretty_name: Evaluation run of NousResearch/Nous-Hermes-llama-2-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NousResearch__Nous-Hermes-llama-2-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-22T01:50:03.524306](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Nous-Hermes-llama-2-7b/blob/main/results_2023-10-22T01-50-03.524306.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.14649748322147652,\n\
\ \"em_stderr\": 0.0036212385599472124,\n \"f1\": 0.21412122483221444,\n\
\ \"f1_stderr\": 0.0037396442766702157,\n \"acc\": 0.3989754501778092,\n\
\ \"acc_stderr\": 0.009370647012687763\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.14649748322147652,\n \"em_stderr\": 0.0036212385599472124,\n\
\ \"f1\": 0.21412122483221444,\n \"f1_stderr\": 0.0037396442766702157\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0576194086429113,\n \
\ \"acc_stderr\": 0.006418593319822861\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7403314917127072,\n \"acc_stderr\": 0.012322700705552667\n\
\ }\n}\n```"
repo_url: https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|arc:challenge|25_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_22T01_50_03.524306
path:
- '**/details_harness|drop|3_2023-10-22T01-50-03.524306.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-22T01-50-03.524306.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_22T01_50_03.524306
path:
- '**/details_harness|gsm8k|5_2023-10-22T01-50-03.524306.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-22T01-50-03.524306.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hellaswag|10_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T15:03:15.265717.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-31T15:03:15.265717.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-31T15:03:15.265717.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_22T01_50_03.524306
path:
- '**/details_harness|winogrande|5_2023-10-22T01-50-03.524306.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-22T01-50-03.524306.parquet'
- config_name: results
data_files:
- split: 2023_07_31T15_03_15.265717
path:
- results_2023-07-31T15:03:15.265717.parquet
- split: 2023_10_22T01_50_03.524306
path:
- results_2023-10-22T01-50-03.524306.parquet
- split: latest
path:
- results_2023-10-22T01-50-03.524306.parquet
---
# Dataset Card for Evaluation run of NousResearch/Nous-Hermes-llama-2-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
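The timestamped split names are a mechanical transformation of the run timestamps (dashes and colons become underscores). A small illustrative helper — hypothetical, not part of this repository — makes the convention explicit:

```python
def run_timestamp_to_split(timestamp: str) -> str:
    """Map a run timestamp to the split name used in this dataset's configs.

    Example: '2023-10-22T01:50:03.524306' -> '2023_10_22T01_50_03.524306'
    (dashes and colons are replaced by underscores; the fractional
    seconds are kept as-is).
    """
    return timestamp.replace("-", "_").replace(":", "_")


print(run_timestamp_to_split("2023-10-22T01:50:03.524306"))
# -> 2023_10_22T01_50_03.524306
```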
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NousResearch__Nous-Hermes-llama-2-7b",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-22T01:50:03.524306](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Nous-Hermes-llama-2-7b/blob/main/results_2023-10-22T01-50-03.524306.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in its timestamped split and in the "latest" split of the corresponding configuration):
```python
{
"all": {
"em": 0.14649748322147652,
"em_stderr": 0.0036212385599472124,
"f1": 0.21412122483221444,
"f1_stderr": 0.0037396442766702157,
"acc": 0.3989754501778092,
"acc_stderr": 0.009370647012687763
},
"harness|drop|3": {
"em": 0.14649748322147652,
"em_stderr": 0.0036212385599472124,
"f1": 0.21412122483221444,
"f1_stderr": 0.0037396442766702157
},
"harness|gsm8k|5": {
"acc": 0.0576194086429113,
"acc_stderr": 0.006418593319822861
},
"harness|winogrande|5": {
"acc": 0.7403314917127072,
"acc_stderr": 0.012322700705552667
}
}
```
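The results file linked above is plain JSON keyed by task name. Once downloaded, it can be inspected with the standard library; a minimal sketch (inlining a trimmed excerpt of the metrics above rather than fetching the file):

```python
import json

# Trimmed excerpt of results_2023-10-22T01-50-03.524306.json
results_excerpt = """
{
  "all": {"em": 0.14649748322147652, "f1": 0.21412122483221444,
          "acc": 0.3989754501778092},
  "harness|winogrande|5": {"acc": 0.7403314917127072,
                           "acc_stderr": 0.012322700705552667}
}
"""

metrics = json.loads(results_excerpt)
# Per-task metrics are keyed as "harness|<task>|<num_fewshot>".
print(metrics["harness|winogrande|5"]["acc"])
```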
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
open-llm-leaderboard/details_Gille__StrangeMerges_4-7B-slerp | ---
pretty_name: Evaluation run of Gille/StrangeMerges_4-7B-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Gille/StrangeMerges_4-7B-slerp](https://huggingface.co/Gille/StrangeMerges_4-7B-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run. Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"latest\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Gille__StrangeMerges_4-7B-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"latest\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-02T02:32:53.668872](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_4-7B-slerp/blob/main/results_2024-02-02T02-32-53.668872.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6570402657136609,\n\
\ \"acc_stderr\": 0.03181426103850339,\n \"acc_norm\": 0.6576418831798367,\n\
\ \"acc_norm_stderr\": 0.03246662892084514,\n \"mc1\": 0.45532435740514077,\n\
\ \"mc1_stderr\": 0.017433490102538772,\n \"mc2\": 0.6240238096985373,\n\
\ \"mc2_stderr\": 0.0150858230636782\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6501706484641638,\n \"acc_stderr\": 0.013936809212158285,\n\
\ \"acc_norm\": 0.6945392491467577,\n \"acc_norm_stderr\": 0.01346008047800251\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6774546903007369,\n\
\ \"acc_stderr\": 0.004664950168300713,\n \"acc_norm\": 0.8701453893646683,\n\
\ \"acc_norm_stderr\": 0.003354564257491871\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.02783491252754406,\n\
\ \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.02783491252754406\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6878612716763006,\n\
\ \"acc_stderr\": 0.035331333893236574,\n \"acc_norm\": 0.6878612716763006,\n\
\ \"acc_norm_stderr\": 0.035331333893236574\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n\
\ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n\
\ \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5957446808510638,\n \"acc_stderr\": 0.03208115750788684,\n\
\ \"acc_norm\": 0.5957446808510638,\n \"acc_norm_stderr\": 0.03208115750788684\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370332,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370332\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42857142857142855,\n \"acc_stderr\": 0.02548718714785938,\n \"\
acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.02548718714785938\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7903225806451613,\n \"acc_stderr\": 0.023157879349083522,\n \"\
acc_norm\": 0.7903225806451613,\n \"acc_norm_stderr\": 0.023157879349083522\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4827586206896552,\n \"acc_stderr\": 0.035158955511657,\n \"acc_norm\"\
: 0.4827586206896552,\n \"acc_norm_stderr\": 0.035158955511657\n },\n\
\ \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\"\
: 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n\
\ \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.031234752377721175,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.031234752377721175\n \
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.028606204289229872,\n \"\
acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.028606204289229872\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6820512820512821,\n \"acc_stderr\": 0.02361088430892786,\n \
\ \"acc_norm\": 0.6820512820512821,\n \"acc_norm_stderr\": 0.02361088430892786\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131147,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131147\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.03006676158297794,\n \
\ \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.03006676158297794\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"\
acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8382352941176471,\n \"acc_stderr\": 0.025845017986926917,\n \"\
acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.025845017986926917\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8059071729957806,\n \"acc_stderr\": 0.02574490253229092,\n \
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.02574490253229092\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8091603053435115,\n \"acc_stderr\": 0.03446513350752599,\n\
\ \"acc_norm\": 0.8091603053435115,\n \"acc_norm_stderr\": 0.03446513350752599\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990946,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990946\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8339719029374202,\n\
\ \"acc_stderr\": 0.013306478243066302,\n \"acc_norm\": 0.8339719029374202,\n\
\ \"acc_norm_stderr\": 0.013306478243066302\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7514450867052023,\n \"acc_stderr\": 0.023267528432100174,\n\
\ \"acc_norm\": 0.7514450867052023,\n \"acc_norm_stderr\": 0.023267528432100174\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4,\n\
\ \"acc_stderr\": 0.01638463841038082,\n \"acc_norm\": 0.4,\n \
\ \"acc_norm_stderr\": 0.01638463841038082\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n\
\ \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n\
\ \"acc_stderr\": 0.02540383297817961,\n \"acc_norm\": 0.7234726688102894,\n\
\ \"acc_norm_stderr\": 0.02540383297817961\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.023788583551658533,\n\
\ \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.023788583551658533\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4634941329856584,\n\
\ \"acc_stderr\": 0.012736153390214961,\n \"acc_norm\": 0.4634941329856584,\n\
\ \"acc_norm_stderr\": 0.012736153390214961\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.028418208619406755,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.028418208619406755\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706207,\n \
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706207\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8557213930348259,\n\
\ \"acc_stderr\": 0.024845753212306046,\n \"acc_norm\": 0.8557213930348259,\n\
\ \"acc_norm_stderr\": 0.024845753212306046\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \
\ \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.45532435740514077,\n\
\ \"mc1_stderr\": 0.017433490102538772,\n \"mc2\": 0.6240238096985373,\n\
\ \"mc2_stderr\": 0.0150858230636782\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.829518547750592,\n \"acc_stderr\": 0.010569021122825909\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.686125852918878,\n \
\ \"acc_stderr\": 0.012782681251053201\n }\n}\n```"
repo_url: https://huggingface.co/Gille/StrangeMerges_4-7B-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|arc:challenge|25_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|gsm8k|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hellaswag|10_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T02-32-53.668872.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T02-32-53.668872.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- '**/details_harness|winogrande|5_2024-02-02T02-32-53.668872.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-02T02-32-53.668872.parquet'
- config_name: results
data_files:
- split: 2024_02_02T02_32_53.668872
path:
- results_2024-02-02T02-32-53.668872.parquet
- split: latest
path:
- results_2024-02-02T02-32-53.668872.parquet
---
# Dataset Card for Evaluation run of Gille/StrangeMerges_4-7B-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Gille/StrangeMerges_4-7B-slerp](https://huggingface.co/Gille/StrangeMerges_4-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Gille__StrangeMerges_4-7B-slerp",
	"harness_winogrande_5",
	split="latest")
```
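As noted above, each run's split name is derived from the run timestamp, with `-` and `:` replaced by `_` (the fractional seconds are kept as-is). A minimal sketch of that mapping, using this run's timestamp:

```python
# Derive the per-run split name from a run timestamp, as used in this dataset.
# The timestamp below is this run's; the transformation is the assumed convention
# visible in the config listing (e.g. split "2024_02_02T02_32_53.668872").
timestamp = "2024-02-02T02:32:53.668872"

# Replace the date separators and time separators with underscores.
split_name = timestamp.replace("-", "_").replace(":", "_")

print(split_name)  # 2024_02_02T02_32_53.668872
```

You can pass either this timestamped name or `"latest"` as the `split` argument to `load_dataset`.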
## Latest results
These are the [latest results from run 2024-02-02T02:32:53.668872](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_4-7B-slerp/blob/main/results_2024-02-02T02-32-53.668872.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.6570402657136609,
"acc_stderr": 0.03181426103850339,
"acc_norm": 0.6576418831798367,
"acc_norm_stderr": 0.03246662892084514,
"mc1": 0.45532435740514077,
"mc1_stderr": 0.017433490102538772,
"mc2": 0.6240238096985373,
"mc2_stderr": 0.0150858230636782
},
"harness|arc:challenge|25": {
"acc": 0.6501706484641638,
"acc_stderr": 0.013936809212158285,
"acc_norm": 0.6945392491467577,
"acc_norm_stderr": 0.01346008047800251
},
"harness|hellaswag|10": {
"acc": 0.6774546903007369,
"acc_stderr": 0.004664950168300713,
"acc_norm": 0.8701453893646683,
"acc_norm_stderr": 0.003354564257491871
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.02783491252754406,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.02783491252754406
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.035331333893236574,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.035331333893236574
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5957446808510638,
"acc_stderr": 0.03208115750788684,
"acc_norm": 0.5957446808510638,
"acc_norm_stderr": 0.03208115750788684
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370332,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370332
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.02548718714785938,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.02548718714785938
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083522,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083522
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4827586206896552,
"acc_stderr": 0.035158955511657,
"acc_norm": 0.4827586206896552,
"acc_norm_stderr": 0.035158955511657
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8,
"acc_stderr": 0.031234752377721175,
"acc_norm": 0.8,
"acc_norm_stderr": 0.031234752377721175
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.028606204289229872,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.028606204289229872
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6820512820512821,
"acc_stderr": 0.02361088430892786,
"acc_norm": 0.6820512820512821,
"acc_norm_stderr": 0.02361088430892786
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131147,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131147
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.03006676158297794,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.03006676158297794
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.02574490253229092,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.02574490253229092
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8091603053435115,
"acc_stderr": 0.03446513350752599,
"acc_norm": 0.8091603053435115,
"acc_norm_stderr": 0.03446513350752599
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990946,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990946
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8339719029374202,
"acc_stderr": 0.013306478243066302,
"acc_norm": 0.8339719029374202,
"acc_norm_stderr": 0.013306478243066302
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7514450867052023,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.7514450867052023,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4,
"acc_stderr": 0.01638463841038082,
"acc_norm": 0.4,
"acc_norm_stderr": 0.01638463841038082
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.02540383297817961,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.02540383297817961
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.023788583551658533,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.023788583551658533
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.49645390070921985,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.49645390070921985,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4634941329856584,
"acc_stderr": 0.012736153390214961,
"acc_norm": 0.4634941329856584,
"acc_norm_stderr": 0.012736153390214961
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.028418208619406755,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.028418208619406755
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.018824219512706207,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.018824219512706207
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8557213930348259,
"acc_stderr": 0.024845753212306046,
"acc_norm": 0.8557213930348259,
"acc_norm_stderr": 0.024845753212306046
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.45532435740514077,
"mc1_stderr": 0.017433490102538772,
"mc2": 0.6240238096985373,
"mc2_stderr": 0.0150858230636782
},
"harness|winogrande|5": {
"acc": 0.829518547750592,
"acc_stderr": 0.010569021122825909
},
"harness|gsm8k|5": {
"acc": 0.686125852918878,
"acc_stderr": 0.012782681251053201
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CyberHarem/akagi_miria_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of akagi_miria/赤城みりあ (THE iDOLM@STER: Cinderella Girls)
This is the dataset of akagi_miria/赤城みりあ (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are `two_side_up, brown_eyes, short_hair, black_hair, bangs, brown_hair, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g., Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 571.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akagi_miria_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 345.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akagi_miria_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1144 | 726.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akagi_miria_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 511.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akagi_miria_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1144 | 1010.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akagi_miria_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/akagi_miria_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, open_mouth, solo, blush, :d, looking_at_viewer, skirt |
| 1 | 10 |  |  |  |  |  | hair_bobbles, midriff, navel, skirt, 1girl, detached_collar, mismatched_legwear, open_mouth, solo, thighhighs, bare_shoulders, blush, looking_at_viewer, :d, star_(symbol), wrist_cuffs |
| 2 | 10 |  |  |  |  |  | 1girl, blue_dress, maid_apron, maid_headdress, blush, puffy_short_sleeves, red_bow, solo, white_apron, looking_at_viewer, open_mouth, wrist_cuffs, frilled_apron, ribbon, white_background, :d, bowtie, hair_between_eyes, mary_janes, simple_background, white_thighhighs, black_footwear |
| 3 | 18 |  |  |  |  |  | 1girl, blush, double_bun, hair_bow, hairclip, long_sleeves, looking_at_viewer, solo, open_mouth, hood_down, necklace, star_hair_ornament, animal_bag, drawstring, hooded_jacket, x_hair_ornament, belt_buckle, hair_between_eyes, sneakers, blue_shorts, heart_hair_ornament, multicolored_clothes, open_jacket, short_shorts, shoulder_bag, beads, collarbone, pantyhose, plaid, yellow_shirt, :d, loose_socks, simple_background, white_background, fur-trimmed_shorts, one_eye_closed, pink_bow, sleeves_past_wrists, star_print, striped |
| 4 | 7 |  |  |  |  |  | 1girl, flat_chest, micro_bikini, navel, looking_at_viewer, open_mouth, solo, :d, loli, side-tie_bikini_bottom, blush, simple_background, dated, standing, white_background, white_bikini |
| 5 | 6 |  |  |  |  |  | 1girl, black_gloves, blue_dress, earrings, hair_bow, solo, looking_at_viewer, smile, blush, bracelet, choker, hairclip, bare_shoulders, blue_bow, collarbone, flower, simple_background |
| 6 | 5 |  |  |  |  |  | 1girl, blush, hetero, huge_breasts, oppai_loli, 1boy, lactation, nipples, open_mouth, alternate_breast_size, breast_grab, grabbing, navel, serafuku, shirt_lift, skirt, faceless_male, multiple_boys, smile, solo_focus |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | solo | blush | :d | looking_at_viewer | skirt | hair_bobbles | midriff | navel | detached_collar | mismatched_legwear | thighhighs | bare_shoulders | star_(symbol) | wrist_cuffs | blue_dress | maid_apron | maid_headdress | puffy_short_sleeves | red_bow | white_apron | frilled_apron | ribbon | white_background | bowtie | hair_between_eyes | mary_janes | simple_background | white_thighhighs | black_footwear | double_bun | hair_bow | hairclip | long_sleeves | hood_down | necklace | star_hair_ornament | animal_bag | drawstring | hooded_jacket | x_hair_ornament | belt_buckle | sneakers | blue_shorts | heart_hair_ornament | multicolored_clothes | open_jacket | short_shorts | shoulder_bag | beads | collarbone | pantyhose | plaid | yellow_shirt | loose_socks | fur-trimmed_shorts | one_eye_closed | pink_bow | sleeves_past_wrists | star_print | striped | flat_chest | micro_bikini | loli | side-tie_bikini_bottom | dated | standing | white_bikini | black_gloves | earrings | smile | bracelet | choker | blue_bow | flower | hetero | huge_breasts | oppai_loli | 1boy | lactation | nipples | alternate_breast_size | breast_grab | grabbing | serafuku | shirt_lift | faceless_male | multiple_boys | solo_focus |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------|:--------|:-----|:--------------------|:--------|:---------------|:----------|:--------|:------------------|:---------------------|:-------------|:-----------------|:----------------|:--------------|:-------------|:-------------|:-----------------|:----------------------|:----------|:--------------|:----------------|:---------|:-------------------|:---------|:--------------------|:-------------|:--------------------|:-------------------|:-----------------|:-------------|:-----------|:-----------|:---------------|:------------|:-----------|:---------------------|:-------------|:-------------|:----------------|:------------------|:--------------|:-----------|:--------------|:----------------------|:-----------------------|:--------------|:---------------|:---------------|:--------|:-------------|:------------|:--------|:---------------|:--------------|:---------------------|:-----------------|:-----------|:----------------------|:-------------|:----------|:-------------|:---------------|:-------|:-------------------------|:--------|:-----------|:---------------|:---------------|:-----------|:--------|:-----------|:---------|:-----------|:---------|:---------|:---------------|:-------------|:-------|:------------|:----------|:------------------------|:--------------|:-----------|:-----------|:-------------|:----------------|:----------------|:-------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 18 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | X | X | X | X | X | | | | X | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | X | X | | X | | | | | | | | X | | | X | | | | | | | | | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
0x22almostEvil/russe-semantics-sim | ---
license: mit
task_categories:
- text-classification
language:
- ru
tags:
- semantics
size_categories:
- 100K<n<1M
---
# Dataset Card for russe-semantics-sim (~200K entries, Russian language)
### Dataset Summary
License: MIT. Contains a CSV listing word pairs (word1, word2), their `connection score` (how strongly they are related, as synonyms or associations), and the type of connection.
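A minimal sketch of working with such a file using only the standard library. The column names and rows below are assumptions for illustration, not the released schema:

```python
import csv
import io

# Hypothetical rows mirroring the described schema; the real column names
# in the released CSV may differ.
csv_text = """word1,word2,score,connection_type
книга,том,0.85,synonym
книга,библиотека,0.40,association
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Keep only strongly connected synonym pairs, e.g. for a synonym lexicon.
synonyms = [r for r in rows
            if float(r["score"]) >= 0.5 and r["connection_type"] == "synonym"]
print(synonyms)
```

For the real file, replace `io.StringIO(csv_text)` with an open file handle and adjust the column names to match the actual header.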
### Original Datasets are available here:
- https://github.com/nlpub/russe-evaluation |
darrel999/java-1000 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: content
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 576160
num_examples: 1000
download_size: 300158
dataset_size: 576160
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1712983331 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 5062
num_examples: 11
download_size: 7894
dataset_size: 5062
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1712983331"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llm-aes/asappp-1-2-original | ---
dataset_info:
features:
- name: essay_set
dtype: int64
- name: essay
dtype: string
- name: rater1_domain1
dtype: int64
- name: rater2_domain1
dtype: int64
- name: domain1_score
dtype: int64
- name: rubrics
dtype: string
- name: prompt
dtype: string
- name: content
dtype: int64
- name: organization
dtype: int64
- name: word_choice
dtype: int64
- name: sentence_fluency
dtype: int64
- name: conventions
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 14489590
num_examples: 3583
download_size: 4033411
dataset_size: 14489590
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SARG-ai/interleaved_chat_dataset_v0.0 | ---
task_categories:
- question-answering
- text-generation
dataset_info:
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 16850998150.445961
num_examples: 6047438
- name: test
num_bytes: 145319493.32580855
num_examples: 34184
download_size: 9140168102
dataset_size: 16996317643.77177
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for SARGAI Interleaved Datasets
## Dataset Description
This is an interleaved dataset created by homogenizing the following datasets:
- [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
- [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context)
- [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
The interleaved dataset consists of 6.05M dialogues.
## Data Fields
The fields are:
1) 'source', representing the interleaved dataset to which the row belongs.
2) 'prompt', representing the prompt presented to the assistant.
3) 'messages', a series of messages exchanged between user and assistant based on the prompt given to the assistant.
## Homogenization Process
The homogenization process involved several key steps:
- **Column Alignment**: Adjusting the dataset columns to match those of the Ultrachat dataset.
- **Content Transformation**: Modifying the content to ensure a uniform format, facilitating seamless integration across datasets.
- **Data Cleaning**: Removing or adjusting irrelevant or inconsistent data points that do not conform to the desired structure.
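As an illustrative sketch of the column-alignment step, one can map a source row onto the `{source, prompt, messages}` layout. The field names `question`/`answer` below are hypothetical, not the actual source schemas:

```python
# Sketch of column alignment: map a hypothetical source row onto the
# Ultrachat-style {source, prompt, messages} layout used in this dataset.
def to_ultrachat(row: dict, source: str) -> dict:
    prompt = row["question"].strip()
    return {
        "source": source,
        "prompt": prompt,
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": row["answer"].strip()},
        ],
    }

example = {"question": " What is 2 + 2? ", "answer": "4"}
homogenized = to_ultrachat(example, source="toy_qa")
print(homogenized["messages"][1]["content"])  # 4
```

Each real source dataset would need its own mapping function of this shape, plus the cleaning and transformation steps described above.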
## Caveats and Considerations
- **Data Quality**: While efforts have been made to ensure high data quality, variations in context, conversational flow, and intent may exist due to the diverse sources of the original datasets.
- **Function Calls**: Some datasets, like Glaive Function Calling v2, introduced function call structures within conversations. These have been standardized but may require context-specific interpretation when used.
- **Homogenization Limitations**: Due to inherent differences in the datasets, some features specific to certain datasets may have been simplified or generalized to fit the Ultrachat structure. Notably, for the Anthropic dataset, the 'rejected' column was excluded from the homogenized dataset due to alignment challenges with the Ultrachat dataset's structure. Future iterations may explore methods to integrate or consider 'rejected' content, as its absence may impact the model's training and testing.
## Interleaving Process and Configuration
The datasets were intricately interleaved using the `interleave_datasets` method from the Hugging Face `datasets` library. This process was crucial for integrating data from different sources into a coherent and diversified dataset conducive for various NLP tasks.
### Strategy and Probabilities
- The probability parameters were carefully selected to prioritize the representation of datasets based on their original size. The largest dataset received the highest probability of selection, ensuring that its examples were proportionately more prevalent in the interleaved dataset. This approach aimed to maintain the integrity and diversity of the largest dataset while integrating additional contexts from the smaller datasets.
- For scenarios aiming to enhance exposure to smaller datasets, potentially to counteract overfitting to the larger dataset's patterns, the `stopping_strategy="all_exhausted"` was recommended. This strategy ensures that the interleaving process continues until all datasets are fully represented, giving smaller datasets equal footing and extended presence in the training material.
### Function Call Sample
Below is a simplified example of the `interleave_datasets` function call, demonstrating the setup for interleaving multiple datasets with specified probabilities and utilizing the `all_exhausted` stopping strategy:
```python
from datasets import interleave_datasets
interleaved_dataset = interleave_datasets(
[dataset1, dataset2, dataset3, ...], # List of datasets to interleave
probabilities=[0.5, 0.3, 0.2, ...], # Probabilities corresponding to each dataset
seed=42, # Seed for reproducibility
stopping_strategy="all_exhausted" # Ensures all datasets are used until exhaustion
)
```
|
tyzhu/lmind_nq_train5000_eval5000_v1_doc | ---
configs:
- config_name: default
data_files:
- split: train_qa
path: data/train_qa-*
- split: train_recite_qa
path: data/train_recite_qa-*
- split: eval_qa
path: data/eval_qa-*
- split: eval_recite_qa
path: data/eval_recite_qa-*
- split: all_docs
path: data/all_docs-*
- split: all_docs_eval
path: data/all_docs_eval-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train_qa
num_bytes: 581636
num_examples: 5000
- name: train_recite_qa
num_bytes: 3790343
num_examples: 5000
- name: eval_qa
num_bytes: 580393
num_examples: 5000
- name: eval_recite_qa
num_bytes: 3785337
num_examples: 5000
- name: all_docs
num_bytes: 5846467
num_examples: 8964
- name: all_docs_eval
num_bytes: 5845967
num_examples: 8964
- name: train
num_bytes: 5846467
num_examples: 8964
- name: validation
num_bytes: 5846467
num_examples: 8964
download_size: 20068079
dataset_size: 32123077
---
# Dataset Card for "lmind_nq_train5000_eval5000_v1_doc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_stsb_synthetic_superlative | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1600
num_examples: 8
- name: test
num_bytes: 496
num_examples: 3
- name: train
num_bytes: 1807
num_examples: 7
download_size: 11822
dataset_size: 3903
---
# Dataset Card for "MULTI_VALUE_stsb_synthetic_superlative"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kmeng/daisy-dog | ---
license: openrail
---
|
arubenruben/ontonotes5.0-pt-harem-default | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PESSOA
'2': I-PESSOA
'3': B-ORGANIZACAO
'4': I-ORGANIZACAO
'5': B-LOCAL
'6': I-LOCAL
'7': B-TEMPO
'8': I-TEMPO
'9': B-VALOR
'10': I-VALOR
'11': B-ABSTRACCAO
'12': I-ABSTRACCAO
'13': B-ACONTECIMENTO
'14': I-ACONTECIMENTO
'15': B-COISA
'16': I-COISA
'17': B-OBRA
'18': I-OBRA
'19': B-OUTRO
'20': I-OUTRO
splits:
- name: train
num_bytes: 16511400
num_examples: 1898
- name: validation
num_bytes: 2417378
num_examples: 279
- name: test
num_bytes: 1564609
num_examples: 163
download_size: 3182791
dataset_size: 20493387
---
# Dataset Card for "ontonotes5.0-pt-harem-default"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kristinashemet/Dataset_V2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10521416
num_examples: 1573
download_size: 1009493
dataset_size: 10521416
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Dataset_V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
houck2040/engineering | ---
license: mit
---
Data comes from published Texas A&M Engineering News and was used to train an MLM @3epochs 500.
This message was not generated by AI.
|
ibranze/araproje_arc_tr_conf_mgpt_nearestscore_true_x | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: validation
num_bytes: 86423.0
num_examples: 250
download_size: 50775
dataset_size: 86423.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_arc_tr_conf_mgpt_nearestscore_true_x"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liataynat/Yoimiya | ---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: file_path
dtype: string
- name: repo_id
dtype: string
- name: token_count
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 97272603
num_examples: 6790
download_size: 32996041
dataset_size: 97272603
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ericwang/samromur_children_test | ---
annotations_creators:
- crowdsourced
language:
- is
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: "Samrómur Children Icelandic Speech 1.0"
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- "samromur"
- children's speech
- 'icelandic: iceland'
- icelandic children
- icelandic kids
- kids
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for samromur_children
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Samrómur Children Icelandic Speech 1.0](https://samromur.is/)
- **Repository:** [LDC](https://catalog.ldc.upenn.edu/LDC2022S11)
- **Paper:** [Samrómur Children: An Icelandic Speech Corpus](https://aclanthology.org/2022.lrec-1.105.pdf)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org), [Jón Guðnason](mailto:jg@ru.is)
### Dataset Summary
The Samrómur Children Corpus consists of audio recordings and metadata files containing prompts read by the participants. It contains more than 137,000 validated speech recordings uttered by Icelandic children.
The corpus is the result of a crowd-sourcing effort run by the Language and Voice Lab (LVL) at Reykjavik University, in cooperation with Almannarómur, Center for Language Technology. The recording process started in October 2019 and was still ongoing as of September 2021.
### Example Usage
The Samrómur Children Corpus is divided into 3 splits: train, validation and test. To load the full dataset:
```python
from datasets import load_dataset
samromur_children = load_dataset("language-and-voice-lab/samromur_children")
```
To load a specific split (for example, the validation split) do:
```python
from datasets import load_dataset
samromur_children = load_dataset("language-and-voice-lab/samromur_children",split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
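As a rough sketch (not an official scoring script), WER can be computed as the word-level edit distance between reference and hypothesis, divided by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("hin unga bylting", "hin unga bylting"))  # 0.0
print(wer("hin unga bylting", "hin bylting"))       # one deletion -> 1/3
```

In practice, dedicated libraries (e.g. jiwer) are typically used for WER, along with text normalization matching the corpus conventions.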
### Languages
The audio is in Icelandic.
The reading prompts were gathered from a variety of sources, mainly from the [Icelandic Gigaword Corpus](http://clarin.is/en/resources/gigaword). The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
## Dataset Structure
### Data Instances
```python
{
'audio_id': '015652-0717240',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/2c6b0d82de2ef0dc0879732f726809cccbe6060664966099f43276e8c94b03f2/test/015652/015652-0717240.flac',
'array': array([ 0. , 0. , 0. , ..., -0.00311279,
-0.0007019 , 0.00128174], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': '015652',
'gender': 'female',
'age': '11',
'duration': 4.179999828338623,
'normalized_text': 'eiginlega var hann hin unga rússneska bylting lifandi komin'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `age` (string) - age of the speaker in years (participants are between 4 and 17 years old).
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription.
### Data Splits
The corpus is split into train, dev, and test portions. The lengths of the portions are: train = 127h25m, test = 1h50m, dev = 1h50m.
To load a specific portion, please see the "Example Usage" section above.
## Dataset Creation
### Curation Rationale
In the field of Automatic Speech Recognition (ASR), it is a known fact that children's speech is particularly hard to recognise due to its high variability, produced by developmental changes in children's anatomy and speech production skills.
For this reason, the selection criteria for the train/dev/test portions have to take the children's age into account. Nevertheless, Samrómur Children is an unbalanced corpus in terms of the gender and age of the speakers. For example, the corpus has a total of 1667 female speakers (73h38m) versus 1412 male speakers (52h26m).
These imbalances impose conditions on the types of experiments that can be performed with the corpus. For example, an equal number of female and male speakers across certain age ranges is impossible. So, if one can't have a perfectly balanced corpus in the training set, at least one can have it in the test portion.
The test portion of Samrómur Children was meticulously selected to cover ages between 6 and 16 years in both female and male speakers. Each of these age ranges in both genders has a total duration of 5 minutes.
The development portion of the corpus contains only speakers with unknown gender information. Both the test and dev sets have a total duration of 1h50m each.
In order to perform fairer experiments, no speakers are shared between the train and test sets. Nevertheless, there is one speaker shared between the train and development sets; it can be identified by the speaker ID=010363. However, no audio files are shared between these two sets.
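A quick sanity check of this speaker-disjointness property can be sketched with plain set operations over the `speaker_id` metadata. The speaker lists below are illustrative placeholders (apart from the documented shared speaker 010363), not the real corpus metadata:

```python
def split_overlaps(train_ids, dev_ids, test_ids):
    """Return the speaker IDs shared between each pair of splits."""
    train, dev, test = set(train_ids), set(dev_ids), set(test_ids)
    return {
        "train/test": train & test,  # expected: empty
        "train/dev": train & dev,    # expected: at most {'010363'}
        "dev/test": dev & test,      # expected: empty
    }

# Toy example with made-up speaker IDs plus the documented shared speaker:
print(split_overlaps(["000001", "010363"], ["010363", "000002"], ["000003"]))
```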
### Source Data
#### Initial Data Collection and Normalization
The data was collected using the website https://samromur.is, code of which is available at https://github.com/cadia-lvl/samromur. The age range selected for this corpus is between 4 and 17 years.
The original audio was collected at a 44.1 kHz or 48 kHz sampling rate as *.wav files, which were then down-sampled to 16 kHz and converted to *.flac. Each recording contains one read sentence from a script. The script contains 85,080 unique sentences and 90,838 unique tokens.
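The integer-factor case of this down-sampling (48 kHz to 16 kHz) can be sketched as naive decimation. This is an illustration only: a real pipeline would apply an anti-aliasing low-pass filter before dropping samples, and the 44.1 kHz case needs a rational-factor resampler.

```python
def decimate(samples, src_rate=48000, dst_rate=16000):
    """Keep every n-th sample for an integer down-sampling factor.
    Illustration only: proper resampling low-pass filters first."""
    if src_rate % dst_rate != 0:
        raise ValueError("non-integer factor; use a proper resampler")
    return samples[:: src_rate // dst_rate]

print(decimate(list(range(9))))  # [0, 3, 6]
```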
There was no identifier other than the session ID, which is used as the speaker ID. The corpus is distributed with a metadata file with detailed information on each utterance and speaker. The metadata file is encoded as UTF-8 Unicode.
The prompts were gathered from a variety of sources, mainly from The Icelandic Gigaword Corpus, which is available at http://clarin.is/en/resources/gigaword. The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
### Annotations
#### Annotation process
Prompts were pulled from these corpora if they met the criteria of having only letters which are present in the Icelandic alphabet, and if they are listed in the [DIM: Database Icelandic Morphology](https://aclanthology.org/W19-6116.pdf).
There are also synthesised prompts consisting of a name followed by a question or a demand, in order to simulate a dialogue with a smart-device.
#### Who are the annotators?
The content of the audio files was manually verified against the prompts by one or more listeners (mainly summer students).
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This is the first ASR corpus of Icelandic children.
### Discussion of Biases
* The utterances were recorded by a smartphone or the web app.
* Participants self-reported their age group, gender, and the native language.
* Participants are aged between 4 to 17 years.
* The corpus contains 137597 utterances from 3175 speakers, totalling 131 hours.
* The amount of data from female speakers is 73h38m, the amount from male speakers is 52h26m, and the amount from speakers with unknown gender information is 05h02m.
* The number of female speakers is 1667 and the number of male speakers is 1412. The number of speakers with unknown gender information is 96.
* There are 78993 audio files from female speakers, 53927 from male speakers, and 4677 from speakers with unknown gender information.
### Other Known Limitations
"Samrómur Children: Icelandic Speech 21.09" by the Language and Voice Laboratory (LVL) at the Reykjavik University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
## Additional Information
### Dataset Curators
The corpus is the result of a crowd-sourcing effort run by the Language and Voice Lab (LVL) at Reykjavik University, in cooperation with Almannarómur, Center for Language Technology. The recording process started in October 2019 and continues to this day (September 2021). The corpus was curated by Carlos Daniel Hernández Mena in 2021.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{menasamromurchildren2021,
title={Samrómur Children Icelandic Speech 1.0},
ldc_catalog_no={LDC2022S11},
DOI={https://doi.org/10.35111/frrj-qd60},
author={Hernández Mena, Carlos Daniel and Borsky, Michal and Mollberg, David Erik and Guðmundsson, Smári Freyr and Hedström, Staffan and Pálsson, Ragnar and Jónsson, Ólafur Helgi and Þorsteinsdóttir, Sunneva and Guðmundsdóttir, Jóhanna Vigdís and Magnúsdóttir, Eydís Huld and Þórhallsdóttir, Ragnheiður and Guðnason, Jón},
  publisher={Reykjavík University},
journal={Linguistic Data Consortium, Philadelphia},
year={2019},
url={https://catalog.ldc.upenn.edu/LDC2022S11},
}
```
### Contributions
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
The verification of the dataset was funded by the Icelandic Directorate of Labour's Student Summer Job Program in 2020 and 2021.
Special thanks to the summer students for all their hard work.
|
hugginglearners/netflix-shows | ---
license:
- cc0-1.0
kaggle_id: infamouscoder/dataset-netflix-shows
---
# Dataset Card for Dataset: NetFlix Shows
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/infamouscoder/dataset-netflix-shows
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The raw data was web-scraped using Selenium. It contains unlabelled text data on around 9000 Netflix shows and movies, along with full details such as cast, release year, rating, and description.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@infamouscoder](https://kaggle.com/infamouscoder)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
adasgaleus/word-importance | ---
language:
- en
license: cc-by-4.0
task_categories:
- token-classification
dataset_info:
features:
- name: context
sequence: string
- name: label
sequence: float64
splits:
- name: test
num_bytes: 45725
num_examples: 50
download_size: 15440
dataset_size: 45725
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
tags:
- word importance
---
# Word Importance
## Dataset Description
- **Repository:** [https://github.com/adam-osusky/predicting-word-importance](https://github.com/adam-osusky/predicting-word-importance)
- **Paper:** [TODO]()
### Dataset Summary
The Word Importance dataset consists of short contexts, approximately 50 words in length, along with annotations indicating the importance of words within these contexts. Annotators were tasked with ranking the top 10% of important words within each context.
Any words left unranked by the user received the same last rank. For instance, if a user selected 5 words, the remaining words were assigned a rank of 6.
Moreover, multiple users contributed rankings for each context, and the final ranking for a context was computed by averaging these contributions.
The dataset is designed to facilitate research in word importance prediction and token classification tasks. For further details on annotation instructions and methodology, refer to the associated paper (link to be provided when available).
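The rank aggregation described above can be sketched as follows. This is a simplified reading of the procedure with hypothetical annotator data; refer to the paper for the exact methodology.

```python
def average_ranks(words, annotator_rankings):
    """words: the context tokens. annotator_rankings: one list per annotator,
    ordered from most to least important. Words an annotator left unranked
    all receive rank len(ranking) + 1; final scores average over annotators."""
    totals = [0.0] * len(words)
    for ranking in annotator_rankings:
        default = len(ranking) + 1            # e.g. 5 picks -> rest get rank 6
        rank_of = {w: i for i, w in enumerate(ranking, start=1)}
        for idx, w in enumerate(words):
            totals[idx] += rank_of.get(w, default)
    return [t / len(annotator_rankings) for t in totals]

print(average_ranks(["the", "cat", "sat"], [["cat"], ["sat", "cat"]]))
```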
### Supported Tasks
Given its small size, the dataset is primarily intended for evaluating models that predict word importance scores. For evaluation code, please refer to [https://github.com/adam-osusky/predicting-word-importance](https://github.com/adam-osusky/predicting-word-importance).
### Languages
All the text is in English. It consists of 5 domains: news, fiction, poetry, jokes, and transcribed spoken language.
## Additional Information
### Licensing Information
This work is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{wordimp-osus,
author = {Adam Osuský},
title = {Predicting Word Importance Using Pre-Trained Language Models},
school = {Charles University, Faculty of Mathematics and Physics},
year = {2024},
type = {Bachelor's Thesis},
}
``` |
liuyanchen1015/MULTI_VALUE_wnli_his_him | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 3042
num_examples: 12
- name: test
num_bytes: 8487
num_examples: 28
- name: train
num_bytes: 27336
num_examples: 125
download_size: 22142
dataset_size: 38865
---
# Dataset Card for "MULTI_VALUE_wnli_his_him"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigbio/bionlp_st_2013_cg |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2013 CG
homepage: https://github.com/openbiocorpora/bionlp-st-2013-cg
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2013 CG
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-cg
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The Cancer Genetics (CG) task is an event extraction task and a main task of the BioNLP Shared Task (ST) 2013.
The CG task is an information extraction task targeting the recognition of events in text,
represented as structured n-ary associations of given physical entities. In addition to
addressing the cancer domain, the CG task is differentiated from previous event extraction
tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple
levels of biological organization, ranging from the molecular through the cellular and organ
levels up to whole organisms. Final test set submissions were accepted from six teams.
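The underlying BioNLP ST data is distributed in the standoff format, where entity annotations appear in `.a1`/`.a2` files as tab-separated lines (e.g. `T1<TAB>Gene_or_gene_product 0 5<TAB>BRCA1`). A minimal parser for the entity (`T`) lines might look like this; it is illustrative only, ignoring event (`E`) lines and discontinuous spans:

```python
def parse_standoff_entities(a1_text: str) -> dict:
    """Parse entity lines of a BioNLP standoff annotation file.
    Each line: ID <tab> TYPE START END <tab> SURFACE_TEXT."""
    entities = {}
    for line in a1_text.splitlines():
        if not line.startswith("T"):
            continue  # E* (events), R* (relations), etc. handled elsewhere
        tid, type_span, surface = line.split("\t")
        etype, start, end = type_span.split()
        entities[tid] = {"type": etype, "start": int(start),
                         "end": int(end), "text": surface}
    return entities

sample = "T1\tGene_or_gene_product 0 5\tBRCA1\nT2\tCancer 20 26\ttumour"
print(parse_standoff_entities(sample)["T1"]["type"])  # Gene_or_gene_product
```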
## Citation Information
```
@inproceedings{pyysalo-etal-2013-overview,
title = "Overview of the Cancer Genetics ({CG}) task of {B}io{NLP} Shared Task 2013",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Ananiadou, Sophia",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2008",
pages = "58--66",
}
```
|
fimu-docproc-research/CIVQA-TesseractOCR-LayoutLM | ---
dataset_info:
features:
- name: input_ids
sequence: int64
- name: bbox
dtype:
array2_d:
shape:
- 512
- 4
dtype: int64
- name: attention_mask
sequence: int64
- name: image
dtype:
array3_d:
shape:
- 3
- 224
- 224
dtype: int64
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
- name: questions
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 198175471439
num_examples: 160645
- name: validation
num_bytes: 20009392368
num_examples: 16220
download_size: 826530358
dataset_size: 218184863807
language:
- cs
tags:
- finance
pretty_name: C
license: mit
---
# CIVQA TesseractOCR LayoutLM Dataset
The Czech Invoice Visual Question Answering dataset was created with Tesseract OCR and encoded for LayoutLM.
The pre-encoded dataset can be found at this link: https://huggingface.co/datasets/fimu-docproc-research/CIVQA-TesseractOCR
All invoices used in this dataset were obtained from public sources. From these invoices, we focused on 15 different entities that are crucial for processing invoices:
- Invoice number
- Variable symbol
- Specific symbol
- Constant symbol
- Bank code
- Account number
- ICO
- Total amount
- Invoice date
- Due date
- Name of supplier
- IBAN
- DIC
- QR code
- Supplier's address
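Since each example is encoded for extractive question answering (the `start_positions`/`end_positions` fields index into `input_ids`), recovering an answer amounts to slicing the token sequence. A minimal sketch with dummy token IDs follows; decoding the slice back to text is left to a real tokenizer:

```python
def answer_span(input_ids, start, end):
    """Return the answer tokens for an extractive-QA prediction.
    start/end are inclusive token indices; a real pipeline would pass
    the slice to tokenizer.decode() to get the answer string."""
    if start > end or end >= len(input_ids):
        return []  # invalid span predicted by the model
    return input_ids[start : end + 1]

print(answer_span([101, 2023, 2003, 102], 1, 2))  # [2023, 2003]
```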
The invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by the inclusion of your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it excluded from research use, we kindly request you to access the following URL: https://forms.gle/tUVJKoB22oeTncUD6
We profoundly appreciate your cooperation and understanding in this matter. |
NLPC-UOM/Sinhala-POS-Data | ---
annotations_creators: []
language:
- si
license: mit
---
# Sinhala-POS-Data
POS tagged Sinhala text
The "news- verified- final level.txt" file contains the first version of our annotated data. There are 253636 words in it.
TagList.txt contains the tag list.
Tagging Guide.pdf contains a detailed description of the tags.
If you use this data set or the tag set, please cite one of these as appropriate:
Fernando, S., & Ranathunga, S. (2018, May). Evaluation of Different Classifiers for Sinhala POS Tagging. In 2018 Moratuwa Engineering Research Conference (MERCon) (pp. 96-101). IEEE.
Dilshani, N., Fernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2017). A Comprehensive Part of Speech (POS) Tag Set for Sinhala Language. The Third International Conference on Linguistics in Sri Lanka, ICLSL 2017. Department of Linguistics, University of Kelaniya, Sri Lanka.
Fernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2016, December). Comprehensive Part-Of-Speech Tag Set and SVM Based POS Tagger for Sinhala. In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016) (pp. 173-182).
|
nateraw/fuego-20230208-180352-b0cb47 | ---
tags:
- fuego
fuego:
id: 20230208-180352-b0cb47
status: preparing
script: main.py
requirements_file: requirements.txt
space_id: nateraw/fuego-20230208-180352-b0cb47
space_hardware: cpu-basic
github_repo_id: pytorch/examples
github_repo_branch: main
github_repo_sha: d8456a36d1bbb22f72b003f59406a19a0a0547c3
---
|
gerhardsr/aiforsiteupdate | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_DreadPoor__BagelToppyLake-7B-slerp | ---
pretty_name: Evaluation run of DreadPoor/BagelToppyLake-7B-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [DreadPoor/BagelToppyLake-7B-slerp](https://huggingface.co/DreadPoor/BagelToppyLake-7B-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_DreadPoor__BagelToppyLake-7B-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-13T19:00:17.803854](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__BagelToppyLake-7B-slerp/blob/main/results_2024-02-13T19-00-17.803854.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6450608281170044,\n\
\ \"acc_stderr\": 0.03230282110443641,\n \"acc_norm\": 0.64702543076818,\n\
\ \"acc_norm_stderr\": 0.03296019853363821,\n \"mc1\": 0.4541003671970624,\n\
\ \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.6215432793564798,\n\
\ \"mc2_stderr\": 0.015396330957522903\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6535836177474402,\n \"acc_stderr\": 0.013905011180063228,\n\
\ \"acc_norm\": 0.6715017064846417,\n \"acc_norm_stderr\": 0.013724978465537302\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6711810396335391,\n\
\ \"acc_stderr\": 0.004688239419302076,\n \"acc_norm\": 0.8479386576379208,\n\
\ \"acc_norm_stderr\": 0.0035834648107534598\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n\
\ \"acc_stderr\": 0.0421850621536888,\n \"acc_norm\": 0.6074074074074074,\n\
\ \"acc_norm_stderr\": 0.0421850621536888\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.038234289699266046,\n\
\ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.038234289699266046\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n\
\ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n\
\ \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.0325005368436584,\n\
\ \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.0325005368436584\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5175438596491229,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.5175438596491229,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41005291005291006,\n \"acc_stderr\": 0.025331202438944433,\n \"\
acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.025331202438944433\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n\
\ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n\
\ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7677419354838709,\n\
\ \"acc_stderr\": 0.02402225613030824,\n \"acc_norm\": 0.7677419354838709,\n\
\ \"acc_norm_stderr\": 0.02402225613030824\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\"\
: 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n\
\ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"\
acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n\
\ \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6538461538461539,\n \"acc_stderr\": 0.02412112541694119,\n \
\ \"acc_norm\": 0.6538461538461539,\n \"acc_norm_stderr\": 0.02412112541694119\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465066,\n \
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465066\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634335,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634335\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3973509933774834,\n \"acc_stderr\": 0.0399552400768168,\n \"acc_norm\"\
: 0.3973509933774834,\n \"acc_norm_stderr\": 0.0399552400768168\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8311926605504587,\n\
\ \"acc_stderr\": 0.016060056268530343,\n \"acc_norm\": 0.8311926605504587,\n\
\ \"acc_norm_stderr\": 0.016060056268530343\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.03392238405321617,\n\
\ \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.03392238405321617\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8186274509803921,\n \"acc_stderr\": 0.027044621719474082,\n \"\
acc_norm\": 0.8186274509803921,\n \"acc_norm_stderr\": 0.027044621719474082\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7974683544303798,\n \"acc_stderr\": 0.02616056824660146,\n \
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.02616056824660146\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.036412970813137296,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.036412970813137296\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8314176245210728,\n\
\ \"acc_stderr\": 0.013387895731543604,\n \"acc_norm\": 0.8314176245210728,\n\
\ \"acc_norm_stderr\": 0.013387895731543604\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.024818350129436593,\n\
\ \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.024818350129436593\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.34413407821229053,\n\
\ \"acc_stderr\": 0.015889221313307094,\n \"acc_norm\": 0.34413407821229053,\n\
\ \"acc_norm_stderr\": 0.015889221313307094\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n\
\ \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n\
\ \"acc_stderr\": 0.025403832978179604,\n \"acc_norm\": 0.7234726688102894,\n\
\ \"acc_norm_stderr\": 0.025403832978179604\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.02465968518596728,\n\
\ \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.02465968518596728\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.475177304964539,\n \"acc_stderr\": 0.02979071924382972,\n \
\ \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.02979071924382972\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46153846153846156,\n\
\ \"acc_stderr\": 0.012732398286190442,\n \"acc_norm\": 0.46153846153846156,\n\
\ \"acc_norm_stderr\": 0.012732398286190442\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462927,\n\
\ \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462927\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6405228758169934,\n \"acc_stderr\": 0.01941253924203216,\n \
\ \"acc_norm\": 0.6405228758169934,\n \"acc_norm_stderr\": 0.01941253924203216\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\
\ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \
\ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291293,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291293\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.02553843336857833,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.02553843336857833\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.035887028128263734,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.035887028128263734\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4541003671970624,\n\
\ \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.6215432793564798,\n\
\ \"mc2_stderr\": 0.015396330957522903\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8184688239936859,\n \"acc_stderr\": 0.010833276515007482\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5504169825625473,\n \
\ \"acc_stderr\": 0.013702290047884749\n }\n}\n```"
repo_url: https://huggingface.co/DreadPoor/BagelToppyLake-7B-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|arc:challenge|25_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|gsm8k|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hellaswag|10_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-13T19-00-17.803854.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-13T19-00-17.803854.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- '**/details_harness|winogrande|5_2024-02-13T19-00-17.803854.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-13T19-00-17.803854.parquet'
- config_name: results
data_files:
- split: 2024_02_13T19_00_17.803854
path:
- results_2024-02-13T19-00-17.803854.parquet
- split: latest
path:
- results_2024-02-13T19-00-17.803854.parquet
---
# Dataset Card for Evaluation run of DreadPoor/BagelToppyLake-7B-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [DreadPoor/BagelToppyLake-7B-slerp](https://huggingface.co/DreadPoor/BagelToppyLake-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_DreadPoor__BagelToppyLake-7B-slerp",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-13T19:00:17.803854](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__BagelToppyLake-7B-slerp/blob/main/results_2024-02-13T19-00-17.803854.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6450608281170044,
"acc_stderr": 0.03230282110443641,
"acc_norm": 0.64702543076818,
"acc_norm_stderr": 0.03296019853363821,
"mc1": 0.4541003671970624,
"mc1_stderr": 0.017429593091323522,
"mc2": 0.6215432793564798,
"mc2_stderr": 0.015396330957522903
},
"harness|arc:challenge|25": {
"acc": 0.6535836177474402,
"acc_stderr": 0.013905011180063228,
"acc_norm": 0.6715017064846417,
"acc_norm_stderr": 0.013724978465537302
},
"harness|hellaswag|10": {
"acc": 0.6711810396335391,
"acc_stderr": 0.004688239419302076,
"acc_norm": 0.8479386576379208,
"acc_norm_stderr": 0.0035834648107534598
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.0421850621536888,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.0421850621536888
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.038234289699266046,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.038234289699266046
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.0325005368436584,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.0325005368436584
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370333,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370333
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.025331202438944433,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.025331202438944433
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7677419354838709,
"acc_stderr": 0.02402225613030824,
"acc_norm": 0.7677419354838709,
"acc_norm_stderr": 0.02402225613030824
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919443,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6538461538461539,
"acc_stderr": 0.02412112541694119,
"acc_norm": 0.6538461538461539,
"acc_norm_stderr": 0.02412112541694119
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465066,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465066
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.029344572500634335,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.029344572500634335
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3973509933774834,
"acc_stderr": 0.0399552400768168,
"acc_norm": 0.3973509933774834,
"acc_norm_stderr": 0.0399552400768168
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8311926605504587,
"acc_stderr": 0.016060056268530343,
"acc_norm": 0.8311926605504587,
"acc_norm_stderr": 0.016060056268530343
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.03392238405321617,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.03392238405321617
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8186274509803921,
"acc_stderr": 0.027044621719474082,
"acc_norm": 0.8186274509803921,
"acc_norm_stderr": 0.027044621719474082
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.02616056824660146,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.02616056824660146
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.036412970813137296,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.036412970813137296
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8314176245210728,
"acc_stderr": 0.013387895731543604,
"acc_norm": 0.8314176245210728,
"acc_norm_stderr": 0.013387895731543604
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.024818350129436593,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.024818350129436593
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.34413407821229053,
"acc_stderr": 0.015889221313307094,
"acc_norm": 0.34413407821229053,
"acc_norm_stderr": 0.015889221313307094
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.025403832978179604,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.025403832978179604
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.02465968518596728,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.02465968518596728
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.02979071924382972,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.02979071924382972
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46153846153846156,
"acc_stderr": 0.012732398286190442,
"acc_norm": 0.46153846153846156,
"acc_norm_stderr": 0.012732398286190442
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462927,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462927
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6405228758169934,
"acc_stderr": 0.01941253924203216,
"acc_norm": 0.6405228758169934,
"acc_norm_stderr": 0.01941253924203216
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.028795185574291293,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.028795185574291293
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.02553843336857833,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.02553843336857833
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.035887028128263734,
"acc_norm": 0.85,
"acc_norm_stderr": 0.035887028128263734
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4541003671970624,
"mc1_stderr": 0.017429593091323522,
"mc2": 0.6215432793564798,
"mc2_stderr": 0.015396330957522903
},
"harness|winogrande|5": {
"acc": 0.8184688239936859,
"acc_stderr": 0.010833276515007482
},
"harness|gsm8k|5": {
"acc": 0.5504169825625473,
"acc_stderr": 0.013702290047884749
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Abhinav-B/finetune_llama_gpt_v2 | ---
dataset_info:
features:
- name: questions
dtype: int64
- name: queries
dtype: int64
splits:
- name: train
num_bytes: 1600
num_examples: 100
download_size: 2379
dataset_size: 1600
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyenthanhdo/patent_v2_merged | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: lang
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 118735189
num_examples: 100488
download_size: 66085340
dataset_size: 118735189
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "patent_v2_merged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HWERI/openorca-multiplechoice-5k-comparisons | ---
license: apache-2.0
---
A subset of beaugogh/openorca-multiplechoice-10k, where model responses are added as the "rejected" responses.
The model used here is beaugogh/Llama2-7b-openorca-mc-v2. |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/21116b08 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 174
num_examples: 10
download_size: 1324
dataset_size: 174
---
# Dataset Card for "21116b08"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
meta_woz | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- other
license_details: Microsoft Research Data License Agreement
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: metalwoz
pretty_name: Meta-Learning Wizard-of-Oz
dataset_info:
- config_name: dialogues
features:
- name: id
dtype: string
- name: user_id
dtype: string
- name: bot_id
dtype: string
- name: domain
dtype: string
- name: task_id
dtype: string
- name: turns
sequence: string
splits:
- name: train
num_bytes: 19999218
num_examples: 37884
- name: test
num_bytes: 1284287
num_examples: 2319
download_size: 8629863
dataset_size: 21283505
- config_name: tasks
features:
- name: task_id
dtype: string
- name: domain
dtype: string
- name: bot_prompt
dtype: string
- name: bot_role
dtype: string
- name: user_prompt
dtype: string
- name: user_role
dtype: string
splits:
- name: train
num_bytes: 73768
num_examples: 227
- name: test
num_bytes: 4351
num_examples: 14
download_size: 8629863
dataset_size: 78119
---
# Dataset Card for MetaLWOz
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [MetaLWOz Project Website](https://www.microsoft.com/en-us/research/project/metalwoz/)
- **Paper:** [Fast Domain Adaptation for Goal-Oriented Dialogue Using a Hybrid Generative-Retrieval Transformer](https://ieeexplore.ieee.org/abstract/document/9053599), and [Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation](https://arxiv.org/pdf/2003.01680.pdf)
- **Point of Contact:** [Hannes Schulz](https://www.microsoft.com/en-us/research/people/haschulz/)
### Dataset Summary
MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models.
We introduce the Meta-Learning Wizard of Oz (MetaLWOz) dialogue dataset for developing fast adaptation methods for
conversation models. This data can be used to train task-oriented dialogue models, specifically to develop methods to
quickly simulate user responses with a small amount of data. Such fast-adaptation models fall into the research areas
of transfer learning and meta learning. The dataset consists of 37,884 crowdsourced dialogues recorded between two
human users in a Wizard of Oz setup, in which one was instructed to behave like a bot, and the other a true human
user. The users are assigned a task belonging to a particular domain, for example booking a reservation at a
particular restaurant, and work together to complete the task. Our dataset spans 47 domains having 227 tasks total.
Dialogues are a minimum of 10 turns long.
### Supported Tasks and Leaderboards
This dataset supports a range of tasks.
- **Generative dialogue modeling** or `dialogue-modeling`: This data can be used to train task-oriented dialogue
models, specifically to develop methods to quickly simulate user responses with a small amount of data. Such
fast-adaptation models fall into the research areas of transfer learning and meta learning. The text of the dialogues
can be used to train a sequence model on the utterances.
Example of sample input/output is given in section [Data Instances](#data-instances)
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
A data instance is a full multi-turn dialogue between two crowd-workers, one of whom played the role of the `bot` while the other was the `user`. Both were
given a `domain` and a `task`. Each turn has a single utterance, e.g.:
```
Domain: Ski
User Task: You want to know if there are good ski hills an
hour’s drive from your current location.
Bot Task: Tell the user that there are no ski hills in their
immediate location.
Bot: Hello how may I help you?
User: Is there any good ski hills an hour’s drive from my
current location?
Bot: I’m sorry to inform you that there are no ski hills in your
immediate location
User: Can you help me find the nearest?
Bot: Absolutely! It looks like you’re about 3 hours away from
Bear Mountain. That seems to be the closest.
User: Hmm.. sounds good
Bot: Alright! I can help you get your lift tickets now!When
will you be going?
User: Awesome! please get me a ticket for 10pax
Bot: You’ve got it. Anything else I can help you with?
User: None. Thanks again!
Bot: No problem!
```
Example of input/output for this dialog:
```
Input: dialog history = Hello how may I help you?; Is there
any good ski hills an hour’s drive from my current location?;
I’m sorry to inform you that there are no ski hills in your
immediate location
Output: user response = Can you help me find the nearest?
```
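The input/output construction above can be sketched in code: every user utterance becomes a target, with all preceding turns joined as the history. This is a minimal illustration (not part of the official tooling); it assumes `turns` alternates bot/user and starts with the bot, as described above, so user utterances sit at odd indices.

```python
def user_response_pairs(turns):
    """Build (dialog_history, user_response) training pairs from a MetaLWOz
    `turns` list, which alternates bot/user utterances starting with the bot.

    User utterances sit at odd indices (1, 3, 5, ...); for each one, the
    history is every utterance that precedes it, joined with "; ".
    """
    pairs = []
    for i, utterance in enumerate(turns):
        if i % 2 == 1:  # odd index -> user turn
            history = "; ".join(turns[:i])
            pairs.append((history, utterance))
    return pairs

turns = [
    "Hello how may I help you?",
    "Is there any good ski hills an hour's drive from my current location?",
    "I'm sorry to inform you that there are no ski hills in your immediate location",
    "Can you help me find the nearest?",
]
pairs = user_response_pairs(turns)
```

Applied to the Ski dialogue above, the second pair reproduces the example: the history is the first three turns and the target is "Can you help me find the nearest?".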
### Data Fields
Each dialogue instance has the following fields:
- `id`: a unique ID identifying the dialog.
- `user_id`: a unique ID identifying the user.
- `bot_id`: a unique ID identifying the bot.
- `domain`: a unique ID identifying the domain. Provides a mapping to tasks dataset.
- `task_id`: a unique ID identifying the task. Provides a mapping to tasks dataset.
- `turns`: the sequence of utterances alternating between `bot` and `user`, starting with a prompt from `bot`.
Each task instance has following fields:
- `task_id`: a unique ID identifying the task.
- `domain`: a unique ID identifying the domain.
- `bot_prompt`: The task specification for bot.
- `bot_role`: The domain oriented role of bot.
- `user_prompt`: The task specification for user.
- `user_role`: The domain oriented role of user.
### Data Splits
The dataset is split into a `train` and `test` split with the following sizes:
| | Training MetaLWOz | Evaluation MetaLWOz | Combined |
| ----- | ------ | ----- | ---- |
| Total Domains | 47 | 4 | 51 |
| Total Tasks | 226 | 14 | 240 |
| Total Dialogs | 37884 | 2319 | 40203 |
Below are the various statistics of the dataset:
| Statistic | Mean | Minimum | Maximum |
| ----- | ------ | ----- | ---- |
| Number of tasks per domain | 4.8 | 3 | 11 |
| Number of dialogs per domain | 806.0 | 288 | 1990 |
| Number of dialogs per task | 167.6 | 32 | 285 |
| Number of turns per dialog | 11.4 | 10 | 46 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset v1 version was created by a team of researchers from Microsoft Research (Montreal, Canada)
### Licensing Information
The dataset is released under [Microsoft Research Data License Agreement](https://msropendata-web-api.azurewebsites.net/licenses/2f933be3-284d-500b-7ea3-2aa2fd0f1bb2/view)
### Citation Information
You can cite the following for the various versions of MetaLWOz:
Version 1.0
```
@InProceedings{shalyminov2020fast,
author = {Shalyminov, Igor and Sordoni, Alessandro and Atkinson, Adam and Schulz, Hannes},
title = {Fast Domain Adaptation For Goal-Oriented Dialogue Using A Hybrid Generative-Retrieval Transformer},
booktitle = {2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
year = {2020},
month = {April},
  url = {https://www.microsoft.com/en-us/research/publication/fast-domain-adaptation-for-goal-oriented-dialogue-using-a-hybrid-generative-retrieval-transformer/},
}
```
### Contributions
Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset. |
nirdrang/anthro-ai | ---
license: apache-2.0
---
|
A2H0H0R1/autotrain-data-test | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: autotrain_image
dtype: image
- name: autotrain_label
dtype:
class_label:
names:
'0': cats
'1': dogs
splits:
- name: train
num_bytes: 60756.0
num_examples: 10
- name: validation
num_bytes: 60756.0
num_examples: 10
download_size: 124636
dataset_size: 121512.0
---
# Dataset Card for "autotrain-data-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Boni98/PixLore-Rich-Captions | ---
license: apache-2.0
task_categories:
- image-to-text
language:
- en
pretty_name: PixLore Rich Captions
---
Rich image captioning dataset used for training the PixLore model: https://arxiv.org/abs/2312.05349
- `image_path` contains the path to the COCO dataset image (change the path accordingly).
- `rich_caption` contains the rich caption created using the technique described in the paper.

The rest of the columns are used for debugging or improving the prompt.
|
Rickcerq/vozdevalerinha | ---
license: openrail
---
|
ajibawa-2023/Children-Stories-Collection | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
size_categories:
- 100K<n<1M
tags:
- synthetic
- story
- children
- young children
---
**Children Stories Collection**
A great synthetic dataset consisting of around **0.9 million** stories especially meant for **Young Children**. You can use these datasets directly to train large models.
A total of 10 datasets are available for download. You can use any one, or all, of the JSON files for training.
These datasets are in "prompt" and "text" format. The total token length is also available.
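If you read one of the JSON files directly (outside the `datasets` library), each record can be mapped to a (prompt, story) pair. A minimal sketch, assuming the "prompt"/"text" record layout described above; the sample record and any token-length field name are illustrative, not taken from the actual files:

```python
import json

# Illustrative record in the assumed "prompt"/"text" layout.
sample = json.loads("""
[
  {"prompt": "Write a story about a brave kitten.",
   "text": "Once upon a time, a brave kitten set out to explore the garden..."}
]
""")

def to_training_pairs(records):
    """Map raw records to (prompt, story) tuples, skipping malformed entries."""
    return [(r["prompt"], r["text"]) for r in records
            if "prompt" in r and "text" in r]

pairs = to_training_pairs(sample)
```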
Thank you for your love & support. |
mozilla-foundation/common_voice_6_1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
as:
- n<1K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fi:
- 1K<n<10K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
hi:
- n<1K
hsb:
- 1K<n<10K
hu:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 10K<n<100K
it:
- 100K<n<1M
ja:
- 1K<n<10K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lg:
- 1K<n<10K
lt:
- 1K<n<10K
lv:
- 1K<n<10K
mn:
- 10K<n<100K
mt:
- 10K<n<100K
nl:
- 10K<n<100K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 10K<n<100K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 1K<n<10K
ru:
- 10K<n<100K
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 10K<n<100K
ta:
- 10K<n<100K
th:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
uk:
- 10K<n<100K
vi:
- 1K<n<10K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- 10K<n<100K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 6.1
language_bcp47:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 6.1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 9283 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7335 validated hours in 60 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Assamese, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Georgian, German, Greek, Hakha Chin, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Slovenian, Upper Sorbian, Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been reviewed and received enough upvotes to be considered of high quality.
The invalidated data is data that has been reviewed and received downvotes indicating that it is of low quality.
The reported data is data that has been reported by users, for a variety of reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits all consist of data that has been reviewed and deemed of high quality, then split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer from the audio data alone whether a sentence is a quotation or not. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_6_1", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
MBEIR/M-BEIR_DEV | ---
license: "mit"
pretty_name: "M-BEIR_DEV"
language:
- "en"
configs:
- config_name: query
data_files:
- split: train
path: "query/train/*.jsonl"
- config_name: cand_pool
data_files:
- split: local
path: "cand_pool/*.jsonl"
- config_name: instructions
data_files:
- split: instructions
path: "instructions/*.jsonl"
- config_name: qrels
data_files:
- split: train
path: "qrels/*.txt"
---
|
mohit-raghavendra/SHP-SFT | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answers
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 4066461
num_examples: 3938
- name: validation
num_bytes: 213999
num_examples: 213
- name: test
num_bytes: 219850
num_examples: 227
download_size: 2875551
dataset_size: 4500310
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Piro17/fer2013test | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': angry
'1': disgust
'2': fear
'3': happy
'4': neutral
'5': sad
'6': surprise
splits:
- name: train
num_bytes: 11521798.802
num_examples: 7178
download_size: 10231842
dataset_size: 11521798.802
---
# Dataset Card for "fer2013test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llm-aes/asappp-1-2-instruct | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 29451763
num_examples: 7166
download_size: 8644011
dataset_size: 29451763
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
erhwenkuo/train_1m-chinese-zhtw | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 422333552
num_examples: 917424
download_size: 290105331
dataset_size: 422333552
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- zh
tags:
- alpaca
- fine-tune
size_categories:
- 100K<n<1M
---
# Dataset Card for "train_1m-chinese-zhtw"
## Contents
Contains roughly 1 million Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Example
```
{
  "instruction": "判斷給定的文章是否符合語法規則。如果不符合,請提供修改建議。下面是一篇文章的開頭: 為了探討這個主題,本文將提供一系列資料和例項,以證明這一觀點",
"input": "",
"output": "這個開頭符合語法規則。"
}
```
### Fields:
```
instruction: the instruction text
input: the input (empty for every record in this dataset)
output: the expected output
```
## Usage Restrictions
This dataset, and anything derived from it, may be used for research purposes only; commercial use, and any other use that could bring harm to society, is not permitted.
This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project accepts no liability for any damage or dispute arising from the use of this dataset. |
quarkonics/bonit | ---
license: apache-2.0
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 174096
num_examples: 743
download_size: 60062
dataset_size: 174096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cj-mills/hagrid-sample-120k-384p | ---
license: cc-by-sa-4.0
task_categories:
- object-detection
language:
- en
pretty_name: HaGRID Sample 120k 384p
size_categories:
- 100K<n<1M
---
This dataset contains 127,331 images from [HaGRID](https://github.com/hukenovs/hagrid) (HAnd Gesture Recognition Image Dataset) downscaled to 384p. The original dataset is 716GB and contains 552,992 1080p images. I created this sample for a tutorial so readers can use the dataset in the free tiers of Google Colab and Kaggle Notebooks.
### Original Authors:
* [Alexander Kapitanov](https://www.linkedin.com/in/hukenovs)
* [Andrey Makhlyarchuk](https://www.linkedin.com/in/makhliarchuk)
* [Karina Kvanchiani](https://www.linkedin.com/in/kvanchiani)
### Original Dataset Links
* [GitHub](https://github.com/hukenovs/hagrid)
* [Kaggle Datasets Page](https://www.kaggle.com/datasets/kapitanov/hagrid)
### Object Classes
```text
['call',
'no_gesture',
'dislike',
'fist',
'four',
'like',
'mute',
'ok',
'one',
'palm',
'peace',
'peace_inverted',
'rock',
'stop',
'stop_inverted',
'three',
'three2',
'two_up',
'two_up_inverted']
```
### Annotations
* `bboxes`: `[top-left-X-position, top-left-Y-position, width, height]`
* Multiply `top-left-X-position` and `width` values by the image width and multiply `top-left-Y-position` and `height` values by the image height.
<div style="overflow-x: auto; overflow-y: auto">
<table>
<thead>
<tr style="text-align: right">
<th></th>
<th>00005c9c-3548-4a8f-9d0b-2dd4aff37fc9</th>
</tr>
</thead>
<tbody>
<tr>
<th>bboxes</th>
<td>[[0.23925175, 0.28595301, 0.25055143, 0.20777627]]</td>
</tr>
<tr>
<th>labels</th>
<td>[call]</td>
</tr>
<tr>
<th>leading_hand</th>
<td>right</td>
</tr>
<tr>
<th>leading_conf</th>
<td>1</td>
</tr>
<tr>
<th>user_id</th>
<td>5a389ffe1bed6660a59f4586c7d8fe2770785e5bf79b09334aa951f6f119c024</td>
</tr>
</tbody>
</table>
</div>
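The normalized box in the annotation example above can be converted to pixel coordinates with a short helper. A minimal sketch; the 384×512 (width × height) image size used here is a hypothetical example, since image dimensions in the dataset vary:

```python
def bbox_to_pixels(bbox, img_width, img_height):
    """Convert a normalized [top-left-x, top-left-y, width, height] box
    to integer pixel coordinates."""
    x, y, w, h = bbox
    return [round(x * img_width), round(y * img_height),
            round(w * img_width), round(h * img_height)]

# Normalized box from the annotation example above, applied to a
# hypothetical 384x512 (width x height) image:
print(bbox_to_pixels([0.23925175, 0.28595301, 0.25055143, 0.20777627], 384, 512))
# prints: [92, 146, 96, 106]
```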
|
metarank/esci | ---
license: apache-2.0
language:
- en
tags:
- shopping
- ranking
- amazon
pretty_name: ESCI
size_categories:
- 10K<n<100K
---
# Amazon ESCI/ESCI-S dataset
A combination of [Amazon ESCI](https://github.com/amazon-science/esci-data) and [ESCI-S](https://github.com/shuttie/esci-s) datasets in a JSON format.
Used for fine-tuning bi- and cross-encoder models in the [Metarank](https://huggingface.co/metarank) project.
## Dataset format
The dataset is encoded in JSON Lines (JSONL) format, where each row is a single ranking event,
with all item metadata pre-joined. An example:
```json
{
"query": "!qscreen fence without holes",
"e": [
{
"title": "Zippity Outdoor Products ZP19026 Lightweight Portable Vinyl Picket Fence Kit w/Metal Base(42\" H x 92\" W), White",
"desc": "..."
},
{
"title": "Sunnyglade 6 feet x 50 feet Privacy Screen Fence Heavy Duty Fencing Mesh Shade Net Cover for Wall Garden Yard Backyard (6 ft X 50 ft, Green)",
"desc": "..."
},
{
"title": "Amgo 6' x 50' Black Fence Privacy Screen Windscreen,with Bindings & Grommets, Heavy Duty for Commercial and Residential, 90% Blockage, Cable Zip Ties Included, (Available for Custom Sizes)",
"desc": "..."
},
{
"title": "Amgo 4' x 50' Black Fence Privacy Screen Windscreen,with Bindings & Grommets, Heavy Duty for Commercial and Residential, 90% Blockage, Cable Zip Ties Included, (Available for Custom Sizes)",
"desc": "..."
}
]
}
```
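Because each row is an independent JSON object, a line can be parsed on its own with Python's standard `json` module. A minimal sketch, using a shortened hypothetical record rather than a real line from the dataset:

```python
import json

# A shortened, hypothetical ranking event in JSON Lines form
line = '{"query": "privacy fence", "e": [{"title": "Sunnyglade Privacy Screen Fence", "desc": "..."}]}'

event = json.loads(line)
titles = [item["title"] for item in event["e"]]
print(event["query"], titles)  # prints: privacy fence ['Sunnyglade Privacy Screen Fence']
```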
## License
Apache 2.0 |
satishsatpal/pchat | ---
license: mit
---
|
liuyanchen1015/MULTI_VALUE_mrpc_conditional_were_was | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 864
num_examples: 3
- name: train
num_bytes: 2158
num_examples: 7
- name: validation
num_bytes: 260
num_examples: 1
download_size: 12972
dataset_size: 3282
---
# Dataset Card for "MULTI_VALUE_mrpc_conditional_were_was"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GroupSix/common-voice-en-sv | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 12090945008
num_examples: 12588
- name: test
num_bytes: 4937998648
num_examples: 5141
download_size: 2508578885
dataset_size: 17028943656
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Yahir21/ggg | ---
license: afl-3.0
---
|
Rabnawaz/King | ---
license: apache-2.0
---
|
pietrolesci/mnli-embeddings | ---
dataset_info:
- config_name: pietrolesci__bert-base-uncased_mnli_53fb0761e0_epoch20
features:
- name: uid
dtype: int64
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 2420615128
num_examples: 392702
download_size: 1946635938
dataset_size: 2420615128
- config_name: pietrolesci__bert-tiny_mnli_cdc7ea0d50_epoch20
features:
- name: uid
dtype: int64
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 409980888
num_examples: 392702
download_size: 398525726
dataset_size: 409980888
configs:
- config_name: pietrolesci__bert-base-uncased_mnli_53fb0761e0_epoch20
data_files:
- split: train
path: pietrolesci__bert-base-uncased_mnli_53fb0761e0_epoch20/train-*
- config_name: pietrolesci__bert-tiny_mnli_cdc7ea0d50_epoch20
data_files:
- split: train
path: pietrolesci__bert-tiny_mnli_cdc7ea0d50_epoch20/train-*
---
|
Jcuhfehl/OpenHermes-ChatML-tokenized_llama | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 443338883
num_examples: 242831
download_size: 123505257
dataset_size: 443338883
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MadVoyager/stable_diffusion_instructional_dataset | ---
task_categories:
- question-answering
- text2text-generation
- conversational
language:
- en
tags:
- stable diffusion
- llama
- chatgpt
- alpaca
- llm
- dataset
pretty_name: sd_instruc
--- |
NPCProgrammer/BERT_Emotions_tuned | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 51085533
num_examples: 16000
- name: validation
num_bytes: 6382695
num_examples: 2000
- name: test
num_bytes: 6385173
num_examples: 2000
download_size: 2333818
dataset_size: 63853401
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
armanzarei/keivan_finetune | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 156358099.0
num_examples: 329
download_size: 156335903
dataset_size: 156358099.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_mrpc_more_much | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 20790
num_examples: 72
- name: train
num_bytes: 49718
num_examples: 176
- name: validation
num_bytes: 6369
num_examples: 23
download_size: 61246
dataset_size: 76877
---
# Dataset Card for "MULTI_VALUE_mrpc_more_much"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_louisbrulenaudet__Maxine-34B-stock | ---
pretty_name: Evaluation run of louisbrulenaudet/Maxine-34B-stock
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [louisbrulenaudet/Maxine-34B-stock](https://huggingface.co/louisbrulenaudet/Maxine-34B-stock)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_louisbrulenaudet__Maxine-34B-stock\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-05T00:59:27.637181](https://huggingface.co/datasets/open-llm-leaderboard/details_louisbrulenaudet__Maxine-34B-stock/blob/main/results_2024-04-05T00-59-27.637181.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.764333755853148,\n\
\ \"acc_stderr\": 0.028344527076434468,\n \"acc_norm\": 0.767478256941674,\n\
\ \"acc_norm_stderr\": 0.028893491881214303,\n \"mc1\": 0.5263157894736842,\n\
\ \"mc1_stderr\": 0.017479241161975457,\n \"mc2\": 0.7017750053458277,\n\
\ \"mc2_stderr\": 0.014211541851082555\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7192832764505119,\n \"acc_stderr\": 0.013131238126975583,\n\
\ \"acc_norm\": 0.7406143344709898,\n \"acc_norm_stderr\": 0.012808273573927094\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.671081457876917,\n\
\ \"acc_stderr\": 0.004688601416815173,\n \"acc_norm\": 0.8673571001792472,\n\
\ \"acc_norm_stderr\": 0.0033849518032134734\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.03785714465066653,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.03785714465066653\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.881578947368421,\n \"acc_stderr\": 0.02629399585547494,\n\
\ \"acc_norm\": 0.881578947368421,\n \"acc_norm_stderr\": 0.02629399585547494\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n\
\ \"acc_stderr\": 0.04292346959909284,\n \"acc_norm\": 0.76,\n \
\ \"acc_norm_stderr\": 0.04292346959909284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8075471698113208,\n \"acc_stderr\": 0.024262979839372274,\n\
\ \"acc_norm\": 0.8075471698113208,\n \"acc_norm_stderr\": 0.024262979839372274\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9097222222222222,\n\
\ \"acc_stderr\": 0.023964965777906935,\n \"acc_norm\": 0.9097222222222222,\n\
\ \"acc_norm_stderr\": 0.023964965777906935\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.58,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.04999999999999999,\n \
\ \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.04999999999999999\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7283236994219653,\n\
\ \"acc_stderr\": 0.0339175032232166,\n \"acc_norm\": 0.7283236994219653,\n\
\ \"acc_norm_stderr\": 0.0339175032232166\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5392156862745098,\n \"acc_stderr\": 0.04959859966384181,\n\
\ \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.04959859966384181\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7702127659574468,\n \"acc_stderr\": 0.02750175294441242,\n\
\ \"acc_norm\": 0.7702127659574468,\n \"acc_norm_stderr\": 0.02750175294441242\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6052631578947368,\n\
\ \"acc_stderr\": 0.04598188057816542,\n \"acc_norm\": 0.6052631578947368,\n\
\ \"acc_norm_stderr\": 0.04598188057816542\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.7586206896551724,\n \"acc_stderr\": 0.03565998174135302,\n\
\ \"acc_norm\": 0.7586206896551724,\n \"acc_norm_stderr\": 0.03565998174135302\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.7354497354497355,\n \"acc_stderr\": 0.022717467897708614,\n \"\
acc_norm\": 0.7354497354497355,\n \"acc_norm_stderr\": 0.022717467897708614\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5396825396825397,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.5396825396825397,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \
\ \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9064516129032258,\n\
\ \"acc_stderr\": 0.01656575466827098,\n \"acc_norm\": 0.9064516129032258,\n\
\ \"acc_norm_stderr\": 0.01656575466827098\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6748768472906403,\n \"acc_stderr\": 0.032957975663112704,\n\
\ \"acc_norm\": 0.6748768472906403,\n \"acc_norm_stderr\": 0.032957975663112704\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\"\
: 0.77,\n \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8666666666666667,\n \"acc_stderr\": 0.026544435312706467,\n\
\ \"acc_norm\": 0.8666666666666667,\n \"acc_norm_stderr\": 0.026544435312706467\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9242424242424242,\n \"acc_stderr\": 0.018852670234993093,\n \"\
acc_norm\": 0.9242424242424242,\n \"acc_norm_stderr\": 0.018852670234993093\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9740932642487047,\n \"acc_stderr\": 0.011464523356953162,\n\
\ \"acc_norm\": 0.9740932642487047,\n \"acc_norm_stderr\": 0.011464523356953162\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8051282051282052,\n \"acc_stderr\": 0.020083167595181393,\n\
\ \"acc_norm\": 0.8051282051282052,\n \"acc_norm_stderr\": 0.020083167595181393\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.45555555555555555,\n \"acc_stderr\": 0.030364862504824428,\n \
\ \"acc_norm\": 0.45555555555555555,\n \"acc_norm_stderr\": 0.030364862504824428\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8487394957983193,\n \"acc_stderr\": 0.023274255898707952,\n\
\ \"acc_norm\": 0.8487394957983193,\n \"acc_norm_stderr\": 0.023274255898707952\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.5165562913907285,\n \"acc_stderr\": 0.04080244185628972,\n \"\
acc_norm\": 0.5165562913907285,\n \"acc_norm_stderr\": 0.04080244185628972\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9229357798165138,\n \"acc_stderr\": 0.011434381698911096,\n \"\
acc_norm\": 0.9229357798165138,\n \"acc_norm_stderr\": 0.011434381698911096\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6435185185185185,\n \"acc_stderr\": 0.032664783315272714,\n \"\
acc_norm\": 0.6435185185185185,\n \"acc_norm_stderr\": 0.032664783315272714\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9264705882352942,\n \"acc_stderr\": 0.018318855850089678,\n \"\
acc_norm\": 0.9264705882352942,\n \"acc_norm_stderr\": 0.018318855850089678\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9113924050632911,\n \"acc_stderr\": 0.018498315206865384,\n \
\ \"acc_norm\": 0.9113924050632911,\n \"acc_norm_stderr\": 0.018498315206865384\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.820627802690583,\n\
\ \"acc_stderr\": 0.0257498195691928,\n \"acc_norm\": 0.820627802690583,\n\
\ \"acc_norm_stderr\": 0.0257498195691928\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8702290076335878,\n \"acc_stderr\": 0.029473649496907065,\n\
\ \"acc_norm\": 0.8702290076335878,\n \"acc_norm_stderr\": 0.029473649496907065\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.859504132231405,\n \"acc_stderr\": 0.031722334260021585,\n \"\
acc_norm\": 0.859504132231405,\n \"acc_norm_stderr\": 0.031722334260021585\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8981481481481481,\n\
\ \"acc_stderr\": 0.02923927267563275,\n \"acc_norm\": 0.8981481481481481,\n\
\ \"acc_norm_stderr\": 0.02923927267563275\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8711656441717791,\n \"acc_stderr\": 0.026321383198783674,\n\
\ \"acc_norm\": 0.8711656441717791,\n \"acc_norm_stderr\": 0.026321383198783674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5625,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.5625,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n\
\ \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9444444444444444,\n\
\ \"acc_stderr\": 0.01500631280644693,\n \"acc_norm\": 0.9444444444444444,\n\
\ \"acc_norm_stderr\": 0.01500631280644693\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.03379976689896309,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.03379976689896309\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9144316730523627,\n\
\ \"acc_stderr\": 0.010002965568647285,\n \"acc_norm\": 0.9144316730523627,\n\
\ \"acc_norm_stderr\": 0.010002965568647285\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8236994219653179,\n \"acc_stderr\": 0.020516425672490714,\n\
\ \"acc_norm\": 0.8236994219653179,\n \"acc_norm_stderr\": 0.020516425672490714\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7977653631284917,\n\
\ \"acc_stderr\": 0.013433729483320982,\n \"acc_norm\": 0.7977653631284917,\n\
\ \"acc_norm_stderr\": 0.013433729483320982\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8562091503267973,\n \"acc_stderr\": 0.02009118893604371,\n\
\ \"acc_norm\": 0.8562091503267973,\n \"acc_norm_stderr\": 0.02009118893604371\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8006430868167203,\n\
\ \"acc_stderr\": 0.022691033780549656,\n \"acc_norm\": 0.8006430868167203,\n\
\ \"acc_norm_stderr\": 0.022691033780549656\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8703703703703703,\n \"acc_stderr\": 0.018689725721062065,\n\
\ \"acc_norm\": 0.8703703703703703,\n \"acc_norm_stderr\": 0.018689725721062065\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6347517730496454,\n \"acc_stderr\": 0.02872386385328127,\n \
\ \"acc_norm\": 0.6347517730496454,\n \"acc_norm_stderr\": 0.02872386385328127\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5951760104302477,\n\
\ \"acc_stderr\": 0.012536743830953986,\n \"acc_norm\": 0.5951760104302477,\n\
\ \"acc_norm_stderr\": 0.012536743830953986\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8308823529411765,\n \"acc_stderr\": 0.022770868010113014,\n\
\ \"acc_norm\": 0.8308823529411765,\n \"acc_norm_stderr\": 0.022770868010113014\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8218954248366013,\n \"acc_stderr\": 0.01547836965310857,\n \
\ \"acc_norm\": 0.8218954248366013,\n \"acc_norm_stderr\": 0.01547836965310857\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8448979591836735,\n \"acc_stderr\": 0.0231747988612186,\n\
\ \"acc_norm\": 0.8448979591836735,\n \"acc_norm_stderr\": 0.0231747988612186\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9054726368159204,\n\
\ \"acc_stderr\": 0.020687186951534087,\n \"acc_norm\": 0.9054726368159204,\n\
\ \"acc_norm_stderr\": 0.020687186951534087\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.02876234912646613,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.02876234912646613\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n\
\ \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n\
\ \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8596491228070176,\n \"acc_stderr\": 0.026640582539133196,\n\
\ \"acc_norm\": 0.8596491228070176,\n \"acc_norm_stderr\": 0.026640582539133196\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5263157894736842,\n\
\ \"mc1_stderr\": 0.017479241161975457,\n \"mc2\": 0.7017750053458277,\n\
\ \"mc2_stderr\": 0.014211541851082555\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8389897395422258,\n \"acc_stderr\": 0.010329712832785715\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7217589082638363,\n \
\ \"acc_stderr\": 0.012343803671422682\n }\n}\n```"
repo_url: https://huggingface.co/louisbrulenaudet/Maxine-34B-stock
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|arc:challenge|25_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|gsm8k|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hellaswag|10_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T00-59-27.637181.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-05T00-59-27.637181.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- '**/details_harness|winogrande|5_2024-04-05T00-59-27.637181.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-05T00-59-27.637181.parquet'
- config_name: results
data_files:
- split: 2024_04_05T00_59_27.637181
path:
- results_2024-04-05T00-59-27.637181.parquet
- split: latest
path:
- results_2024-04-05T00-59-27.637181.parquet
---
# Dataset Card for Evaluation run of louisbrulenaudet/Maxine-34B-stock
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [louisbrulenaudet/Maxine-34B-stock](https://huggingface.co/louisbrulenaudet/Maxine-34B-stock) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_louisbrulenaudet__Maxine-34B-stock",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-04-05T00:59:27.637181](https://huggingface.co/datasets/open-llm-leaderboard/details_louisbrulenaudet__Maxine-34B-stock/blob/main/results_2024-04-05T00-59-27.637181.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.764333755853148,
"acc_stderr": 0.028344527076434468,
"acc_norm": 0.767478256941674,
"acc_norm_stderr": 0.028893491881214303,
"mc1": 0.5263157894736842,
"mc1_stderr": 0.017479241161975457,
"mc2": 0.7017750053458277,
"mc2_stderr": 0.014211541851082555
},
"harness|arc:challenge|25": {
"acc": 0.7192832764505119,
"acc_stderr": 0.013131238126975583,
"acc_norm": 0.7406143344709898,
"acc_norm_stderr": 0.012808273573927094
},
"harness|hellaswag|10": {
"acc": 0.671081457876917,
"acc_stderr": 0.004688601416815173,
"acc_norm": 0.8673571001792472,
"acc_norm_stderr": 0.0033849518032134734
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.03785714465066653,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.03785714465066653
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.881578947368421,
"acc_stderr": 0.02629399585547494,
"acc_norm": 0.881578947368421,
"acc_norm_stderr": 0.02629399585547494
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8075471698113208,
"acc_stderr": 0.024262979839372274,
"acc_norm": 0.8075471698113208,
"acc_norm_stderr": 0.024262979839372274
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9097222222222222,
"acc_stderr": 0.023964965777906935,
"acc_norm": 0.9097222222222222,
"acc_norm_stderr": 0.023964965777906935
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.58,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.45,
"acc_stderr": 0.04999999999999999,
"acc_norm": 0.45,
"acc_norm_stderr": 0.04999999999999999
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.0339175032232166,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.0339175032232166
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5392156862745098,
"acc_stderr": 0.04959859966384181,
"acc_norm": 0.5392156862745098,
"acc_norm_stderr": 0.04959859966384181
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7702127659574468,
"acc_stderr": 0.02750175294441242,
"acc_norm": 0.7702127659574468,
"acc_norm_stderr": 0.02750175294441242
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6052631578947368,
"acc_stderr": 0.04598188057816542,
"acc_norm": 0.6052631578947368,
"acc_norm_stderr": 0.04598188057816542
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7586206896551724,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.7586206896551724,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.7354497354497355,
"acc_stderr": 0.022717467897708614,
"acc_norm": 0.7354497354497355,
"acc_norm_stderr": 0.022717467897708614
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5396825396825397,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.5396825396825397,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9064516129032258,
"acc_stderr": 0.01656575466827098,
"acc_norm": 0.9064516129032258,
"acc_norm_stderr": 0.01656575466827098
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6748768472906403,
"acc_stderr": 0.032957975663112704,
"acc_norm": 0.6748768472906403,
"acc_norm_stderr": 0.032957975663112704
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165044,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165044
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8666666666666667,
"acc_stderr": 0.026544435312706467,
"acc_norm": 0.8666666666666667,
"acc_norm_stderr": 0.026544435312706467
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9242424242424242,
"acc_stderr": 0.018852670234993093,
"acc_norm": 0.9242424242424242,
"acc_norm_stderr": 0.018852670234993093
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9740932642487047,
"acc_stderr": 0.011464523356953162,
"acc_norm": 0.9740932642487047,
"acc_norm_stderr": 0.011464523356953162
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8051282051282052,
"acc_stderr": 0.020083167595181393,
"acc_norm": 0.8051282051282052,
"acc_norm_stderr": 0.020083167595181393
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.45555555555555555,
"acc_stderr": 0.030364862504824428,
"acc_norm": 0.45555555555555555,
"acc_norm_stderr": 0.030364862504824428
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8487394957983193,
"acc_stderr": 0.023274255898707952,
"acc_norm": 0.8487394957983193,
"acc_norm_stderr": 0.023274255898707952
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5165562913907285,
"acc_stderr": 0.04080244185628972,
"acc_norm": 0.5165562913907285,
"acc_norm_stderr": 0.04080244185628972
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9229357798165138,
"acc_stderr": 0.011434381698911096,
"acc_norm": 0.9229357798165138,
"acc_norm_stderr": 0.011434381698911096
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6435185185185185,
"acc_stderr": 0.032664783315272714,
"acc_norm": 0.6435185185185185,
"acc_norm_stderr": 0.032664783315272714
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9264705882352942,
"acc_stderr": 0.018318855850089678,
"acc_norm": 0.9264705882352942,
"acc_norm_stderr": 0.018318855850089678
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9113924050632911,
"acc_stderr": 0.018498315206865384,
"acc_norm": 0.9113924050632911,
"acc_norm_stderr": 0.018498315206865384
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.820627802690583,
"acc_stderr": 0.0257498195691928,
"acc_norm": 0.820627802690583,
"acc_norm_stderr": 0.0257498195691928
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8702290076335878,
"acc_stderr": 0.029473649496907065,
"acc_norm": 0.8702290076335878,
"acc_norm_stderr": 0.029473649496907065
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.859504132231405,
"acc_stderr": 0.031722334260021585,
"acc_norm": 0.859504132231405,
"acc_norm_stderr": 0.031722334260021585
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8981481481481481,
"acc_stderr": 0.02923927267563275,
"acc_norm": 0.8981481481481481,
"acc_norm_stderr": 0.02923927267563275
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8711656441717791,
"acc_stderr": 0.026321383198783674,
"acc_norm": 0.8711656441717791,
"acc_norm_stderr": 0.026321383198783674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5625,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.03492606476623791,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.03492606476623791
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9444444444444444,
"acc_stderr": 0.01500631280644693,
"acc_norm": 0.9444444444444444,
"acc_norm_stderr": 0.01500631280644693
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.87,
"acc_stderr": 0.03379976689896309,
"acc_norm": 0.87,
"acc_norm_stderr": 0.03379976689896309
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9144316730523627,
"acc_stderr": 0.010002965568647285,
"acc_norm": 0.9144316730523627,
"acc_norm_stderr": 0.010002965568647285
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8236994219653179,
"acc_stderr": 0.020516425672490714,
"acc_norm": 0.8236994219653179,
"acc_norm_stderr": 0.020516425672490714
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7977653631284917,
"acc_stderr": 0.013433729483320982,
"acc_norm": 0.7977653631284917,
"acc_norm_stderr": 0.013433729483320982
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8562091503267973,
"acc_stderr": 0.02009118893604371,
"acc_norm": 0.8562091503267973,
"acc_norm_stderr": 0.02009118893604371
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8006430868167203,
"acc_stderr": 0.022691033780549656,
"acc_norm": 0.8006430868167203,
"acc_norm_stderr": 0.022691033780549656
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8703703703703703,
"acc_stderr": 0.018689725721062065,
"acc_norm": 0.8703703703703703,
"acc_norm_stderr": 0.018689725721062065
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6347517730496454,
"acc_stderr": 0.02872386385328127,
"acc_norm": 0.6347517730496454,
"acc_norm_stderr": 0.02872386385328127
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5951760104302477,
"acc_stderr": 0.012536743830953986,
"acc_norm": 0.5951760104302477,
"acc_norm_stderr": 0.012536743830953986
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8308823529411765,
"acc_stderr": 0.022770868010113014,
"acc_norm": 0.8308823529411765,
"acc_norm_stderr": 0.022770868010113014
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8218954248366013,
"acc_stderr": 0.01547836965310857,
"acc_norm": 0.8218954248366013,
"acc_norm_stderr": 0.01547836965310857
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8448979591836735,
"acc_stderr": 0.0231747988612186,
"acc_norm": 0.8448979591836735,
"acc_norm_stderr": 0.0231747988612186
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.9054726368159204,
"acc_stderr": 0.020687186951534087,
"acc_norm": 0.9054726368159204,
"acc_norm_stderr": 0.020687186951534087
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.02876234912646613,
"acc_norm": 0.91,
"acc_norm_stderr": 0.02876234912646613
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598053,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598053
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8596491228070176,
"acc_stderr": 0.026640582539133196,
"acc_norm": 0.8596491228070176,
"acc_norm_stderr": 0.026640582539133196
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5263157894736842,
"mc1_stderr": 0.017479241161975457,
"mc2": 0.7017750053458277,
"mc2_stderr": 0.014211541851082555
},
"harness|winogrande|5": {
"acc": 0.8389897395422258,
"acc_stderr": 0.010329712832785715
},
"harness|gsm8k|5": {
"acc": 0.7217589082638363,
"acc_stderr": 0.012343803671422682
}
}
```
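Once loaded, per-task metrics like those above are plain nested dicts. Below is a minimal, stdlib-only sketch of macro-averaging them — the three entries are hand-copied from the JSON above, and preferring `acc_norm` over `acc` is an assumption of this sketch, not the leaderboard's official aggregation:

```python
# A few entries hand-copied from the results JSON above; the structure and
# key names follow the harness output shown in this card.
scores = {
    "harness|arc:challenge|25": {"acc_norm": 0.7406143344709898},
    "harness|hellaswag|10": {"acc_norm": 0.8673571001792472},
    "harness|winogrande|5": {"acc": 0.8389897395422258},
}

def mean_metric(scores, keys=("acc_norm", "acc")):
    """Average the first available metric from `keys` for each task."""
    values = []
    for metrics in scores.values():
        for key in keys:
            if key in metrics:
                values.append(metrics[key])
                break
    return sum(values) / len(values)

print(f"macro average: {mean_metric(scores):.4f}")
```

The same loop works on the full dict returned by loading the details configs, since every task entry carries at least one of `acc_norm` or `acc`.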
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Codec-SUPERB/speech_tokenizer_16k | ---
configs:
- config_name: default
data_files:
- split: test.other
path: data/test.other-*
- split: validation.other
path: data/validation.other-*
- split: train.other.500
path: data/train.other.500-*
- split: train.clean.100
path: data/train.clean.100-*
- split: test.clean
path: data/test.clean-*
- split: train.clean.360
path: data/train.clean.360-*
- split: validation.clean
path: data/validation.clean-*
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: audio_codes
sequence:
sequence: int64
splits:
- name: test.other
num_bytes: 62049899
num_examples: 2939
- name: validation.other
num_bytes: 59498714
num_examples: 2864
- name: train.other.500
num_bytes: 5761561617
num_examples: 148688
- name: train.clean.100
num_bytes: 1166450829
num_examples: 28539
- name: test.clean
num_bytes: 62745230
num_examples: 2620
- name: train.clean.360
num_bytes: 4216515060
num_examples: 104014
- name: validation.clean
num_bytes: 62578176
num_examples: 2703
download_size: 1801683161
dataset_size: 11391399525
---
# Dataset Card for "speech_tokenizer_16k"
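The `audio_codes` feature declared above is a sequence of integer sequences — likely one stream of discrete tokens per codec quantizer level, though that interpretation is an assumption. A stdlib-only sketch of inspecting its shape on a toy example in the same layout (the id and token values are made up):

```python
# Toy example mimicking the `audio_codes` feature layout declared above:
# a list of streams, each a list of int64 token ids.
example = {
    "id": "example-0000",          # made-up id
    "audio_codes": [
        [101, 57, 903, 12],        # stream 0
        [7, 7, 441, 300],          # stream 1
    ],
}

num_streams = len(example["audio_codes"])
frames_per_stream = [len(stream) for stream in example["audio_codes"]]
print(num_streams, frames_per_stream)  # -> 2 [4, 4]
```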
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/mukai_takumi_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mukai_takumi (THE iDOLM@STER: Cinderella Girls)
This is the dataset of mukai_takumi (THE iDOLM@STER: Cinderella Girls), containing 467 images and their tags.
The core tags of this character are `breasts, long_hair, black_hair, large_breasts, brown_hair, green_eyes, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 467 | 541.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mukai_takumi_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 467 | 323.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mukai_takumi_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1087 | 656.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mukai_takumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 467 | 484.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mukai_takumi_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1087 | 916.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mukai_takumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mukai_takumi_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
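The IMG+TXT packages above pair each image with a `.txt` tag file; assuming the usual comma-separated tag format (an assumption — check an extracted file first), tag frequencies can be tallied with the stdlib alone. Toy data stands in for real files here:

```python
from collections import Counter

# Toy stand-ins for extracted .txt tag files (filename -> contents).
tag_files = {
    "img_0001.txt": "1girl, solo, smile, ponytail",
    "img_0002.txt": "1girl, solo, jacket",
}

counter = Counter()
for content in tag_files.values():
    counter.update(tag.strip() for tag in content.split(","))

print(counter.most_common(2))  # -> [('1girl', 2), ('solo', 2)]
```

On a real extracted package, replace `tag_files` with a loop over `Path(dataset_dir).glob("*.txt")` reading each file's text.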
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, blush, cleavage, navel, solo, side-tie_bikini_bottom, looking_at_viewer, simple_background, white_background, yellow_eyes, open_clothes |
| 1 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, simple_background, solo, white_background, cleavage, upper_body, grin, jacket, sarashi, collarbone, hand_on_hip, open_clothes |
| 2 | 6 |  |  |  |  |  | 1girl, blush, cleavage, dress, earrings, necklace, looking_at_viewer, solo, bare_shoulders, smile, collarbone, ponytail, sideboob |
| 3 | 6 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, open_jacket, ponytail, smile, solo, black_skirt, crop_top, earrings, midriff, miniskirt, navel, necklace, blush, bracelet, collarbone, hand_on_hip, parted_bangs, suspender_skirt, tattoo, thighs, white_jacket, black_choker, closed_mouth, cropped_jacket, hair_flower, idol, sidelocks, thigh_strap, white_background, white_belt, white_gloves |
| 4 | 11 |  |  |  |  |  | 1boy, 1girl, blush, hetero, solo_focus, sweat, nipples, nude, penis, huge_breasts, mosaic_censoring, open_mouth, vaginal, dark-skinned_male, sex_from_behind |
| 5 | 15 |  |  |  |  |  | 1boy, 1girl, hetero, solo_focus, nipples, blush, paizuri, sweat, cum_on_breasts, huge_breasts, penis, censored, pov, ejaculation, smile, teeth, breasts_squeezed_together, looking_at_viewer |
| 6 | 10 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, solo_focus, sex, cum_in_pussy, sweat, vaginal, navel, open_mouth, completely_nude, pov, spread_legs, bar_censor, cowgirl_position, female_pubic_hair, girl_on_top, huge_breasts, penis |
| 7 | 8 |  |  |  |  |  | 1girl, detached_collar, playboy_bunny, rabbit_ears, solo, blush, fake_animal_ears, wrist_cuffs, cleavage, bowtie, looking_at_viewer, rabbit_tail, bangs, bare_shoulders, covered_navel, strapless_leotard, anger_vein, black_leotard, cowboy_shot, fishnet_pantyhose, grin, open_mouth |
| 8 | 5 |  |  |  |  |  | 1girl, blush, red_neckerchief, looking_at_viewer, simple_background, solo, white_background, bangs, black_sailor_collar, black_serafuku, black_skirt, collarbone, covering_mouth, crying_with_eyes_open, pleated_skirt, short_sleeves, sitting, upper_body, white_shirt, yellow_eyes |
| 9 | 6 |  |  |  |  |  | maid_apron, blush, looking_at_viewer, 1girl, black_dress, enmaided, maid_headdress, simple_background, solo, white_apron, white_background, bangs, black_footwear, frilled_apron, full_body, juliet_sleeves, shoes, sidelocks, smile, standing |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | cleavage | navel | solo | side-tie_bikini_bottom | looking_at_viewer | simple_background | white_background | yellow_eyes | open_clothes | upper_body | grin | jacket | sarashi | collarbone | hand_on_hip | dress | earrings | necklace | bare_shoulders | smile | ponytail | sideboob | open_jacket | black_skirt | crop_top | midriff | miniskirt | bracelet | parted_bangs | suspender_skirt | tattoo | thighs | white_jacket | black_choker | closed_mouth | cropped_jacket | hair_flower | idol | sidelocks | thigh_strap | white_belt | white_gloves | 1boy | hetero | solo_focus | sweat | nipples | nude | penis | huge_breasts | mosaic_censoring | open_mouth | vaginal | dark-skinned_male | sex_from_behind | paizuri | cum_on_breasts | censored | pov | ejaculation | teeth | breasts_squeezed_together | sex | cum_in_pussy | completely_nude | spread_legs | bar_censor | cowgirl_position | female_pubic_hair | girl_on_top | detached_collar | playboy_bunny | rabbit_ears | fake_animal_ears | wrist_cuffs | bowtie | rabbit_tail | bangs | covered_navel | strapless_leotard | anger_vein | black_leotard | cowboy_shot | fishnet_pantyhose | red_neckerchief | black_sailor_collar | black_serafuku | covering_mouth | crying_with_eyes_open | pleated_skirt | short_sleeves | sitting | white_shirt | maid_apron | black_dress | enmaided | maid_headdress | white_apron | black_footwear | frilled_apron | full_body | juliet_sleeves | shoes | standing |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-----------|:--------|:-------|:-------------------------|:--------------------|:--------------------|:-------------------|:--------------|:---------------|:-------------|:-------|:---------|:----------|:-------------|:--------------|:--------|:-----------|:-----------|:-----------------|:--------|:-----------|:-----------|:--------------|:--------------|:-----------|:----------|:------------|:-----------|:---------------|:------------------|:---------|:---------|:---------------|:---------------|:---------------|:-----------------|:--------------|:-------|:------------|:--------------|:-------------|:---------------|:-------|:---------|:-------------|:--------|:----------|:-------|:--------|:---------------|:-------------------|:-------------|:----------|:--------------------|:------------------|:----------|:-----------------|:-----------|:------|:--------------|:--------|:----------------------------|:------|:---------------|:------------------|:--------------|:-------------|:-------------------|:--------------------|:--------------|:------------------|:----------------|:--------------|:-------------------|:--------------|:---------|:--------------|:--------|:----------------|:--------------------|:-------------|:----------------|:--------------|:--------------------|:------------------|:----------------------|:-----------------|:-----------------|:------------------------|:----------------|:----------------|:----------|:--------------|:-------------|:--------------|:-----------|:-----------------|:--------------|:-----------------|:----------------|:------------|:-----------------|:--------|:-----------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | | X | | X | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | | X | | X | | | | | | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | X | X | | X | | X | | | | | | | X | X | | X | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 11 |  |  |  |  |  | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 15 |  |  |  |  |  | X | X | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | X | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | X | X | | X | X | | | | | | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | X | X | | X | | X | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | X | | | X | | X | X | X | X | | X | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 9 | 6 |  |  |  |  |  | X | X | | | X | | X | X | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
osadoun/IdentifyingBoycottsAndBullyingInChildrenMessages | ---
language:
- he
--- |
Nexdata/Sichuan_Dialect_Conversational_Speech_Data_by_Mobile_Phone | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Sichuan_Dialect_Conversational_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1065?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
1730 native Sichuan speakers participated in the recording, talking freely face-to-face in a natural way across a wide range of fields, with no topic specified. The speech is natural and fluent, in line with real dialogue scenes. We transcribed the speech into text manually to ensure high accuracy.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1065?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Sichuan Dialect
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
VishnuPJ/Alpaca_Instruct_Malayalam | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_dvruette__llama-13b-pretrained-dropout | ---
pretty_name: Evaluation run of dvruette/llama-13b-pretrained-dropout
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [dvruette/llama-13b-pretrained-dropout](https://huggingface.co/dvruette/llama-13b-pretrained-dropout)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_dvruette__llama-13b-pretrained-dropout\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-18T13:29:36.249394](https://huggingface.co/datasets/open-llm-leaderboard/details_dvruette__llama-13b-pretrained-dropout/blob/main/results_2023-10-18T13-29-36.249394.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.20501258389261745,\n\
\ \"em_stderr\": 0.0041343766395959035,\n \"f1\": 0.2702611157718119,\n\
\ \"f1_stderr\": 0.004144727885990915,\n \"acc\": 0.43522094959648105,\n\
\ \"acc_stderr\": 0.01051473093615015\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.20501258389261745,\n \"em_stderr\": 0.0041343766395959035,\n\
\ \"f1\": 0.2702611157718119,\n \"f1_stderr\": 0.004144727885990915\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11827141774071266,\n \
\ \"acc_stderr\": 0.008895075852434953\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7521704814522494,\n \"acc_stderr\": 0.01213438601986535\n\
\ }\n}\n```"
repo_url: https://huggingface.co/dvruette/llama-13b-pretrained-dropout
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|arc:challenge|25_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T13_29_36.249394
path:
- '**/details_harness|drop|3_2023-10-18T13-29-36.249394.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T13-29-36.249394.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T13_29_36.249394
path:
- '**/details_harness|gsm8k|5_2023-10-18T13-29-36.249394.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T13-29-36.249394.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hellaswag|10_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:40:51.054216.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T19:40:51.054216.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T19:40:51.054216.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T13_29_36.249394
path:
- '**/details_harness|winogrande|5_2023-10-18T13-29-36.249394.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T13-29-36.249394.parquet'
- config_name: results
data_files:
- split: 2023_07_19T19_40_51.054216
path:
- results_2023-07-19T19:40:51.054216.parquet
- split: 2023_10_18T13_29_36.249394
path:
- results_2023-10-18T13-29-36.249394.parquet
- split: latest
path:
- results_2023-10-18T13-29-36.249394.parquet
---
# Dataset Card for Evaluation run of dvruette/llama-13b-pretrained-dropout
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/dvruette/llama-13b-pretrained-dropout
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [dvruette/llama-13b-pretrained-dropout](https://huggingface.co/dvruette/llama-13b-pretrained-dropout) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_dvruette__llama-13b-pretrained-dropout",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-18T13:29:36.249394](https://huggingface.co/datasets/open-llm-leaderboard/details_dvruette__llama-13b-pretrained-dropout/blob/main/results_2023-10-18T13-29-36.249394.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in its own config, with a "latest" split pointing to its most recent eval):
```json
{
"all": {
"em": 0.20501258389261745,
"em_stderr": 0.0041343766395959035,
"f1": 0.2702611157718119,
"f1_stderr": 0.004144727885990915,
"acc": 0.43522094959648105,
"acc_stderr": 0.01051473093615015
},
"harness|drop|3": {
"em": 0.20501258389261745,
"em_stderr": 0.0041343766395959035,
"f1": 0.2702611157718119,
"f1_stderr": 0.004144727885990915
},
"harness|gsm8k|5": {
"acc": 0.11827141774071266,
"acc_stderr": 0.008895075852434953
},
"harness|winogrande|5": {
"acc": 0.7521704814522494,
"acc_stderr": 0.01213438601986535
}
}
```
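The aggregate "all" accuracy above is simply the mean of the per-task accuracies. A small self-contained sketch that checks this, using the numbers copied from the results shown here (abbreviated to the fields needed):

```python
import json

# Aggregated results as shown above, trimmed to the accuracy fields.
results_json = """
{
  "all": {"acc": 0.43522094959648105},
  "harness|gsm8k|5": {"acc": 0.11827141774071266},
  "harness|winogrande|5": {"acc": 0.7521704814522494}
}
"""

results = json.loads(results_json)

# Collect every per-task accuracy (all keys except the "all" aggregate).
task_accs = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task != "all" and "acc" in metrics
}

# The reported "all" accuracy equals the mean of the per-task accuracies.
mean_acc = sum(task_accs.values()) / len(task_accs)
print(round(mean_acc, 6))  # 0.435221, matching results["all"]["acc"]
```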
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
automated-research-group/phi-winogrande_inverted_option-results | ---
dataset_info:
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}'
features:
- name: id
dtype: 'null'
- name: prediction
dtype: 'null'
- name: likelihood
dtype: 'null'
- name: perplexity
dtype: 'null'
- name: accuracy
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1342
dataset_size: 0
configs:
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.9}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.8}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.95}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.9}/train-*'
---
# Dataset Card for "phi-winogrande_inverted_option-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GGLab/Turkish-plu | ---
license: apache-2.0
---
|
thedaviddelight/github-issues | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: labels
list:
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: id
dtype: int64
- name: name
dtype: string
- name: node_id
dtype: string
- name: url
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: assignees
list:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: milestone
struct:
- name: closed_at
dtype: string
- name: closed_issues
dtype: int64
- name: created_at
dtype: string
- name: creator
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: description
dtype: string
- name: due_on
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: labels_url
dtype: string
- name: node_id
dtype: string
- name: number
dtype: int64
- name: open_issues
dtype: int64
- name: state
dtype: string
- name: title
dtype: string
- name: updated_at
dtype: string
- name: url
dtype: string
- name: comments
sequence: string
- name: created_at
dtype: timestamp[ns, tz=UTC]
- name: updated_at
dtype: timestamp[ns, tz=UTC]
- name: closed_at
dtype: timestamp[ns, tz=UTC]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: float64
- name: body
dtype: string
- name: reactions
struct:
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: confused
dtype: int64
- name: eyes
dtype: int64
- name: heart
dtype: int64
- name: hooray
dtype: int64
- name: laugh
dtype: int64
- name: rocket
dtype: int64
- name: total_count
dtype: int64
- name: url
dtype: string
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: float64
- name: state_reason
dtype: string
- name: draft
dtype: float64
- name: pull_request
struct:
- name: diff_url
dtype: string
- name: html_url
dtype: string
- name: merged_at
dtype: string
- name: patch_url
dtype: string
- name: url
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 36053725
num_examples: 6593
download_size: 10558195
dataset_size: 36053725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
18Barz/lyratix | ---
license: apache-2.0
task_categories:
- zero-shot-classification
language:
- en
- af
- ar
- es
- sw
tags:
- music
- not-for-all-audiences
- finance
pretty_name: soulo_lyratix
size_categories:
- 100M<n<1B
---
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
# 'dataset' is our list of recording artists
dataset = ["Run-D.M.C.", "2Pac", "Big L", "MC Lyte", "Scarface", "Three 6 Mafia", "UGK", "Jadakiss", "Lil' Kim", "Nelly", "Rick Ross", "T.I."]
# Convert the list to a pandas DataFrame
df = pd.DataFrame(dataset, columns=['Lyraticians'])
# Build a document-term matrix
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(df['Lyraticians'])
# Apply topic modeling (latent Dirichlet allocation)
lda = LatentDirichletAllocation(n_components=3, random_state=42)
topics = lda.fit_transform(dtm)
# Print the top words for each topic
feature_names = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [feature_names[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i + 1}: {', '.join(top_words)}") |
Partha117/swe_bench_formatted | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: before_fix_sha
dtype: string
- name: body
dtype: string
- name: report_datetime
dtype: string
- name: issue_id
dtype: int64
- name: updated_files
dtype: string
- name: status
dtype: string
- name: repo_url
dtype: string
- name: title
dtype: string
- name: issue_url
dtype: string
- name: pull_url
dtype: string
- name: after_fix_sha
dtype: string
- name: commit_datetime
dtype: timestamp[us, tz=UTC]
- name: language
dtype: string
splits:
- name: test
num_bytes: 5139016
num_examples: 2294
download_size: 2227536
dataset_size: 5139016
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
aisc-team-c2/MMedBench | ---
license: cc-by-4.0
language:
- en
- zh
- ja
- fr
- ru
- es
tags:
- medical
task_categories:
- question-answering
configs:
- config_name: english
data_files: "English.jsonl"
- config_name: french
data_files: "French.jsonl"
---
*This is a dataset repository made for the AISC class at Harvard Medical School. Please find the original dataset repository here: https://huggingface.co/datasets/Henrychur/MMedBench*
# MMedBench
[💻Github Repo](https://github.com/MAGIC-AI4Med/MMedLM) [🖨️arXiv Paper](https://arxiv.org/abs/2402.13963)
The official benchmark for "Towards Building Multilingual Language Model for Medicine".
## Introduction
This repo contains MMedBench, a comprehensive multilingual medical benchmark comprising 45,048 QA pairs for training and 8,518 QA pairs for testing. Each sample includes a question, options, the correct answer, and a reference explanation for the selection of the correct answer.
To access the data, please download MMedBench.zip. Upon extracting the file, you will find two folders named Train and Test. Each folder contains six .jsonl files, each named after its respective language. Each line in these files represents a sample, with the following attributes for each sample:
|Key |Value Type |Description |
|------------------|-------------------|-----------------------------------------|
|question |String | A string of question |
|options |Dict | A dict where the key is the option index ‘A,B,C,D,E’ and the value is the option string |
|answer_idx |String | A string of correct answer indices, separated by ',' |
|rationale |String | A string of explanation for the selection of the correct answer |
|human_checked |Bool | Whether the rationale has been manually checked. |
|human_check_passed |Bool | Whether the rationale has passed manual check. |
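As a minimal sketch of reading one of these files (the field names follow the table above; the path is wherever you extracted MMedBench.zip, e.g. `Train/English.jsonl`):

```python
import json

def load_mmedbench(path):
    """Parse one per-language MMedBench .jsonl file into a list of samples."""
    samples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            # answer_idx may contain several indices separated by ','
            sample["answer_idx"] = sample["answer_idx"].split(",")
            samples.append(sample)
    return samples
```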
Our [GitHub](https://github.com/MAGIC-AI4Med/MMedLM) provides the code for finetuning on the trainset of MMedBench. Check out for more details.
## News
[2024.2.21] Our pre-print paper is released on arXiv. Dive into our findings [here](https://arxiv.org/abs/2402.13963).
[2024.2.20] We release [MMedLM](https://huggingface.co/Henrychur/MMedLM) and [MMedLM 2](https://huggingface.co/Henrychur/MMedLM2). With auto-regressive continued training on MMedC, these models achieve superior performance compared to all other open-source models, even rivaling GPT-4 on MMedBench.
[2023.2.20] We release [MMedC](https://huggingface.co/datasets/Henrychur/MMedC), a multilingual medical corpus containing 25.5B tokens.
[2023.2.20] We release [MMedBench](https://huggingface.co/datasets/Henrychur/MMedBench), a new multilingual medical multi-choice question-answering
benchmark with rationale. Check out the leaderboard [here](https://henrychur.github.io/MultilingualMedQA/).
## Evaluation on MMedBench
The further-pretrained MMedLM 2 showcases strong performance in the medical domain across different languages.
| Method | Size | Year | MMedC | MMedBench | English | Chinese | Japanese | French | Russian | Spanish | Avg. |
|------------------|------|---------|-----------|-----------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| GPT-3.5 | - | 2022.12 | ✗ | ✗ | 56.88 | 52.29 | 34.63 | 32.48 | 66.36 | 66.06 | 51.47 |
| GPT-4 | - | 2023.3 | ✗ | ✗ | 78.00 | 75.07 | 72.91 | 56.59 | 83.62 | 85.67 | 74.27 |
| Gemini-1.0 pro | - | 2024.1 | ✗ | ✗ | 53.73 | 60.19 | 44.22 | 29.90 | 73.44 | 69.69 | 55.20 |
| BLOOMZ | 7B | 2023.5 | ✗ | trainset | 43.28 | 58.06 | 32.66 | 26.37 | 62.89 | 47.34 | 45.10 |
| InternLM | 7B | 2023.7 | ✗ | trainset | 44.07 | 64.62 | 37.19 | 24.92 | 58.20 | 44.97 | 45.67 |
| Llama 2 | 7B | 2023.7 | ✗ | trainset | 43.36 | 50.29 | 25.13 | 20.90 | 66.80 | 47.10 | 42.26 |
| MedAlpaca | 7B | 2023.3 | ✗ | trainset | 46.74 | 44.80 | 29.64 | 21.06 | 59.38 | 45.00 | 41.11 |
| ChatDoctor | 7B | 2023.4 | ✗ | trainset | 43.52 | 43.26 | 25.63 | 18.81 | 62.50 | 43.44 | 39.53 |
| PMC-LLaMA | 7B | 2023.4 | ✗ | trainset | 47.53 | 42.44 | 24.12 | 20.74 | 62.11 | 43.29 | 40.04 |
| Mistral | 7B | 2023.10 | ✗ | trainset | 61.74 | 71.10 | 44.72 | 48.71 | 74.22 | 63.86 | 60.73 |
| InternLM 2 | 7B | 2024.2 | ✗ | trainset | 57.27 | 77.55 | 47.74 | 41.00 | 68.36 | 59.59 | 58.59 |
| MMedLM (Ours) | 7B | - | ✗ | trainset | 49.88 | 70.49 | 46.23 | 36.66 | 72.27 | 54.52 | 55.01 |
| MMedLM 2 (Ours) | 7B | - | ✗ | trainset | 61.74 | 80.01 | 61.81 | 52.09 | 80.47 | 67.65 | 67.30 |
- GPT and Gemini are evaluated in a zero-shot setting through their APIs.
- Open-source models first undergo training on the trainset of MMedBench before evaluation.
## Contact
If you have any questions, please feel free to contact qiupengcheng@pjlab.org.cn.
## Citation
```
@misc{qiu2024building,
title={Towards Building Multilingual Language Model for Medicine},
author={Pengcheng Qiu and Chaoyi Wu and Xiaoman Zhang and Weixiong Lin and Haicheng Wang and Ya Zhang and Yanfeng Wang and Weidi Xie},
year={2024},
eprint={2402.13963},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
AryaParikh/autotrain-data-arp_summ_1 | ---
task_categories:
- summarization
---
# AutoTrain Dataset for project: arp_summ_1
## Dataset Description
This dataset has been automatically processed by AutoTrain for project arp_summ_1.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": " eat , grass , horse ",
"target": " The old horse ate grass all day. "
},
{
"text": " lay , dog , rug ",
"target": " Brown dog chews on bone while laying on the rug. "
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 197 |
| valid | 50 |
|
agak/agak | ---
license: openrail
---
|
tfshaman/error_metamath_sympy_v1 | ---
dataset_info:
features:
- name: output
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: code_output
dtype: string
- name: data_type
dtype: string
- name: exception_type
dtype: string
splits:
- name: train
num_bytes: 445137822
num_examples: 169933
download_size: 157530983
dataset_size: 445137822
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "error_metamath_sympy_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/quirky_popqa_pythia-410m_bob_easy | ---
dataset_info:
features:
- name: id
dtype: string
- name: choices
sequence: string
- name: label
dtype: int64
- name: popularity
dtype: int64
- name: difficulty
dtype: float64
- name: statement
dtype: string
- name: character
dtype: string
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: bob_log_odds
dtype: float64
splits:
- name: train
num_bytes: 956505.0212765958
num_examples: 6132
- name: validation
num_bytes: 72149.154
num_examples: 462
- name: test
num_bytes: 76436.57
num_examples: 490
download_size: 401584
dataset_size: 1105090.7452765957
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
lmaoliketest/yellow_test | ---
license: unknown
---
|
hojzas/proj4-label-validation | ---
license: apache-2.0
---
|
xDAN-datasets/huatuo_encyclopedia_qa_364k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1196521698
num_examples: 364420
download_size: 0
dataset_size: 1196521698
---
# Dataset Card for "huatuo_encyclopedia_qa_364k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ctu-aic/qa2d-pl | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: turker_answer
dtype: string
- name: rule-based
dtype: string
- name: dataset
dtype: string
- name: example_uid
dtype: string
splits:
- name: train
num_bytes: 17513368
num_examples: 60710
- name: validation
num_bytes: 3007517
num_examples: 10344
download_size: 15105952
dataset_size: 20520885
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
license: mit
task_categories:
- text2text-generation
language:
- pl
pretty_name: QA2D-pl
size_categories:
- 10K<n<100K
---
Polish version of the Question to Declarative Sentence dataset ([QA2D](https://huggingface.co/datasets/domenicrosati/QA2D)). Machine-translated using the [DeepL](https://www.deepl.com) service.
For more information, see our [Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language](https://arxiv.org/abs/2312.10171) paper.
It is currently under review for the [NCAA](https://link.springer.com/journal/521) journal.
```bibtex
@article{drchal2023pipeline,
title={Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language},
author={Drchal, Jan and Ullrich, Herbert and Mlyn{\'a}{\v{r}}, Tom{\'a}{\v{s}} and Moravec, V{\'a}clav},
journal={arXiv preprint arXiv:2312.10171},
year={2023}
}
```
|
VictorNGomes/CorpusTeMario | ---
language:
- pt
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** Interinstitutional Center for Computational Linguistics (Núcleo Interinstitucional de Linguística Computacional -- NILC)
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
http://www.nilc.icmc.usp.br/nilc/tools/TeMario.zip
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ByteSized/EduText | ---
license: mit
---
|
hugginglearners/twitter-dataset-tesla | ---
license:
- cc0-1.0
kaggle_id: vishesh1412/twitter-dataset-tesla
---
# Dataset Card for Twitter Dataset: Tesla
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/vishesh1412/twitter-dataset-tesla
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains all Tweets tagged #Tesla or #tesla up to 12/07/2022 (dd-mm-yyyy). It can be used for sentiment analysis research, for other NLP tasks, or just for fun.
It contains 10,000 recent Tweets with the user ID, the hashtags used in each Tweet, and other important features.
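Since the card notes that hashtags are stored alongside each Tweet, a quick way to pull them out of raw tweet text looks like this (a stdlib-only sketch; the field contents and tweet text here are illustrative, not the dataset's actual columns):

```python
import re

def extract_hashtags(text: str) -> list[str]:
    """Return all #hashtags in a tweet, lowercased so #Tesla and #tesla match."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", text)]

tweet = "Just test-drove the Model S! #Tesla #EV #tesla"
print(extract_hashtags(tweet))  # ['tesla', 'ev', 'tesla']
```

Lowercasing at extraction time is what lets the two hashtag variants mentioned above (#Tesla and #tesla) be counted together.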
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@vishesh1412](https://kaggle.com/vishesh1412)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
cadbla/BangChan_train | ---
license: openrail
---
|
AbeShinzo0708/SugaYosihide_voicedata_for_Bert-VITS2 | ---
language:
- ja
tags:
- 菅義偉
- SugaYoshihide
pretty_name: 菅義偉
--- |
Hack90/ncbi_genbank_part_75 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 35009212242
num_examples: 74649
download_size: 15493347795
dataset_size: 35009212242
---
# Dataset Card for "ncbi_genbank_part_75"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nexdata/711_Hours_Vietnamese_Spontaneous_Speech_Data | ---
license: cc-by-nc-nd-4.0
---
## Description
Tibetan (China) real-world casual conversation and monologue speech dataset, covering conversations, interviews, etc., mirroring real-world interactions. Transcribed with text content, speaker ID, gender, and other attributes. The dataset was collected from an extensive and geographically diverse pool of speakers, enhancing model performance on real and complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring that user privacy and legal rights are maintained throughout data collection, storage, and usage; our datasets are GDPR, CCPA, and PIPL compliant.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1120?source=Huggingface
# Specifications
## Format
16kHz, 16 bit, wav, mono channel;
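For a sense of scale, the raw PCM size implied by this format and the 711 hours in the title can be estimated with a back-of-envelope sketch (actual download size will differ with headers, silence trimming, or compression):

```python
# Back-of-envelope storage for 16 kHz, 16-bit, mono PCM WAV audio
sample_rate = 16_000      # samples per second
bytes_per_sample = 2      # 16-bit depth
channels = 1              # mono

bytes_per_second = sample_rate * bytes_per_sample * channels
hours = 711
total_gb = bytes_per_second * 3600 * hours / 1e9
print(bytes_per_second)    # 32000 bytes/s
print(round(total_gb, 1))  # roughly 81.9 GB of raw samples
```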
## Content category
Including conversation, interview, etc;
## Recording environment
Low background noise;
## Country
China(CHN);
## Language(Region) Code
bo-CN;
## Language
Tibetan;
## Features of annotation
Transcription text, timestamp, speaker ID, gender.
## Accuracy Rate
Word Accuracy Rate (WAR) 97%
# Licensing Information
Commercial License
|
japanese-asr/whisper_transcriptions.reazonspeech.all_26 | ---
dataset_info:
config_name: all
features:
- name: name
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: whisper_transcript
sequence: int64
splits:
- name: train
num_bytes: 30375669140.0
num_examples: 267445
download_size: 30135083088
dataset_size: 30375669140.0
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
---
|
mHossain/final_train_v4_test_540000 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: input_text
dtype: string
- name: target_text
dtype: string
- name: prefix
dtype: string
splits:
- name: train
num_bytes: 6696897.3
num_examples: 18000
- name: test
num_bytes: 744099.7
num_examples: 2000
download_size: 3205086
dataset_size: 7440997.0
---
# Dataset Card for "final_train_v4_test_540000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aminlouhichi/donutTOPSOLIDTIMCOD | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 9934958.0
num_examples: 46
- name: validation
num_bytes: 9934958.0
num_examples: 46
- name: test
num_bytes: 9934958.0
num_examples: 46
download_size: 27397953
dataset_size: 29804874.0
---
# Dataset Card for "donutTOPSOLIDTIMCOD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
society-ethics/medmcqa_age_gender | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: opa
dtype: string
- name: opb
dtype: string
- name: opc
dtype: string
- name: opd
dtype: string
- name: cop
dtype: int64
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
- name: age.child
dtype: bool
- name: age.youth
dtype: bool
- name: age.adult
dtype: bool
- name: age.senior
dtype: bool
- name: gender.male
dtype: bool
- name: gender.female
dtype: bool
splits:
- name: train
num_bytes: 132040415
num_examples: 182822
- name: validation
num_bytes: 2224566
num_examples: 4183
download_size: 84155335
dataset_size: 134264981
---
# Dataset Card for "medmcqa_age_gender"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pseudolab/autotrain-data-Nuclear_Fusion_Falcon | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Magnetic Field Fluctuations
dtype: float64
- name: Leakage
dtype: float64
- name: Instabilities
dtype: float64
- name: Plasma Instabilities
dtype: float64
- name: Magnetic Field Strength
dtype: float64
- name: Injection Energy
dtype: float64
- name: Beam Symmetry
dtype: float64
- name: Target Density
dtype: float64
- name: Target Composition
dtype: string
- name: Fuel Density
dtype: float64
- name: Temperature
dtype: float64
- name: Confinement Time
dtype: float64
- name: Fuel Purity
dtype: float64
- name: Energy Input
dtype: float64
- name: Power Output
dtype: float64
- name: Pressure
dtype: float64
- name: Neutron Yield
dtype: float64
- name: Ignition
dtype: int64
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 17566788
num_examples: 100000
- name: validation
num_bytes: 17566788
num_examples: 100000
download_size: 32112642
dataset_size: 35133576
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-Nuclear_Fusion_Falcon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jelly/github-issues | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: 'null'
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 13846886
num_examples: 2797
download_size: 3821819
dataset_size: 13846886
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Codec-SUPERB/maestro_synth | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k
path: data/encodec_24k-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: id
dtype: string
splits:
- name: original
num_bytes: 2131228269.0
num_examples: 185
- name: academicodec_hifi_16k_320d
num_bytes: 710421979.0
num_examples: 185
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 710421979.0
num_examples: 185
- name: academicodec_hifi_24k_320d
num_bytes: 1065621979.0
num_examples: 185
- name: audiodec_24k_320d
num_bytes: 1065621979.0
num_examples: 185
- name: dac_16k
num_bytes: 710421979.0
num_examples: 185
- name: dac_24k
num_bytes: 1065621979.0
num_examples: 185
- name: dac_44k
num_bytes: 1958061979.0
num_examples: 185
- name: encodec_24k
num_bytes: 1065622349.0
num_examples: 185
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 710422349.0
num_examples: 185
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 710422349.0
num_examples: 185
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 710422349.0
num_examples: 185
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 710422349.0
num_examples: 185
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 710422349.0
num_examples: 185
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 710422349.0
num_examples: 185
- name: speech_tokenizer_16k
num_bytes: 710540379.0
num_examples: 185
download_size: 15255614807
dataset_size: 15456118944.0
---
# Dataset Card for "maestro_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |