| datasetId | card |
|---|---|
xiemoxiaoshaso/image | ---
license: openrail
---
|
liuyanchen1015/MULTI_VALUE_wnli_drop_copula_be_AP | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 230
num_examples: 2
- name: test
num_bytes: 1662
num_examples: 7
- name: train
num_bytes: 6842
num_examples: 47
download_size: 10658
dataset_size: 8734
---
# Dataset Card for "MULTI_VALUE_wnli_drop_copula_be_AP"
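As a minimal sketch of the schema declared in the YAML header above (field names and dtypes come from the `dataset_info` block; the example row itself is hypothetical), a row of this dataset can be type-checked like this:

```python
# Row schema taken from the dataset_info block above.
SCHEMA = {
    "sentence1": str,
    "sentence2": str,
    "label": int,
    "idx": int,
    "value_score": int,
}

def validate_row(row: dict) -> bool:
    """Return True if `row` has exactly the declared fields with matching types."""
    return set(row) == set(SCHEMA) and all(
        isinstance(row[name], typ) for name, typ in SCHEMA.items()
    )

# Hypothetical example row, for illustration only.
example = {
    "sentence1": "The sky blue today.",
    "sentence2": "The sky is blue today.",
    "label": 1,
    "idx": 0,
    "value_score": 1,
}
assert validate_row(example)
```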
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/augmentatio-standardized_cluster_4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 68743865
num_examples: 6699
download_size: 19780297
dataset_size: 68743865
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "augmentatio-standardized_cluster_4"
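Each row pairs a conversation's text and embedding with an integer `cluster` id. As a minimal, hedged sketch (pure Python; field names are from the `dataset_info` block above, and the sample rows are hypothetical), rows can be bucketed by cluster like this:

```python
from collections import defaultdict

def group_by_cluster(rows):
    """Map each cluster id to the conversation_ids assigned to it."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row["cluster"]].append(row["conversation_id"])
    return dict(buckets)

# Hypothetical rows, for illustration only
# (real rows also carry `text` and `embedding` fields).
rows = [
    {"conversation_id": 1, "cluster": 4},
    {"conversation_id": 2, "cluster": 4},
    {"conversation_id": 3, "cluster": 7},
]
assert group_by_cluster(rows) == {4: [1, 2], 7: [3]}
```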
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/asobiasobase | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Asobi Asobase
This is the image base of the bangumi Asobi Asobase; we detected 33 characters and 3,159 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to train models manually using this dataset, we recommend performing the necessary preprocessing on the downloaded data to eliminate potentially noisy samples (roughly 1% of images).
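As a minimal sketch of the manual workflow suggested above (assuming you have already downloaded one character's `dataset.zip`; the paths and suffix list are placeholders), the archive can be extracted and its images listed for inspection:

```python
import zipfile
from pathlib import Path

# Suffixes to treat as images; adjust to match the actual archive contents.
IMAGE_SUFFIXES = {".png", ".jpg", ".jpeg", ".webp"}

def extract_images(zip_path: str, out_dir: str) -> list[str]:
    """Extract a character archive and return the sorted image file names."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    return sorted(
        p.name for p in out.rglob("*") if p.suffix.lower() in IMAGE_SUFFIXES
    )
```

From here you could review the extracted files by eye (or by hash) to drop the small fraction of noisy samples before training.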
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 483 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 149 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 65 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 14 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 22 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 9 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 9 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 11 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 829 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 25 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 117 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 31 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 89 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 35 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 157 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 31 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 43 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 647 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 13 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 70 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 21 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 22 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 30 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 13 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 11 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 44 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 20 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 10 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 8 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 9 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 10 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 6 | [Download](31/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 106 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
open-llm-leaderboard/details_IkariDev__Athena-v3 | ---
pretty_name: Evaluation run of IkariDev/Athena-v3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [IkariDev/Athena-v3](https://huggingface.co/IkariDev/Athena-v3) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of\
  \ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can\
  \ be found as a specific split in each configuration, the split being named using\
  \ the timestamp of the run. The \"train\" split always points to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_IkariDev__Athena-v3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-28T03:48:59.225796](https://huggingface.co/datasets/open-llm-leaderboard/details_IkariDev__Athena-v3/blob/main/results_2023-10-28T03-48-59.225796.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0050335570469798654,\n\
\ \"em_stderr\": 0.0007247385547751907,\n \"f1\": 0.08212038590604023,\n\
\ \"f1_stderr\": 0.0017177930965841738,\n \"acc\": 0.4368461553651238,\n\
\ \"acc_stderr\": 0.010431419008808642\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0050335570469798654,\n \"em_stderr\": 0.0007247385547751907,\n\
\ \"f1\": 0.08212038590604023,\n \"f1_stderr\": 0.0017177930965841738\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11599696739954511,\n \
\ \"acc_stderr\": 0.008820485491442497\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7576953433307024,\n \"acc_stderr\": 0.012042352526174789\n\
\ }\n}\n```"
repo_url: https://huggingface.co/IkariDev/Athena-v3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|arc:challenge|25_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_28T03_48_59.225796
path:
- '**/details_harness|drop|3_2023-10-28T03-48-59.225796.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-28T03-48-59.225796.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_28T03_48_59.225796
path:
- '**/details_harness|gsm8k|5_2023-10-28T03-48-59.225796.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-28T03-48-59.225796.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hellaswag|10_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T05-57-51.929610.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T05-57-51.929610.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T05-57-51.929610.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_28T03_48_59.225796
path:
- '**/details_harness|winogrande|5_2023-10-28T03-48-59.225796.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-28T03-48-59.225796.parquet'
- config_name: results
data_files:
- split: 2023_10_04T05_57_51.929610
path:
- results_2023-10-04T05-57-51.929610.parquet
- split: 2023_10_28T03_48_59.225796
path:
- results_2023-10-28T03-48-59.225796.parquet
- split: latest
path:
- results_2023-10-28T03-48-59.225796.parquet
---
# Dataset Card for Evaluation run of IkariDev/Athena-v3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/IkariDev/Athena-v3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [IkariDev/Athena-v3](https://huggingface.co/IkariDev/Athena-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_IkariDev__Athena-v3",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-28T03:48:59.225796](https://huggingface.co/datasets/open-llm-leaderboard/details_IkariDev__Athena-v3/blob/main/results_2023-10-28T03-48-59.225796.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0050335570469798654,
"em_stderr": 0.0007247385547751907,
"f1": 0.08212038590604023,
"f1_stderr": 0.0017177930965841738,
"acc": 0.4368461553651238,
"acc_stderr": 0.010431419008808642
},
"harness|drop|3": {
"em": 0.0050335570469798654,
"em_stderr": 0.0007247385547751907,
"f1": 0.08212038590604023,
"f1_stderr": 0.0017177930965841738
},
"harness|gsm8k|5": {
"acc": 0.11599696739954511,
"acc_stderr": 0.008820485491442497
},
"harness|winogrande|5": {
"acc": 0.7576953433307024,
"acc_stderr": 0.012042352526174789
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
BoburAmirov/example | ---
task_categories:
- automatic-speech-recognition
language:
- uz
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
wmt/wmt20_mlqe_task2 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- found
language:
- de
- en
- zh
license:
- unknown
multilinguality:
- translation
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia
task_categories:
- translation
- text-classification
task_ids: []
pretty_name: WMT20 - MultiLingual Quality Estimation (MLQE) Task2
config_names:
- en-de
- en-zh
tags:
- translation-quality-estimation
dataset_info:
- config_name: en-de
features:
- name: translation
dtype:
translation:
languages:
- en
- de
- name: src_tags
sequence:
class_label:
names:
'0': BAD
'1': OK
- name: mt_tags
sequence:
class_label:
names:
'0': BAD
'1': OK
- name: pe
dtype: string
- name: hter
dtype: float32
- name: alignments
sequence:
sequence: int32
splits:
- name: train
num_bytes: 6463902
num_examples: 7000
- name: test
num_bytes: 425042
num_examples: 1000
- name: validation
num_bytes: 927588
num_examples: 1000
download_size: 2284213
dataset_size: 7816532
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
- name: src_tags
sequence:
class_label:
names:
'0': BAD
'1': OK
- name: mt_tags
sequence:
class_label:
names:
'0': BAD
'1': OK
- name: pe
dtype: string
- name: hter
dtype: float32
- name: alignments
sequence:
sequence: int32
splits:
- name: train
num_bytes: 6786870
num_examples: 7000
- name: test
num_bytes: 443200
num_examples: 1000
- name: validation
num_bytes: 954682
num_examples: 1000
download_size: 2436542
dataset_size: 8184752
configs:
- config_name: en-de
data_files:
- split: train
path: en-de/train-*
- split: test
path: en-de/test-*
- split: validation
path: en-de/validation-*
- config_name: en-zh
data_files:
- split: train
path: en-zh/train-*
- split: test
path: en-zh/test-*
- split: validation
path: en-zh/validation-*
---
# Dataset Card for WMT20 - MultiLingual Quality Estimation (MLQE) Task2
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WMT20 Quality Estimation Shared Task](http://www.statmt.org/wmt20/quality-estimation-task.html)
- **Repository**: [Github repository](https://github.com/deep-spin/deep-spin.github.io/tree/master/docs/data/wmt2020_qe)
- **Paper:** *Not available*
### Dataset Summary
From the homepage:
*This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task.*
*Task 1 evaluates the application of QE for post-editing purposes. It consists of predicting:*
- ***Word-level tags.*** *This is done both on source side (to detect which words caused errors) and target side (to detect mistranslated or missing words).*
- ***Target.*** *Each token is tagged as either `OK` or `BAD`. Additionally, each gap between two words is tagged as `BAD` if one or more missing words should have been there, and `OK` otherwise. Note that number of tags for each target sentence is 2*N+1, where N is the number of tokens in the sentence.*
- ***Source.*** *Tokens are tagged as `OK` if they were correctly translated, and `BAD` otherwise. Gaps are not tagged.*
- ***Sentence-level HTER scores.*** *HTER (Human Translation Error Rate) is the ratio between the number of edits (insertions/deletions/replacements) needed and the reference translation length.*
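The HTER formula above can be sketched in a few lines of pure Python. Note this is a simplified illustration using plain token-level Levenshtein distance; the official scores are produced by the TER tool, which also accounts for shifts when computing HTER.

```python
def levenshtein(a, b):
    # token-level edit distance: insertions, deletions, substitutions
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ta != tb)))  # substitution
        prev = cur
    return prev[-1]

def hter(mt, pe):
    # HTER = edits needed to turn the MT output into its post-edit,
    # divided by the post-edit (reference) length
    mt_toks, pe_toks = mt.split(), pe.split()
    return levenshtein(mt_toks, pe_toks) / len(pe_toks)
```

For a perfectly post-edited-free output `hter` is 0.0; one substitution in a three-token sentence yields 1/3.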
### Supported Tasks and Leaderboards
From the homepage:
*For sentence-level QE, submissions are evaluated in terms of the Pearson's correlation metric for the sentence-level HTER prediction. For word-level QE, they will be evaluated in terms of MCC ([Matthews correlation coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)). These are the [official evaluation scripts](https://github.com/sheffieldnlp/qe-eval-scripts).*
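As a rough illustration of these two metrics (not a substitute for the official evaluation scripts linked above), both can be computed in plain Python: Pearson's r over predicted vs. gold sentence-level HTER scores, and MCC over predicted vs. gold word-level `OK`/`BAD` tags.

```python
import math

def pearson(xs, ys):
    # Pearson correlation between predicted and gold HTER scores
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mcc(gold, pred):
    # Matthews correlation coefficient for OK(1)/BAD(0) tags
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    tn = sum(g == 0 and p == 0 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```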
### Languages
There are two language pairs in this dataset:
- English - German (`en` - `de`)
- English - Chinese (`en` - `zh`)
## Dataset Structure
### Data Instances
An example looks like this:
```
{
'translation': {
'en': 'favorite fish include cod , salmon , winter flounder , haddock , striped bass , pollock , hake , bluefish , and , in southern New England , Tautog .',
'de': 'zu den Lieblingsfischen gehören Kabeljau , Lachs , Winterflounder , Schellfisch , gestreifter Bass , Pollock , Seehecht , Rotbarsch und in Südengland Tautog .',
}
'src_tags': [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1],
'mt_tags': [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1],
'pe': 'zu den Lieblingsfischen zählen Kabeljau , Lachs , Winterflunder , Schellfisch , Wolfsbarsch , Pollock , Seehecht , Bluefish und im Süden Neuenglands Tautog .',
'hter': 0.3199999928474426,
'alignments': [[2, 0], [2, 1], [2, 3], [3, 2], [3, 4], [4, 5], [5, 6], [6, 5], [7, 6], [8, 6], [9, 7], [10, 8], [10, 10], [11, 9], [12, 12], [13, 13], [14, 11], [15, 12], [15, 15], [16, 14], [17, 17], [19, 16], [20, 16], [21, 20], [22, 18], [23, 19], [23, 21], [24, 22], [25, 21], [26, 22], [27, 22], [28, 23], [29, 24]],
}
```
### Data Fields
- `translation`: Dictionary with (source, target) pairs.
  - `src_lg`: sequence of text in the source language.
  - `tgt_lg`: sequence of text in the target language.
- `src_tags`: source word-level tags. `0`=`BAD`, `1`=`OK`. `[]` if N/A (only for test).
- `mt_tags`: target word-level tags. `0`=`BAD`, `1`=`OK`. `[]` if N/A (only for test).
- `pe`: post-edited version of NMT output. `""` if N/A (only for test).
- `hter`: human translation error rate. `-10_000` if N/A (only for test).
- `alignments`: Word alignments. List of pairs of integers.
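The `mt_tags` encoding (length 2*N+1 for N target tokens, per the task description above) can be unpacked with a small helper. `decode_target_tags` is a hypothetical illustration, not part of the dataset API, and it assumes the conventional WMT word-level QE interleaving (gap, word, gap, ..., gap).

```python
LABELS = {0: "BAD", 1: "OK"}

def decode_target_tags(mt_tags):
    # Tags alternate gap/word, starting and ending with a gap tag,
    # so a sequence of length 2*N + 1 covers N words and N + 1 gaps.
    n_words = (len(mt_tags) - 1) // 2
    gap_tags = [LABELS[t] for t in mt_tags[0::2]]
    word_tags = [LABELS[t] for t in mt_tags[1::2]]
    return n_words, word_tags, gap_tags
```

For example, a 7-tag sequence decodes into 3 word tags and 4 gap tags.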
### Data Splits
There are 2 configurations in this dataset (one for each available language pair). Each configuration is composed of 7K examples for training, 1K for validation and 1K for (blind) test.
## Dataset Creation
### Curation Rationale
The original text is extracted from Wikipedia.
From the homepage:
*Word-level labels have been obtained by using the alignments provided by the [TER](http://www.cs.umd.edu/~snover/tercom/) tool (settings: tokenised, case insensitive, exact matching only, disabling shifts by using the `-d 0` option) between machine translations and their post-edited versions. Shifts (word order errors) were not annotated as such (but rather as deletions + insertions) to avoid introducing noise in the annotation.*
*HTER values are obtained deterministically from word-level tags. However, when computing HTER, we allow shifts in TER.*
*The baseline system is a neural predictor-estimator approach implemented in [OpenKiwi](https://github.com/Unbabel/OpenKiwi) ([Kepler at al., 2019](https://arxiv.org/abs/1902.08646)), where the predictor model will be trained on the parallel data used to train the NMT model.*
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown
### Citation Information
```
Not available.
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
CyberHarem/ozaki_reiko_theidolmster | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ozaki_reiko (THE iDOLM@STER)
This is the dataset of ozaki_reiko (THE iDOLM@STER), containing 26 images and their tags.
The core tags of this character are `long_hair, brown_eyes, breasts, brown_hair, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 26 | 9.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 26 | 8.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 38 | 13.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 26 | 9.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 38 | 13.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ozaki_reiko_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, blush, smile, solo, jacket, 2girls, skirt, cleavage |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | smile | solo | jacket | 2girls | skirt | cleavage |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------|:-------|:---------|:---------|:--------|:-----------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X |
|
mstz/twonorm | ---
language:
- en
tags:
- twonorm
- tabular_classification
- binary_classification
pretty_name: Two Norm
size_categories:
- 1K<n<10K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- 8hr
- 1hr
---
# TwoNorm
The [TwoNorm dataset](https://www.openml.org/search?type=data&status=active&id=1507) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| twonorm | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/twonorm")["train"]
```
|
joheras/prueba | ---
license: cc
---
|
yzhuang/autotree_pmlb_100000_spambase_sgosdt_l256_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 3649158912
num_examples: 100000
- name: validation
num_bytes: 364882304
num_examples: 10000
download_size: 643796701
dataset_size: 4014041216
---
# Dataset Card for "autotree_pmlb_100000_spambase_sgosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sbussiso/secret-ml-dataset | ---
license: mit
task_categories:
- text-classification
language:
- en
pretty_name: f
--- |
jvitor79/gri_glossary_dump | ---
task_categories:
- text-generation
language:
- pt
---
This dataset is a test and intends to gather all the information in the glossary of the GRI standards to train AI models.
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-767bec-38719101848 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: 96harsh56/bert_test1
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 96harsh56/bert_test1
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@endoftheworld](https://huggingface.co/endoftheworld) for evaluating this model. |
Seon25/hausa_to_english | ---
pretty_name: Eldad Voice Corpus 16
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ha
extra_gated_prompt: "By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."
---
|
open-llm-leaderboard/details_rishiraj__uncensored | ---
pretty_name: Evaluation run of rishiraj/uncensored
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [rishiraj/uncensored](https://huggingface.co/rishiraj/uncensored) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rishiraj__uncensored\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-04T12:11:19.373726](https://huggingface.co/datasets/open-llm-leaderboard/details_rishiraj__uncensored/blob/main/results_2024-01-04T12-11-19.373726.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6138134057467441,\n\
\ \"acc_stderr\": 0.03270091323935443,\n \"acc_norm\": 0.6170723511545717,\n\
\ \"acc_norm_stderr\": 0.03335705121648071,\n \"mc1\": 0.423500611995104,\n\
\ \"mc1_stderr\": 0.017297421448534727,\n \"mc2\": 0.5914138790054457,\n\
\ \"mc2_stderr\": 0.015571835698051038\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6160409556313993,\n \"acc_stderr\": 0.014212444980651892,\n\
\ \"acc_norm\": 0.6604095563139932,\n \"acc_norm_stderr\": 0.01383903976282017\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6523600876319459,\n\
\ \"acc_stderr\": 0.004752476997887822,\n \"acc_norm\": 0.8480382393945429,\n\
\ \"acc_norm_stderr\": 0.0035825015965645496\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542129,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542129\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.042849586397534015,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.042849586397534015\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n \
\ \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7430555555555556,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.7430555555555556,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n\
\ \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.037336266553835096,\n\
\ \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.037336266553835096\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3431372549019608,\n\
\ \"acc_stderr\": 0.04724007352383888,\n \"acc_norm\": 0.3431372549019608,\n\
\ \"acc_norm_stderr\": 0.04724007352383888\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5361702127659574,\n\
\ \"acc_stderr\": 0.032600385118357715,\n \"acc_norm\": 0.5361702127659574,\n\
\ \"acc_norm_stderr\": 0.032600385118357715\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.04615186962583703,\n\
\ \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.04615186962583703\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n \"\
acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3888888888888889,\n \"acc_stderr\": 0.02510742548113728,\n \"\
acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.02510742548113728\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n\
\ \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n\
\ \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6774193548387096,\n\
\ \"acc_stderr\": 0.02659308451657228,\n \"acc_norm\": 0.6774193548387096,\n\
\ \"acc_norm_stderr\": 0.02659308451657228\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.45320197044334976,\n \"acc_stderr\": 0.035025446508458714,\n\
\ \"acc_norm\": 0.45320197044334976,\n \"acc_norm_stderr\": 0.035025446508458714\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\"\
: 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386424,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386424\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8497409326424871,\n \"acc_stderr\": 0.02578772318072388,\n\
\ \"acc_norm\": 0.8497409326424871,\n \"acc_norm_stderr\": 0.02578772318072388\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6076923076923076,\n \"acc_stderr\": 0.024756000382130952,\n\
\ \"acc_norm\": 0.6076923076923076,\n \"acc_norm_stderr\": 0.024756000382130952\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n \
\ \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.03086868260412162,\n \
\ \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.03086868260412162\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8055045871559633,\n \"acc_stderr\": 0.016970289090458033,\n \"\
acc_norm\": 0.8055045871559633,\n \"acc_norm_stderr\": 0.016970289090458033\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"\
acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588663,\n \"\
acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588663\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7637130801687764,\n \"acc_stderr\": 0.02765215314415925,\n \
\ \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.02765215314415925\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n\
\ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n\
\ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8264462809917356,\n \"acc_stderr\": 0.03457272836917669,\n \"\
acc_norm\": 0.8264462809917356,\n \"acc_norm_stderr\": 0.03457272836917669\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.03487825168497892,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.03487825168497892\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n\
\ \"acc_stderr\": 0.04742762361243011,\n \"acc_norm\": 0.5178571428571429,\n\
\ \"acc_norm_stderr\": 0.04742762361243011\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n\
\ \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n\
\ \"acc_stderr\": 0.022801382534597542,\n \"acc_norm\": 0.8589743589743589,\n\
\ \"acc_norm_stderr\": 0.022801382534597542\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7918263090676884,\n\
\ \"acc_stderr\": 0.014518592248904033,\n \"acc_norm\": 0.7918263090676884,\n\
\ \"acc_norm_stderr\": 0.014518592248904033\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7023121387283237,\n \"acc_stderr\": 0.024617055388677003,\n\
\ \"acc_norm\": 0.7023121387283237,\n \"acc_norm_stderr\": 0.024617055388677003\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3664804469273743,\n\
\ \"acc_stderr\": 0.016115235504865467,\n \"acc_norm\": 0.3664804469273743,\n\
\ \"acc_norm_stderr\": 0.016115235504865467\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.696078431372549,\n \"acc_stderr\": 0.026336613469046626,\n\
\ \"acc_norm\": 0.696078431372549,\n \"acc_norm_stderr\": 0.026336613469046626\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6720257234726688,\n\
\ \"acc_stderr\": 0.026664410886937613,\n \"acc_norm\": 0.6720257234726688,\n\
\ \"acc_norm_stderr\": 0.026664410886937613\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.025630824975621348,\n\
\ \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.025630824975621348\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.46099290780141844,\n \"acc_stderr\": 0.02973659252642444,\n \
\ \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.02973659252642444\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4511082138200782,\n\
\ \"acc_stderr\": 0.012709037347346233,\n \"acc_norm\": 0.4511082138200782,\n\
\ \"acc_norm_stderr\": 0.012709037347346233\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n\
\ \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6421568627450981,\n \"acc_stderr\": 0.019393058402355442,\n \
\ \"acc_norm\": 0.6421568627450981,\n \"acc_norm_stderr\": 0.019393058402355442\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.04582004841505417,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.04582004841505417\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\
\ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6119402985074627,\n\
\ \"acc_stderr\": 0.034457899643627506,\n \"acc_norm\": 0.6119402985074627,\n\
\ \"acc_norm_stderr\": 0.034457899643627506\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.035887028128263686,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.035887028128263686\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n\
\ \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n\
\ \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160875,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160875\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.423500611995104,\n\
\ \"mc1_stderr\": 0.017297421448534727,\n \"mc2\": 0.5914138790054457,\n\
\ \"mc2_stderr\": 0.015571835698051038\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7932123125493291,\n \"acc_stderr\": 0.011382566829235798\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.48218347232752085,\n \
\ \"acc_stderr\": 0.013763738379867923\n }\n}\n```"
repo_url: https://huggingface.co/rishiraj/uncensored
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|arc:challenge|25_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|gsm8k|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hellaswag|10_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-11-19.373726.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T12-11-19.373726.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- '**/details_harness|winogrande|5_2024-01-04T12-11-19.373726.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-04T12-11-19.373726.parquet'
- config_name: results
data_files:
- split: 2024_01_04T12_11_19.373726
path:
- results_2024-01-04T12-11-19.373726.parquet
- split: latest
path:
- results_2024-01-04T12-11-19.373726.parquet
---
# Dataset Card for Evaluation run of rishiraj/uncensored
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [rishiraj/uncensored](https://huggingface.co/rishiraj/uncensored) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rishiraj__uncensored",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-04T12:11:19.373726](https://huggingface.co/datasets/open-llm-leaderboard/details_rishiraj__uncensored/blob/main/results_2024-01-04T12-11-19.373726.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6138134057467441,
"acc_stderr": 0.03270091323935443,
"acc_norm": 0.6170723511545717,
"acc_norm_stderr": 0.03335705121648071,
"mc1": 0.423500611995104,
"mc1_stderr": 0.017297421448534727,
"mc2": 0.5914138790054457,
"mc2_stderr": 0.015571835698051038
},
"harness|arc:challenge|25": {
"acc": 0.6160409556313993,
"acc_stderr": 0.014212444980651892,
"acc_norm": 0.6604095563139932,
"acc_norm_stderr": 0.01383903976282017
},
"harness|hellaswag|10": {
"acc": 0.6523600876319459,
"acc_stderr": 0.004752476997887822,
"acc_norm": 0.8480382393945429,
"acc_norm_stderr": 0.0035825015965645496
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542129,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542129
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.042849586397534015,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.042849586397534015
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7430555555555556,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.7430555555555556,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.037336266553835096,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.037336266553835096
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.04724007352383888,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.04724007352383888
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5361702127659574,
"acc_stderr": 0.032600385118357715,
"acc_norm": 0.5361702127659574,
"acc_norm_stderr": 0.032600385118357715
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.04615186962583703,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.04615186962583703
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.02510742548113728,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.02510742548113728
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6774193548387096,
"acc_stderr": 0.02659308451657228,
"acc_norm": 0.6774193548387096,
"acc_norm_stderr": 0.02659308451657228
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.45320197044334976,
"acc_stderr": 0.035025446508458714,
"acc_norm": 0.45320197044334976,
"acc_norm_stderr": 0.035025446508458714
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386424,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386424
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8497409326424871,
"acc_stderr": 0.02578772318072388,
"acc_norm": 0.8497409326424871,
"acc_norm_stderr": 0.02578772318072388
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6076923076923076,
"acc_stderr": 0.024756000382130952,
"acc_norm": 0.6076923076923076,
"acc_norm_stderr": 0.024756000382130952
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.027634907264178544,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.027634907264178544
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.03086868260412162,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.03086868260412162
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8055045871559633,
"acc_stderr": 0.016970289090458033,
"acc_norm": 0.8055045871559633,
"acc_norm_stderr": 0.016970289090458033
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4675925925925926,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.4675925925925926,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.028379449451588663,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.028379449451588663
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.02765215314415925,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.02765215314415925
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8264462809917356,
"acc_stderr": 0.03457272836917669,
"acc_norm": 0.8264462809917356,
"acc_norm_stderr": 0.03457272836917669
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.03487825168497892,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.03487825168497892
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.04742762361243011,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.04742762361243011
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597542,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597542
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7918263090676884,
"acc_stderr": 0.014518592248904033,
"acc_norm": 0.7918263090676884,
"acc_norm_stderr": 0.014518592248904033
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7023121387283237,
"acc_stderr": 0.024617055388677003,
"acc_norm": 0.7023121387283237,
"acc_norm_stderr": 0.024617055388677003
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3664804469273743,
"acc_stderr": 0.016115235504865467,
"acc_norm": 0.3664804469273743,
"acc_norm_stderr": 0.016115235504865467
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.696078431372549,
"acc_stderr": 0.026336613469046626,
"acc_norm": 0.696078431372549,
"acc_norm_stderr": 0.026336613469046626
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6720257234726688,
"acc_stderr": 0.026664410886937613,
"acc_norm": 0.6720257234726688,
"acc_norm_stderr": 0.026664410886937613
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.025630824975621348,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.025630824975621348
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.02973659252642444,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.02973659252642444
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4511082138200782,
"acc_stderr": 0.012709037347346233,
"acc_norm": 0.4511082138200782,
"acc_norm_stderr": 0.012709037347346233
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6421568627450981,
"acc_stderr": 0.019393058402355442,
"acc_norm": 0.6421568627450981,
"acc_norm_stderr": 0.019393058402355442
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.04582004841505417,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.04582004841505417
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6119402985074627,
"acc_stderr": 0.034457899643627506,
"acc_norm": 0.6119402985074627,
"acc_norm_stderr": 0.034457899643627506
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.035887028128263686,
"acc_norm": 0.85,
"acc_norm_stderr": 0.035887028128263686
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160875,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160875
},
"harness|truthfulqa:mc|0": {
"mc1": 0.423500611995104,
"mc1_stderr": 0.017297421448534727,
"mc2": 0.5914138790054457,
"mc2_stderr": 0.015571835698051038
},
"harness|winogrande|5": {
"acc": 0.7932123125493291,
"acc_stderr": 0.011382566829235798
},
"harness|gsm8k|5": {
"acc": 0.48218347232752085,
"acc_stderr": 0.013763738379867923
}
}
```
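The leaderboard's aggregate MMLU score is (approximately) the unweighted mean of the per-task `hendrycksTest` accuracies above. A minimal sketch of that aggregation, using only three of the values from the JSON for brevity:

```python
# Sketch: averaging per-task accuracies from the results dict above.
# Only three tasks are copied here for illustration; the real file has 57.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.28},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.562962962962963},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.7105263157894737},
}

# The MMLU-style average is the unweighted mean of the per-task accuracies.
mmlu_tasks = [k for k in results if k.startswith("harness|hendrycksTest-")]
mmlu_avg = sum(results[k]["acc"] for k in mmlu_tasks) / len(mmlu_tasks)
print(round(mmlu_avg, 4))  # -> 0.5178
```

Running the same loop over all `hendrycksTest` entries in the full results file gives the MMLU-style average for the run.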
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
kaiku03/custom_complain_dataset_NER9 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: ner_tags
dtype: string
- name: ner_tags_numeric
sequence: int64
splits:
- name: train
num_bytes: 15980
num_examples: 56
- name: validation
num_bytes: 2232
num_examples: 8
download_size: 7184
dataset_size: 18212
---
# Dataset Card for "custom_complain_dataset_NER9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JovialValley/syllable_totalMapped1 | ---
dataset_info:
features:
- name: input_values
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 110046848
num_examples: 389
- name: test
num_bytes: 27145836
num_examples: 98
download_size: 138090941
dataset_size: 137192684
---
# Dataset Card for "syllable_totalMapped1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/VQAv2_test_split_6 | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: image
dtype: image
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_wo_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_with_openai
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_B_16_with_openai
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
splits:
- name: test
num_bytes: 9245469054.0
num_examples: 44779
download_size: 1848721947
dataset_size: 9245469054.0
---
# Dataset Card for "VQAv2_test_split_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
somosnlp/medical_bilingual_en_es | ---
dataset_info:
features:
- name: description
dtype: string
- name: medical_specialty
dtype: string
- name: sample_name
dtype: string
- name: transcription
dtype: string
splits:
- name: en
num_bytes: 12845119
num_examples: 4069
- name: es
num_bytes: 13894364
num_examples: 4069
download_size: 12814673
dataset_size: 26739483
configs:
- config_name: default
data_files:
- split: en
path: data/en-*
- split: es
path: data/es-*
language:
- en
- es
size_categories:
- 1K<n<10K
---
## High-quality data is what you need.
<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/MDQf4ffGGL-2eTHimY8by.png" style="width: 50%; max-height: 550px;">
</div>
## Translation via ChatGPT.
Initially, the dataset was prepared for translation, adjusting the data format to ensure compatibility. The ChatGPT API was used to translate the content, paying special attention to accuracy and the specific context of medical language. After translation, the translations were reviewed and adjusted to correct any inaccuracies and to ensure that medical terms and procedure descriptions kept their original meaning and clinical relevance.
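As an illustration, the batching side of such a translation pipeline might look like this (hypothetical helper names; the exact prompts and API parameters used for this dataset are not published, and the actual API call is omitted):

```python
# Sketch of a batched translation step (hypothetical helpers; the real
# prompts and API parameters used by the author are not published).
def build_prompt(transcription: str) -> str:
    # Instruct the model to preserve medical terminology exactly.
    return (
        "Translate the following medical transcription from English to "
        "Spanish, keeping drug names, dosages and procedure names "
        "unchanged:\n\n" + transcription
    )

def chunk(rows, max_chars=8000):
    """Group rows into batches that fit a rough per-request budget
    (~4 characters per token is a common approximation)."""
    batch, size, batches = [], 0, []
    for row in rows:
        if batch and size + len(row) > max_chars:
            batches.append(batch)
            batch, size = [], 0
        batch.append(row)
        size += len(row)
    if batch:
        batches.append(batch)
    return batches

rows = ["short note"] * 3 + ["x" * 9000]  # toy data
print([len(b) for b in chunk(rows)])  # -> [3, 1]
```

Each batch would then be sent through the ChatGPT API with a prompt like the one above, and the responses reviewed before merging back into the dataset.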
## Post-translation Data Cleaning (RAG using ChatGPT).
After translating the dataset, a thorough data cleaning pass was carried out. This process involved normalizing and standardizing the text to guarantee consistency across the whole dataset. Unnecessary elements such as special characters, missing rows, and null values were removed, ensuring the final dataset was accessible, analyzable, and of the highest possible quality.
This approach guaranteed that the dataset was not only correctly translated from English to Spanish, but also clean and ready for any subsequent analysis or application, maximizing its value for professionals and analysts in the medical field.
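A minimal sketch of cleaning rules of this kind (illustrative only; the exact filters applied to this dataset are not published):

```python
import re

# Sketch of a post-translation cleaning pass: drop rows with missing
# fields, strip stray control characters, and collapse whitespace.
def clean_rows(rows):
    cleaned = []
    for row in rows:
        # Drop rows with missing or empty required fields.
        if not row.get("transcription") or not row.get("description"):
            continue
        text = row["transcription"]
        # Replace control characters with spaces, then collapse whitespace.
        text = re.sub(r"[\x00-\x1f]", " ", text)
        text = re.sub(r"\s+", " ", text).strip()
        cleaned.append({**row, "transcription": text})
    return cleaned

rows = [
    {"description": "d1", "transcription": "El  paciente\tpresenta\x07 fiebre."},
    {"description": "d2", "transcription": None},    # dropped: null value
    {"description": "",   "transcription": "texto"}, # dropped: missing field
]
print(clean_rows(rows))
```

After a pass like this, only complete, consistently formatted rows remain for analysis or training.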
## Development of a Compact Bilingual Model for Classification and Diagnosis in Medical Transcriptions.
Focusing on the development of compact yet powerful models, drawing inspiration from 2-billion-parameter architectures in the style of GEMMA, my project specializes in improving a bilingual model capable of analyzing medical transcriptions in English or Spanish. The goal is for it to determine and communicate three key elements in Spanish or English: the appropriate medical specialty for the case, a concise description of the case, and the main diagnosis. This approach seeks to process and understand medical transcriptions in bilingual contexts, giving healthcare professionals a tool for the fast and accurate assignment of cases to the relevant specialties, facilitating initial diagnoses, and significantly improving the management of and response to patients' needs.
## Approximate tokens used.
<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/iPhq14owdbjGEQIeGL6Sx.png" style="width: 50%; max-height: 550px;">
</div>
## Conclusion.
The process of translating and cleaning data, especially in the medical sector, is essential to ensure that the information is accurate, reliable, and useful for analysis and decision-making. By employing advanced tools such as the ChatGPT API for translation and applying meticulous data-cleaning techniques, the quality of datasets can be improved significantly. This not only facilitates research and analysis in the healthcare field, but also contributes to better patient care and operational efficiency within medical institutions. With the above in mind, investing in the accuracy and cleanliness of data is fundamental to driving advances and improving outcomes in healthcare and other fields.
## Origen de los datos en ingles.
```
https://www.kaggle.com/datasets
```
## Inconvenientes en el proceso (para mi).
```
Demora en la api de chatgpt (inferencia)
Demora en la iteracion del Rag para saber si los datos dados por chatgpt estaban correctos (inferencia)
Campos nulos, vacios.
Incoherencias.
presupuesto.
```
## Numero de filas antes y despues de la depuracion.
```
Datos inicial: +-4998
Datos finales: +-4007
```
## conjunto de datos dividido en.
```
es: Español.
en: ingles.
```
## Nota.
```
Si encuentras errores por favor avisar (lo hice solo y Chatgpt guiño guiño).
```
## modelo entrenado con el conjunto de datos.
```
https://huggingface.co/somosnlp/Sam_Diagnostic
```
## Hecho.
```
NickyNicky
```
<!-- Codigo de entrenamiento: https://colab.research.google.com/drive/1UmG6X_vRqMCIWqoPrdMdDkUJCW5oxrGp#scrollTo=HvaM3RKiklXS&uniqifier=1 --> |
Maxssto/mysetmp | ---
license: openrail
---
|
WDong/Madoka_memes | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 237621.0
num_examples: 14
download_size: 235259
dataset_size: 237621.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Madoka_memes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rsouza17/modelo.voz.i.a.rei | ---
license: openrail
---
|
vivekdugale/llama2_filtered_dataset_458_amod_helios | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 485832
num_examples: 450
download_size: 244935
dataset_size: 485832
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Des1gn-1/faixa1.wav | ---
license: openrail
---
|
emi429/rr_respiratory_one_person | ---
dataset_info:
features:
- name: RR
dtype: float64
- name: Event
dtype: string
- name: Vt
sequence: float64
- name: RC
sequence: float64
- name: AB
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 373355450
num_examples: 377361
download_size: 74316964
dataset_size: 373355450
---
# Dataset Card for "rr_respiratory_one_person"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dlproject/msp_train_hubert | ---
dataset_info:
features:
- name: input_values
sequence:
sequence:
sequence: float32
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 10872804940
num_examples: 29939
download_size: 9851597205
dataset_size: 10872804940
---
# Dataset Card for "msp_train_hubert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hassanjbara/ghostbuster-prompts | ---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-generation
- text2text-generation
pretty_name: Ghostbuster prompts
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 419210
num_examples: 2175
download_size: 246860
dataset_size: 419210
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Prompts used by the Ghostbuster paper, taken from the [official repo](https://github.com/vivek3141/ghostbuster). 2k prompts for creative writing and long context generation. Mainly used by the paper to benchmark LLM detection, but could be useful for benchmarking many other things (coherence, factuality, creative writing, etc.). |
patrickcleeve/autolamella | ---
license: mit
dataset_info:
- config_name: liftout
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 2479679335.0
num_examples: 801
- name: test
num_bytes: 514295427.0
num_examples: 163
download_size: 1540632118
dataset_size: 2993974762.0
- config_name: serial-liftout
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 946980390.0
num_examples: 301
- name: test
num_bytes: 342926454.0
num_examples: 109
download_size: 457168711
dataset_size: 1289906844.0
- config_name: waffle
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 673435138.0
num_examples: 214
- name: test
num_bytes: 239208412.0
num_examples: 76
download_size: 477754123
dataset_size: 912643550.0
configs:
- config_name: liftout
data_files:
- split: train
path: liftout/train-*
- split: test
path: liftout/test-*
- config_name: serial-liftout
data_files:
- split: train
path: serial-liftout/train-*
- split: test
path: serial-liftout/test-*
- config_name: waffle
data_files:
- split: train
path: waffle/train-*
- split: test
path: waffle/test-*
---
# AutoLamella Dataset
The autolamella dataset consists of images from multiple different lamella preparation methods. All data is annotated for semantic segmentation and is available through the Hugging Face API at [patrickcleeve/autolamella](https://huggingface.co/datasets/patrickcleeve/autolamella)
### Summary
| Dataset / Method | Train | Test | Total |
| ----------- | ----------- | -----------| -----------|
| Waffle | 214 | 76 | 290 |
| Liftout | 801 | 163 | 969 |
| Serial Liftout | 301 | 109 | 412 |
| **Full** | **1316** | **348** | **1664** |
Details about the datasets can be found in `summary.csv` in the dataset directory.
### Labels
Currently, the dataset is labelled for the following classes. In the future, we will add additional labels for objects such as ice contamination. If you would like to label this data, please see the labelling tools to get started.
```yaml
CLASS_LABELS: # autolamella
0: "background"
1: "lamella"
2: "manipulator"
3: "landing_post"
4: "copper_adaptor"
5: "volume_block"
```
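The segmentation masks store these class ids directly, so decoding a mask to a color image only requires a palette lookup. A minimal sketch of such a decoder is below; the actual colors used by fibsem's `decode_segmap_v2` may differ, so this palette is purely illustrative:

```python
import numpy as np

# Illustrative palette for the 6 autolamella classes (the colors used by
# fibsem's decode_segmap_v2 may differ); row index = class id from CLASS_LABELS.
PALETTE = np.array([
    [0, 0, 0],        # 0 background
    [255, 0, 0],      # 1 lamella
    [0, 255, 0],      # 2 manipulator
    [0, 0, 255],      # 3 landing_post
    [255, 255, 0],    # 4 copper_adaptor
    [255, 0, 255],    # 5 volume_block
], dtype=np.uint8)

def decode_segmap(mask: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class ids to an (H, W, 3) RGB image."""
    return PALETTE[mask]
```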
## Download Datasets
To download datasets, you can use the Hugging Face API:
```python
from datasets import load_dataset
# download waffle dataset
ds = load_dataset("patrickcleeve/autolamella", name="waffle")
# download liftout dataset
ds = load_dataset("patrickcleeve/autolamella", name="liftout")
# download serial-liftout dataset
ds = load_dataset("patrickcleeve/autolamella", name="serial-liftout")
# download test split only
ds = load_dataset("patrickcleeve/autolamella", name="waffle", split="test")
```
To display images and annotations:
```python
# show a random image and annotation (training split)
import random
import numpy as np
import matplotlib.pyplot as plt
from fibsem.segmentation.utils import decode_segmap_v2
# random data
idx = random.randint(0, len(ds["train"]) - 1)
image = np.asarray(ds["train"][idx]["image"])
mask = np.asarray(ds["train"][idx]["annotation"])
# metadata
split = ds["train"].split
config_name = ds["train"].config_name
plt.title(f"{config_name}-{split}-{idx:02d}")
plt.imshow(image, cmap="gray", alpha=0.7)
plt.imshow(decode_segmap_v2(mask), alpha=0.3)
plt.axis("off")
plt.show()
```
| Waffle | Liftout | Serial Liftout |
| ----------- | ----------- | ----------- |
|  |  |  |
You can also concatenate the datasets together into a single dataset for easy combined training (e.g. mega models)
```python
from datasets import load_dataset, concatenate_datasets
# load individual datasets
waffle_train_ds = load_dataset("patrickcleeve/autolamella", name="waffle", split="train")
liftout_train_ds = load_dataset("patrickcleeve/autolamella", name="liftout", split="train")
serial_liftout_train_ds = load_dataset("patrickcleeve/autolamella", name="serial-liftout", split="train")
# concatenate datasets (e.g. mega model)
train_ds = concatenate_datasets([waffle_train_ds, liftout_train_ds, serial_liftout_train_ds])
print(train_ds)
```
```yaml
Dataset({
features: ['image', 'annotation'],
num_rows: 1316
})
```
### Acknowledgement
- Waffle and Liftout data from Monash
- Serial Liftout data from MPI
|
MichaelOrme/Paraphrased_Word | ---
license: unknown
---
|
distinsion/image_with_prompts | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 9836
num_examples: 107
download_size: 6132
dataset_size: 9836
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sh110495/compressed_mmlu | ---
dataset_info:
features:
- name: id
sequence: string
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
dtype: int64
- name: candidate_length
sequence: int64
splits:
- name: test
num_bytes: 107043164
num_examples: 14042
download_size: 26819461
dataset_size: 107043164
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
wanyu/IteraTeR_v2 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_v2
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision](https://arxiv.org/abs/2204.03685)
Authors: Wanyu Du*, Zae Myung Kim*, Vipul Raheja, Dhruv Kumar, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
Watch our system demonstration below!
[](https://www.youtube.com/watch?v=lK08tIpEoaE)
|
zedamangas/MiniNoia | ---
license: openrail
---
|
thanhduycao/whisper_mix_data_v2 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1287134674.425454
num_examples: 8373
- name: test
num_bytes: 540858435.8587947
num_examples: 1903
download_size: 1785351931
dataset_size: 1827993110.2842486
---
# Dataset Card for "whisper_mix_data_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713020796 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 13664
num_examples: 31
download_size: 9778
dataset_size: 13664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713020796"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_mnli_will_would | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 147195
num_examples: 625
- name: dev_mismatched
num_bytes: 154891
num_examples: 668
- name: test_matched
num_bytes: 130813
num_examples: 540
- name: test_mismatched
num_bytes: 141920
num_examples: 614
- name: train
num_bytes: 5155546
num_examples: 22160
download_size: 3464199
dataset_size: 5730365
---
# Dataset Card for "MULTI_VALUE_mnli_will_would"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_european_history-neg-answer | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_answer
dtype: string
splits:
- name: test
num_bytes: 281184
num_examples: 165
download_size: 150430
dataset_size: 281184
---
# Dataset Card for "mmlu-high_school_european_history-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SminC/pokemon_caption_data_CLIP | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: colored_image
dtype: image
splits:
- name: train
num_bytes: 69617745.0
num_examples: 829
download_size: 69422090
dataset_size: 69617745.0
---
# Dataset Card for "pokemon_caption_data_CLIP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
isobench/IsoBench | ---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
- zero-shot-classification
- image-classification
pretty_name: IsoBench
dataset_info:
- config_name: chemistry
features:
- name: image
dtype: image
- name: question
dtype: string
- name: choices
dtype: string
- name: label
dtype: int64
- name: description
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 2611154.0
num_examples: 75
download_size: 2517594
dataset_size: 2611154.0
- config_name: graph_connectivity
features:
- name: image
dtype: image
- name: query_nodes_color
dtype: string
- name: adjacency_matrix
dtype: string
- name: query_node_1
dtype: int64
- name: query_node_2
dtype: int64
- name: label
dtype: bool
- name: id
dtype: string
splits:
- name: validation
num_bytes: 62682553
num_examples: 128
download_size: 19391513
dataset_size: 62682553
- config_name: graph_isomorphism
features:
- name: image
dtype: image
- name: adjacency_matrix_G
dtype: string
- name: adjacency_matrix_H
dtype: string
- name: label
dtype: bool
- name: id
dtype: string
splits:
- name: validation
num_bytes: 25082487
num_examples: 128
download_size: 8931620
dataset_size: 25082487
- config_name: graph_maxflow
features:
- name: image
dtype: image
- name: source_node
dtype: int64
- name: source_node_color
dtype: string
- name: sink_node
dtype: int64
- name: sink_node_color
dtype: string
- name: adjacency_matrix
dtype: string
- name: label
dtype: int64
- name: id
dtype: string
splits:
- name: validation
num_bytes: 44530168
num_examples: 128
download_size: 16112025
dataset_size: 44530168
- config_name: math_breakpoint
features:
- name: image
dtype: image
- name: domain
dtype: float64
- name: latex
dtype: string
- name: code
dtype: string
- name: label
dtype: int64
- name: id
dtype: string
splits:
- name: validation
num_bytes: 14120119
num_examples: 256
download_size: 12531449
dataset_size: 14120119
- config_name: math_convexity
features:
- name: image
dtype: image
- name: domain
dtype: string
- name: latex
dtype: string
- name: code
dtype: string
- name: label
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 11176740
num_examples: 256
download_size: 9253917
dataset_size: 11176740
- config_name: math_parity
features:
- name: image
dtype: image
- name: domain
dtype: float64
- name: latex
dtype: string
- name: code
dtype: string
- name: label
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 17012598
num_examples: 384
download_size: 14230745
dataset_size: 17012598
- config_name: physics
features:
- name: image
dtype: image
- name: question
dtype: string
- name: choices
dtype: string
- name: label
dtype: int64
- name: description
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 2354556.0
num_examples: 75
download_size: 2156044
dataset_size: 2354556.0
- config_name: puzzle
features:
- name: image
dtype: image
- name: anl
dtype: string
- name: pgn
dtype: string
- name: fen
dtype: string
- name: label
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 5192310.0
num_examples: 200
download_size: 4856203
dataset_size: 5192310.0
- config_name: winner_id
features:
- name: image
dtype: image
- name: anl
dtype: string
- name: pgn
dtype: string
- name: fen
dtype: string
- name: label
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 6486731
num_examples: 257
download_size: 6026970
dataset_size: 6486731
configs:
- config_name: chemistry
data_files:
- split: validation
path: chemistry/validation-*
- config_name: graph_connectivity
data_files:
- split: validation
path: graph_connectivity/validation-*
- config_name: graph_isomorphism
data_files:
- split: validation
path: graph_isomorphism/validation-*
- config_name: graph_maxflow
data_files:
- split: validation
path: graph_maxflow/validation-*
- config_name: math_breakpoint
data_files:
- split: validation
path: math_breakpoint/validation-*
- config_name: math_convexity
data_files:
- split: validation
path: math_convexity/validation-*
- config_name: math_parity
data_files:
- split: validation
path: math_parity/validation-*
- config_name: physics
data_files:
- split: validation
path: physics/validation-*
- config_name: puzzle
data_files:
- split: validation
path: puzzle/validation-*
- config_name: winner_id
data_files:
- split: validation
path: winner_id/validation-*
---
# Dataset Card for IsoBench
<!-- Provide a quick summary of the dataset. -->
📚 [paper](https://arxiv.org/abs/2404.01266) 🌐 [website](https://isobench.github.io)
Introducing IsoBench, a benchmark dataset containing problems from four major areas: math, science, algorithms, and games. Each example is presented with multiple isomorphic representations of inputs, such as visual, textual, and mathematical presentations. Details of IsoBench can be found in our [paper](https://arxiv.org/abs/2404.01266) or [website](https://isobench.github.io)!
## Table of Contents
- [Dataset Details](#dataset-details)
- [Mathematics](#mathematics)
- [Algorithms](#algorithms)
- [Games](#games)
- [Science](#science)
- [Data Fields](#data-fields)
- [Mathematics](#mathematics)
- [Convexity](#convexity)
- [Breakpoint](#breakpoint)
- [Parity](#parity)
- [Algorithms](#algorithms)
- [Connectivity](#connectivity)
- [Maxflow](#maxflow)
- [Isomorphism](#isomorphism)
- [Games](#games)
- [Winner Identification](#winner-identification)
- [Chess Puzzle](#chess-puzzle)
- [Science](#science)
- [Chemistry](#chemistry)
- [Physics](#physics)
- [Citation](#citation)
- [Contact](#contact)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
There are 4 major domains: math, algorithms, games, and science. Each domain has several subtasks.
In total, there are 1,887 samples in the `validation` split, with ground-truth labels provided.
The `test` split without labels is coming soon.
We will show how to load the data for each subtask.
### TL;DR
There are 10 subtasks in total: `math_breakpoint, math_convexity, math_parity, graph_connectivity, graph_maxflow, graph_isomorphism, winner_id, puzzle, chemistry, physics`.
You can load a `subtask` via
```python
from datasets import load_dataset
ds_subtask = load_dataset('isobench/IsoBench', subtask, split='validation')
```
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
IsoBench is designed with two objectives, which are:
- Analyzing the behavior difference between language-only and multimodal foundation models, by prompting them with distinct (*e.g.* mathematical expression and plot of a function) representations of the same input.
- Contributing a language-only/multimodal benchmark in the science domain.
#### Mathematics
There are three mathematics tasks. Each task is structured as a classification problem and each class contains 128 samples.
- **Parity** implements a ternary classification problem. A model has to classify an input function into an even function, odd function, or neither.
- **Convexity** implements a binary classification problem for a model to classify an input function as convex or concave. **Note**: some functions are only convex (resp. concave) within a certain domain (*e.g.* `x > 0`), which is reported in the `domain` field of each sample. We recommend providing this information as part of the prompt!
- **Breakpoint** counts the number of breakpoints (*i.e.* intersections of a piecewise linear function). Each function contains either 2 or 3 breakpoints, which renders this task a binary classification problem.
```python
from datasets import load_dataset
dataset_parity = load_dataset('isobench/IsoBench', 'math_parity', split='validation')
dataset_convexity = load_dataset('isobench/IsoBench', 'math_convexity', split='validation')
dataset_breakpoint = load_dataset('isobench/IsoBench', 'math_breakpoint', split='validation')
```
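Since the `code` field stores a sympy definition of each function, a Parity label can be sanity-checked symbolically. A minimal sketch (not part of the benchmark's tooling), assuming the function has already been turned into a sympy expression in `x`:

```python
import sympy as sp

x = sp.symbols("x")

def parity(f: sp.Expr) -> str:
    """Classify f(x) as 'even', 'odd', or 'neither' by comparing f(-x) with +/-f(x)."""
    if sp.simplify(f.subs(x, -x) - f) == 0:
        return "even"
    if sp.simplify(f.subs(x, -x) + f) == 0:
        return "odd"
    return "neither"
```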
### Algorithms
There are three algorithmic tasks, with ascending complexity: graph connectivity, graph maximum flow, and graph isomorphism.
You can download the data by
```python
from datasets import load_dataset
dataset_connectivity = load_dataset('isobench/IsoBench', 'graph_connectivity', split='validation')
dataset_maxflow = load_dataset('isobench/IsoBench', 'graph_maxflow', split='validation')
dataset_isomorphism = load_dataset('isobench/IsoBench', 'graph_isomorphism', split='validation')
```
Each task has 128 dev samples under the validation split.
### Games
[More Information Needed]
### Science
[More Information Needed]
## Data Fields
### Mathematics
- `image`: a PIL Image feature;
- `latex`: a `string` feature, containing the LaTeX definition of a function;
- `code`: a `string` feature, containing the `sympy` definition of a function;
- `label`: a `string` feature;
- `domain`: a `string` feature or `None`, denoting the domain of a function. This feature is only used for some of the Convexity problems.
- `id`: a `string` feature.
### Algorithms
#### Connectivity
- `image`: a PIL Image feature
- `query_nodes_color`: a `string` feature
- `adjacency_matrix`: a `string` feature, a string of a 2D array representing the adjacency matrix of a graph
- `query_node_1`: a `uint32` feature
- `query_node_2`: a `uint32` feature
- `label`: a `bool` feature, with possible values including `True` (query nodes connected) and `False` (query nodes not connected)
- `id`: a `string` feature
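Because `adjacency_matrix` is stored as a string, it has to be parsed before use. A minimal sketch (not part of the official loaders) of recovering the matrix and answering the connectivity query with a breadth-first search:

```python
import ast
from collections import deque

def is_connected(adjacency_matrix: str, u: int, v: int) -> bool:
    """Parse the stringified 2D adjacency matrix and BFS from node u to node v."""
    adj = ast.literal_eval(adjacency_matrix)  # e.g. "[[0, 1], [1, 0]]"
    seen, queue = {u}, deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return True
        for nbr, edge in enumerate(adj[node]):
            if edge and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False
```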
#### Maxflow
- `image`: a PIL Image feature
- `source_node`: a `uint32` feature, denoting the index of the source node
- `source_node_color`: a `string` feature, denoting the color of the `source_node` rendered in the `image`
- `sink_node`: a `uint32` feature, denoting the index of the sink node
- `sink_node_color`: a `string` feature, denoting the color of the `sink_node` rendered in the `image`
- `adjacency_matrix`: a `string` feature, a string of a 2D array representing the adjacency matrix of a graph. The value in entry (i,j) denotes the capacity of flowing from node `i` to node `j`.
- `label`: a `uint32` feature
- `id`: a `string` feature
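The `label` can be reproduced from the capacity matrix with any max-flow algorithm. A minimal Edmonds-Karp sketch over the stringified matrix (illustrative only, not the pipeline used to generate the labels):

```python
import ast
from collections import deque

def max_flow(adjacency_matrix: str, source: int, sink: int) -> int:
    """Edmonds-Karp over the stringified capacity matrix; entry (i, j) is
    the capacity of flowing from node i to node j."""
    cap = [row[:] for row in ast.literal_eval(adjacency_matrix)]
    n, flow = len(cap), 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:  # no augmenting path left
            return flow
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck
```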
#### Isomorphism
- `image`: a PIL Image feature, consisting of two graphs `G` and `H`
- `adjacency_matrix_G`: a `string` feature, a string of a 2D array representing the adjacency matrix of graph `G`
- `adjacency_matrix_H`: a `string` feature, a string of a 2D array representing the adjacency matrix of graph `H`
- `label`: a `bool` feature, with possible values including `True` (graphs `G` and `H` are isomorphic) and `False` (not isomorphic)
- `id`: a `string` feature
### Games
[More Information Needed]
### Science
[More Information Needed]
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```BibTeX
@misc{fu2024isobench,
title={{I}so{B}ench: Benchmarking Multimodal Foundation Models on Isomorphic Representations},
author={Deqing Fu$^*$ and Ghazal Khalighinejad$^*$ and Ollie Liu$^*$ and Bhuwan Dhingra and Dani Yogatama and Robin Jia and Willie Neiswanger},
year={2024},
eprint={2404.01266},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
**Chicago Style:**
Deqing Fu<sup>\*</sup>, Ghazal Khalighinejad<sup>\*</sup>, Ollie Liu<sup>\*</sup>, Bhuwan Dhingra, Dani Yogatama, Robin Jia, and Willie Neiswanger. "IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations." arXiv preprint arXiv:2404.01266 (2024).
## Contact
deqingfu@usc.edu, me@ollieliu.com, ghazal.khalighinejad@duke.edu |
prnv19/MathGPT | ---
license: mit
---
|
swap-uniba/mmlu_ita | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mmlu
pretty_name: Measuring Massive Multitask Language Understanding
language_bcp47:
- en-US
dataset_info:
- config_name: abstract_algebra
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 19328
num_examples: 100
- name: validation
num_bytes: 2024
num_examples: 11
- name: dev
num_bytes: 830
num_examples: 5
download_size: 166184960
dataset_size: 160623559
- config_name: anatomy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33121
num_examples: 135
- name: validation
num_bytes: 3140
num_examples: 14
- name: dev
num_bytes: 967
num_examples: 5
download_size: 166184960
dataset_size: 160638605
- config_name: astronomy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46771
num_examples: 152
- name: validation
num_bytes: 5027
num_examples: 16
- name: dev
num_bytes: 2076
num_examples: 5
download_size: 166184960
dataset_size: 160655251
- config_name: business_ethics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33252
num_examples: 100
- name: validation
num_bytes: 3038
num_examples: 11
- name: dev
num_bytes: 2190
num_examples: 5
download_size: 166184960
dataset_size: 160639857
- config_name: clinical_knowledge
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 62754
num_examples: 265
- name: validation
num_bytes: 6664
num_examples: 29
- name: dev
num_bytes: 1210
num_examples: 5
download_size: 166184960
dataset_size: 160672005
- config_name: college_biology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 48797
num_examples: 144
- name: validation
num_bytes: 4819
num_examples: 16
- name: dev
num_bytes: 1532
num_examples: 5
download_size: 166184960
dataset_size: 160656525
- config_name: college_chemistry
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 24708
num_examples: 100
- name: validation
num_bytes: 2328
num_examples: 8
- name: dev
num_bytes: 1331
num_examples: 5
download_size: 166184960
dataset_size: 160629744
- config_name: college_computer_science
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 42641
num_examples: 100
- name: validation
num_bytes: 4663
num_examples: 11
- name: dev
num_bytes: 2765
num_examples: 5
download_size: 166184960
dataset_size: 160651446
- config_name: college_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 24711
num_examples: 100
- name: validation
num_bytes: 2668
num_examples: 11
- name: dev
num_bytes: 1493
num_examples: 5
download_size: 166184960
dataset_size: 160630249
- config_name: college_medicine
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 82397
num_examples: 173
- name: validation
num_bytes: 7909
num_examples: 22
- name: dev
num_bytes: 1670
num_examples: 5
download_size: 166184960
dataset_size: 160693353
- config_name: college_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 30181
num_examples: 102
- name: validation
num_bytes: 3490
num_examples: 11
- name: dev
num_bytes: 1412
num_examples: 5
download_size: 166184960
dataset_size: 160636460
- config_name: computer_security
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 27124
num_examples: 100
- name: validation
num_bytes: 4549
num_examples: 11
- name: dev
num_bytes: 1101
num_examples: 5
download_size: 166184960
dataset_size: 160634151
- config_name: conceptual_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 40709
num_examples: 235
- name: validation
num_bytes: 4474
num_examples: 26
- name: dev
num_bytes: 934
num_examples: 5
download_size: 166184960
dataset_size: 160647494
- config_name: econometrics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46547
num_examples: 114
- name: validation
num_bytes: 4967
num_examples: 12
- name: dev
num_bytes: 1644
num_examples: 5
download_size: 166184960
dataset_size: 160654535
- config_name: electrical_engineering
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 25142
num_examples: 145
- name: validation
num_bytes: 2903
num_examples: 16
- name: dev
num_bytes: 972
num_examples: 5
download_size: 166184960
dataset_size: 160630394
- config_name: elementary_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 70108
num_examples: 378
- name: validation
num_bytes: 8988
num_examples: 41
- name: dev
num_bytes: 1440
num_examples: 5
download_size: 166184960
dataset_size: 160681913
- config_name: formal_logic
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 49785
num_examples: 126
- name: validation
num_bytes: 6252
num_examples: 14
- name: dev
num_bytes: 1757
num_examples: 5
download_size: 166184960
dataset_size: 160659171
- config_name: global_facts
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 18403
num_examples: 100
- name: validation
num_bytes: 1865
num_examples: 10
- name: dev
num_bytes: 1229
num_examples: 5
download_size: 166184960
dataset_size: 160622874
- config_name: high_school_biology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 109732
num_examples: 310
- name: validation
num_bytes: 11022
num_examples: 32
- name: dev
num_bytes: 1673
num_examples: 5
download_size: 166184960
dataset_size: 160723804
- config_name: high_school_chemistry
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 58464
num_examples: 203
- name: validation
num_bytes: 7092
num_examples: 22
- name: dev
num_bytes: 1220
num_examples: 5
download_size: 166184960
dataset_size: 160668153
- config_name: high_school_computer_science
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 44476
num_examples: 100
- name: validation
num_bytes: 3343
num_examples: 9
- name: dev
num_bytes: 2918
num_examples: 5
download_size: 166184960
dataset_size: 160652114
- config_name: high_school_european_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 270300
num_examples: 165
- name: validation
num_bytes: 29632
num_examples: 18
- name: dev
num_bytes: 11564
num_examples: 5
download_size: 166184960
dataset_size: 160912873
- config_name: high_school_geography
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 42034
num_examples: 198
- name: validation
num_bytes: 4332
num_examples: 22
- name: dev
num_bytes: 1403
num_examples: 5
download_size: 166184960
dataset_size: 160649146
- config_name: high_school_government_and_politics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 66074
num_examples: 193
- name: validation
num_bytes: 7063
num_examples: 21
- name: dev
num_bytes: 1779
num_examples: 5
download_size: 166184960
dataset_size: 160676293
- config_name: high_school_macroeconomics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 117687
num_examples: 390
- name: validation
num_bytes: 13020
num_examples: 43
- name: dev
num_bytes: 1328
num_examples: 5
download_size: 166184960
dataset_size: 160733412
- config_name: high_school_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 54854
num_examples: 270
- name: validation
num_bytes: 5765
num_examples: 29
- name: dev
num_bytes: 1297
num_examples: 5
download_size: 166184960
dataset_size: 160663293
- config_name: high_school_microeconomics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 75703
num_examples: 238
- name: validation
num_bytes: 7553
num_examples: 26
- name: dev
num_bytes: 1298
num_examples: 5
download_size: 166184960
dataset_size: 160685931
- config_name: high_school_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 59538
num_examples: 151
- name: validation
num_bytes: 6771
num_examples: 17
- name: dev
num_bytes: 1489
num_examples: 5
download_size: 166184960
dataset_size: 160669175
- config_name: high_school_psychology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 159407
num_examples: 545
- name: validation
num_bytes: 17269
num_examples: 60
- name: dev
num_bytes: 1905
num_examples: 5
download_size: 166184960
dataset_size: 160779958
- config_name: high_school_statistics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 110702
num_examples: 216
- name: validation
num_bytes: 9997
num_examples: 23
- name: dev
num_bytes: 2528
num_examples: 5
download_size: 166184960
dataset_size: 160724604
- config_name: high_school_us_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 296734
num_examples: 204
- name: validation
num_bytes: 31706
num_examples: 22
- name: dev
num_bytes: 8864
num_examples: 5
download_size: 166184960
dataset_size: 160938681
- config_name: high_school_world_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 378617
num_examples: 237
- name: validation
num_bytes: 45501
num_examples: 26
- name: dev
num_bytes: 4882
num_examples: 5
download_size: 166184960
dataset_size: 161030377
- config_name: human_aging
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46098
num_examples: 223
- name: validation
num_bytes: 4707
num_examples: 23
- name: dev
num_bytes: 1008
num_examples: 5
download_size: 166184960
dataset_size: 160653190
- config_name: human_sexuality
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 32110
num_examples: 131
- name: validation
num_bytes: 2421
num_examples: 12
- name: dev
num_bytes: 1077
num_examples: 5
download_size: 166184960
dataset_size: 160636985
- config_name: international_law
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 53531
num_examples: 121
- name: validation
num_bytes: 6473
num_examples: 13
- name: dev
num_bytes: 2418
num_examples: 5
download_size: 166184960
dataset_size: 160663799
- config_name: jurisprudence
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33986
num_examples: 108
- name: validation
num_bytes: 3729
num_examples: 11
- name: dev
num_bytes: 1303
num_examples: 5
download_size: 166184960
dataset_size: 160640395
- config_name: logical_fallacies
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 50117
num_examples: 163
- name: validation
num_bytes: 5103
num_examples: 18
- name: dev
num_bytes: 1573
num_examples: 5
download_size: 166184960
dataset_size: 160658170
- config_name: machine_learning
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33880
num_examples: 112
- name: validation
num_bytes: 3232
num_examples: 11
- name: dev
num_bytes: 2323
num_examples: 5
download_size: 166184960
dataset_size: 160640812
- config_name: management
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 20002
num_examples: 103
- name: validation
num_bytes: 1820
num_examples: 11
- name: dev
num_bytes: 898
num_examples: 5
download_size: 166184960
dataset_size: 160624097
- config_name: marketing
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 63025
num_examples: 234
- name: validation
num_bytes: 7394
num_examples: 25
- name: dev
num_bytes: 1481
num_examples: 5
download_size: 166184960
dataset_size: 160673277
- config_name: medical_genetics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 20864
num_examples: 100
- name: validation
num_bytes: 3005
num_examples: 11
- name: dev
num_bytes: 1089
num_examples: 5
download_size: 166184960
dataset_size: 160626335
- config_name: miscellaneous
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 147704
num_examples: 783
- name: validation
num_bytes: 14330
num_examples: 86
- name: dev
num_bytes: 699
num_examples: 5
download_size: 166184960
dataset_size: 160764110
- config_name: moral_disputes
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 107818
num_examples: 346
- name: validation
num_bytes: 12420
num_examples: 38
- name: dev
num_bytes: 1755
num_examples: 5
download_size: 166184960
dataset_size: 160723370
- config_name: moral_scenarios
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 374026
num_examples: 895
- name: validation
num_bytes: 42338
num_examples: 100
- name: dev
num_bytes: 2058
num_examples: 5
download_size: 166184960
dataset_size: 161019799
- config_name: nutrition
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 92410
num_examples: 306
- name: validation
num_bytes: 8436
num_examples: 33
- name: dev
num_bytes: 2085
num_examples: 5
download_size: 166184960
dataset_size: 160704308
- config_name: philosophy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 80073
num_examples: 311
- name: validation
num_bytes: 9184
num_examples: 34
- name: dev
num_bytes: 988
num_examples: 5
download_size: 166184960
dataset_size: 160691622
- config_name: prehistory
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 89594
num_examples: 324
- name: validation
num_bytes: 10285
num_examples: 35
- name: dev
num_bytes: 1878
num_examples: 5
download_size: 166184960
dataset_size: 160703134
- config_name: professional_accounting
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 124550
num_examples: 282
- name: validation
num_bytes: 14372
num_examples: 31
- name: dev
num_bytes: 2148
num_examples: 5
download_size: 166184960
dataset_size: 160742447
- config_name: professional_law
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 1891762
num_examples: 1534
- name: validation
num_bytes: 203519
num_examples: 170
- name: dev
num_bytes: 6610
num_examples: 5
download_size: 166184960
dataset_size: 162703268
- config_name: professional_medicine
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 217561
num_examples: 272
- name: validation
num_bytes: 23847
num_examples: 31
- name: dev
num_bytes: 3807
num_examples: 5
download_size: 166184960
dataset_size: 160846592
- config_name: professional_psychology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 225899
num_examples: 612
- name: validation
num_bytes: 29101
num_examples: 69
- name: dev
num_bytes: 2267
num_examples: 5
download_size: 166184960
dataset_size: 160858644
- config_name: public_relations
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 28760
num_examples: 110
- name: validation
num_bytes: 4566
num_examples: 12
- name: dev
num_bytes: 1496
num_examples: 5
download_size: 166184960
dataset_size: 160636199
- config_name: security_studies
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 204844
num_examples: 245
- name: validation
num_bytes: 22637
num_examples: 27
- name: dev
num_bytes: 5335
num_examples: 5
download_size: 166184960
dataset_size: 160834193
- config_name: sociology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 66243
num_examples: 201
- name: validation
num_bytes: 7184
num_examples: 22
- name: dev
num_bytes: 1613
num_examples: 5
download_size: 166184960
dataset_size: 160676417
- config_name: us_foreign_policy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 28443
num_examples: 100
- name: validation
num_bytes: 3264
num_examples: 11
- name: dev
num_bytes: 1611
num_examples: 5
download_size: 166184960
dataset_size: 160634695
- config_name: virology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 38759
num_examples: 166
- name: validation
num_bytes: 5463
num_examples: 18
- name: dev
num_bytes: 1096
num_examples: 5
download_size: 166184960
dataset_size: 160646695
- config_name: world_religions
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 25274
num_examples: 171
- name: validation
num_bytes: 2765
num_examples: 19
- name: dev
num_bytes: 670
num_examples: 5
download_size: 166184960
dataset_size: 160630086
---
# Italian Version of the MMLU DATASET
Based on the version released by: [**FreedomIntelligence/MMLU_Italian**](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Italian)
Includes minor fixes.
# Citations
This version:
```
@misc{basile2023llamantino,
title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language},
author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
year={2023},
eprint={2312.09993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Dataset:
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
# Original Dataset Card for MMLU
## Dataset Description
- **Repository**: https://github.com/hendrycks/test
- **Paper**: https://arxiv.org/abs/2009.03300
### Dataset Summary
[Measuring Massive Multitask Language Understanding](https://arxiv.org/pdf/2009.03300) by [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/), [Collin Burns](http://collinpburns.com), [Steven Basart](https://stevenbas.art), Andy Zou, Mantas Mazeika, [Dawn Song](https://people.eecs.berkeley.edu/~dawnsong/), and [Jacob Steinhardt](https://www.stat.berkeley.edu/~jsteinhardt/) (ICLR 2021).
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability.
A complete list of tasks: ['abstract_algebra', 'anatomy', 'astronomy', 'business_ethics', 'clinical_knowledge', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_medicine', 'college_physics', 'computer_security', 'conceptual_physics', 'econometrics', 'electrical_engineering', 'elementary_mathematics', 'formal_logic', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_european_history', 'high_school_geography', 'high_school_government_and_politics', 'high_school_macroeconomics', 'high_school_mathematics', 'high_school_microeconomics', 'high_school_physics', 'high_school_psychology', 'high_school_statistics', 'high_school_us_history', 'high_school_world_history', 'human_aging', 'human_sexuality', 'international_law', 'jurisprudence', 'logical_fallacies', 'machine_learning', 'management', 'marketing', 'medical_genetics', 'miscellaneous', 'moral_disputes', 'moral_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional_accounting', 'professional_law', 'professional_medicine', 'professional_psychology', 'public_relations', 'security_studies', 'sociology', 'us_foreign_policy', 'virology', 'world_religions']
### Languages
English
## Dataset Structure
### Data Instances
An example from anatomy subtask looks as follows:
```
{
"question": "What is the embryological origin of the hyoid bone?",
"choices": ["The first pharyngeal arch", "The first and second pharyngeal arches", "The second pharyngeal arch", "The second and third pharyngeal arches"],
"answer": "D"
}
```
### Data Fields
- `question`: a string feature
- `choices`: a list of 4 string features
- `answer`: a ClassLabel feature
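Note that `answer` is stored on disk as an integer class index; the letter shown in the data instance above comes from the ClassLabel's `names` mapping (`["A", "B", "C", "D"]`). A minimal sketch of that mapping, using an illustrative row:

```python
# names as declared in the dataset's ClassLabel feature
names = ["A", "B", "C", "D"]

# a hypothetical row as it comes off disk: `answer` is an integer index
example = {
    "question": "What is the embryological origin of the hyoid bone?",
    "choices": [
        "The first pharyngeal arch",
        "The first and second pharyngeal arches",
        "The second pharyngeal arch",
        "The second and third pharyngeal arches",
    ],
    "answer": 3,
}

# map the integer label back to its letter
letter = names[example["answer"]]
print(letter)  # D
```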
### Data Splits
- `auxiliary_train`: auxiliary multiple-choice training questions from ARC, MC_TEST, OBQA, RACE, etc.
- `dev`: 5 examples per subtask, meant for few-shot setting
- `test`: there are at least 100 examples per subtask
| | auxiliary_train | dev | val | test |
| ----- | :------: | :-----: | :-----: | :-----: |
| TOTAL | 99842 | 285 | 1531 | 14042 |
### Licensing Information
[MIT License](https://github.com/hendrycks/test/blob/master/LICENSE)
### Contributions
Thanks to [@andyzoujm](https://github.com/andyzoujm) for adding this dataset.
|
adityarra07/train_data_15000 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2527685083.524488
num_examples: 15000
- name: test
num_bytes: 33702566.98032651
num_examples: 200
download_size: 2525375368
dataset_size: 2561387650.5048146
---
# Dataset Card for "train_data_15000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985072 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- beans
eval_info:
task: image_multi_class_classification
model: johnnydevriese/vit_beans
metrics: []
dataset_name: beans
dataset_config: default
dataset_split: test
col_mapping:
image: image
target: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: johnnydevriese/vit_beans
* Dataset: beans
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate/autoeval-eval-phpthinh__examplei-all-929d48-1748861031 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplei
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: ['f1']
dataset_name: phpthinh/examplei
dataset_config: all
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: phpthinh/examplei
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
SALT-NLP/Design2Code_human_eval_pairwise | ---
dataset_info:
features:
- name: id
dtype: string
- name: ref_image
dtype: image
- name: ref_html
dtype: string
- name: model1
dtype: string
- name: model2
dtype: string
- name: image1
dtype: image
- name: image2
dtype: image
- name: html1
dtype: string
- name: html2
dtype: string
- name: win1
dtype: int64
- name: win2
dtype: int64
- name: tie
dtype: int64
splits:
- name: train
num_bytes: 348516021.0
num_examples: 700
download_size: 298172345
dataset_size: 348516021.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Find more details in [our paper](https://arxiv.org/abs/2403.03163). |
CyberHarem/js05_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of js05/JS05/JS05 (Girls' Frontline)
This is the dataset of js05/JS05/JS05 (Girls' Frontline), containing 13 images and their tags.
The core tags of this character are `short_hair, green_eyes, grey_hair, bangs, earrings`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 13 | 14.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js05_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 13 | 9.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js05_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 30 | 18.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js05_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 13 | 14.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js05_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 30 | 25.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js05_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/js05_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, solo, black_gloves, looking_at_viewer, simple_background, fingerless_gloves, closed_mouth, jewelry, smile, white_background, bare_shoulders, choker, elbow_gloves, holding, skirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | black_gloves | looking_at_viewer | simple_background | fingerless_gloves | closed_mouth | jewelry | smile | white_background | bare_shoulders | choker | elbow_gloves | holding | skirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------------|:--------------------|:--------------------|:--------------------|:---------------|:----------|:--------|:-------------------|:-----------------|:---------|:---------------|:----------|:--------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
man4j/aisha_v3_style | ---
dataset_info:
features:
- name: instruct
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 208920
num_examples: 162
download_size: 52610
dataset_size: 208920
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rhfeiyang/photo-sketch-pair-500 | ---
dataset_info:
features:
- name: photo
dtype: image
- name: sketch
dtype: image
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 383437393.0
num_examples: 500
download_size: 383466798
dataset_size: 383437393.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AIARTCHAN/lora-Asbestos_Ceiling | ---
license: creativeml-openrail-m
tags:
- lora
- aiartchan
- stable-diffusion
---
# Lora - Asbestos_Ceiling
## Dataset Description
- **Original post**: [A suspiciously familiar asbestos-ceiling LoRA: sharing and usage guide](https://arca.live/b/aiart/69669397)
Asbestos **ceiling** LoRA file.
## !!How to Use!!
If you simply apply the LoRA in T2I, it spills over onto the walls, so the hit rate drops sharply.
Inpaint only the ceiling area for a much better hit rate.
**Denoising strength: 0.5**
**<lora:Asbestos Ceiling:2.0>**
[Download](https://huggingface.co/datasets/AIARTCHAN/lora-Asbestos_Ceiling/resolve/main/Asbestos%20Ceiling.safetensors) |
danasone/wikipedia_ru | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10137635834
num_examples: 1925386
download_size: 1222287612
dataset_size: 10137635834
---
# Dataset Card for "wikipedia_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
misaelferrer/sentiment-banking | ---
license: apache-2.0
---
|
malaysia-ai/crawl-youtube | ---
dataset_info:
features:
- name: filename
dtype:
audio:
sampling_rate: 16000
- name: url
dtype: string
splits:
- name: train
num_bytes: 1068464089483.938
num_examples: 59879
download_size: 16395869337
dataset_size: 1068464089483.938
---
# Crawl Youtube
We crawled Malaysian and Singaporean YouTube channels, totaling about 60k audio files and 185k hours of audio.
URLs data at https://github.com/mesolitica/malaya-speech/tree/master/data/youtube/data
Notebooks at https://github.com/mesolitica/malaya-speech/tree/master/data/youtube
## How to load the data efficiently?
```python
import pandas as pd
import json
from datasets import Audio
from torch.utils.data import DataLoader, Dataset
chunks = 30
sr = 16000
class Train(Dataset):
    def __init__(self, indices, maxlen_cache_df=5, maxlen_cache_audio=50):
        # Expand the global index map so every 30-second chunk gets its own
        # entry pointing back to the parquet file and row it came from.
        self.indices = {}
        for k, v in indices.items():
            for i in range(int(k), v['start'] + v['end'], 1):
                self.indices[i] = v
        self.max_index = len(self.indices)
        # Bounded caches for loaded parquet dataframes and decoded audio.
        self.cache_df = {}
        self.cache_audio = {}
        self.maxlen_cache_df = maxlen_cache_df
        self.maxlen_cache_audio = maxlen_cache_audio
        self.audio = Audio(sampling_rate=16000)
def __len__(self):
return self.max_index
def __getitem__(self, item):
if item < 0:
item = self.max_index + item
v = self.indices[item]
key_row = f"{v['filename']}-{v['i']}"
chunk_index = item - v['start']
if key_row not in self.cache_audio:
if v['filename'] not in self.cache_df:
df = pd.read_parquet(v['filename'])
if len(self.cache_df) >= self.maxlen_cache_df:
keys = list(self.cache_df.keys())
self.cache_df.pop(sorted(keys)[0], None)
self.cache_df[v['filename']] = df
else:
df = self.cache_df[v['filename']]
row = df.iloc[int(v['i'])]
audio = self.audio.decode_example(self.audio.encode_example(row['filename']))
if len(self.cache_audio) >= self.maxlen_cache_audio:
keys = list(self.cache_audio.keys())
self.cache_audio.pop(sorted(keys)[0], None)
self.cache_audio[key_row] = audio
else:
audio = self.cache_audio[key_row]
        # Slice out the 30-second window for this global chunk index.
        return {
            'array': audio['array'][(chunks * sr) * chunk_index: (chunks * sr) * (chunk_index + 1)]
        }
with open('crawl-youtube-global-indices.json') as fopen:
global_indices = json.load(fopen)
train = Train(global_indices)
train[0]
```
```
{'array': array([ 0. , 0. , 0. , ..., -0.00845753,
0.00168016, -0.00606468])}
```
These are the global hash indices for the audio chunked into 30-second segments; read more at https://github.com/mesolitica/malaysian-dataset/tree/master/speech-to-text-semisupervised/pseudolabel-whisper
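The indices file maps each global 30-second chunk back to a parquet file and row. A minimal, hypothetical sketch of how such a mapping could be built (the filenames and per-row durations below are made up for illustration; in practice they would come from the audio metadata):

```python
import json

chunks = 30  # chunk length in seconds, matching the Train dataset above

def build_global_indices(files):
    # files: {parquet_filename: [duration_in_seconds_per_row, ...]}
    indices = {}
    start = 0
    for filename, durations in files.items():
        for i, dur in enumerate(durations):
            # number of full 30-second chunks in this row (at least one)
            n_chunks = max(1, int(dur // chunks))
            indices[str(start)] = {
                'filename': filename,
                'i': i,
                'start': start,
                'end': n_chunks,
            }
            start += n_chunks
    return indices

# row 0 (90s) covers global chunks 0..2, row 1 (45s) covers global chunk 3
indices = build_global_indices({'part-000.parquet': [90.0, 45.0]})
print(json.dumps(indices, indent=2))
```

The `Train` dataset above then expands each entry into one item per chunk, so chunk lookup stays O(1) without opening any parquet file up front.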
## Licensing
```
All the videos, songs, images, and graphics used in the video belong to their respective owners, and we do not claim any rights over them.
Copyright Disclaimer under section 107 of the Copyright Act of 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, education and research. Fair use is a use permitted by copyright statute that might otherwise be infringing.
``` |
open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.3 | ---
pretty_name: Evaluation run of davidkim205/Rhea-72b-v0.3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [davidkim205/Rhea-72b-v0.3](https://huggingface.co/davidkim205/Rhea-72b-v0.3)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-23T20:16:27.166987](https://huggingface.co/datasets/open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.3/blob/main/results_2024-03-23T20-16-27.166987.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.775359998161504,\n\
\ \"acc_stderr\": 0.027898864348066918,\n \"acc_norm\": 0.7767118781733953,\n\
\ \"acc_norm_stderr\": 0.028458565396692535,\n \"mc1\": 0.6682986536107711,\n\
\ \"mc1_stderr\": 0.01648214881024148,\n \"mc2\": 0.7593481584480776,\n\
\ \"mc2_stderr\": 0.014270713709869645\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7491467576791809,\n \"acc_stderr\": 0.01266819862131543,\n\
\ \"acc_norm\": 0.7679180887372014,\n \"acc_norm_stderr\": 0.012336718284948856\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.740390360485959,\n\
\ \"acc_stderr\": 0.004375244237045139,\n \"acc_norm\": 0.89982075283808,\n\
\ \"acc_norm_stderr\": 0.002996252441361047\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7111111111111111,\n\
\ \"acc_stderr\": 0.03915450630414251,\n \"acc_norm\": 0.7111111111111111,\n\
\ \"acc_norm_stderr\": 0.03915450630414251\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.881578947368421,\n \"acc_stderr\": 0.026293995855474928,\n\
\ \"acc_norm\": 0.881578947368421,\n \"acc_norm_stderr\": 0.026293995855474928\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.8,\n\
\ \"acc_stderr\": 0.040201512610368445,\n \"acc_norm\": 0.8,\n \
\ \"acc_norm_stderr\": 0.040201512610368445\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8490566037735849,\n \"acc_stderr\": 0.022032988985703494,\n\
\ \"acc_norm\": 0.8490566037735849,\n \"acc_norm_stderr\": 0.022032988985703494\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9305555555555556,\n\
\ \"acc_stderr\": 0.021257974822832048,\n \"acc_norm\": 0.9305555555555556,\n\
\ \"acc_norm_stderr\": 0.021257974822832048\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n\
\ \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7514450867052023,\n\
\ \"acc_stderr\": 0.03295304696818317,\n \"acc_norm\": 0.7514450867052023,\n\
\ \"acc_norm_stderr\": 0.03295304696818317\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n\
\ \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.8042553191489362,\n \"acc_stderr\": 0.025937853139977148,\n\
\ \"acc_norm\": 0.8042553191489362,\n \"acc_norm_stderr\": 0.025937853139977148\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6140350877192983,\n\
\ \"acc_stderr\": 0.04579639422070434,\n \"acc_norm\": 0.6140350877192983,\n\
\ \"acc_norm_stderr\": 0.04579639422070434\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.7724137931034483,\n \"acc_stderr\": 0.03493950380131184,\n\
\ \"acc_norm\": 0.7724137931034483,\n \"acc_norm_stderr\": 0.03493950380131184\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.701058201058201,\n \"acc_stderr\": 0.023577604791655805,\n \"\
acc_norm\": 0.701058201058201,\n \"acc_norm_stderr\": 0.023577604791655805\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5714285714285714,\n\
\ \"acc_stderr\": 0.04426266681379909,\n \"acc_norm\": 0.5714285714285714,\n\
\ \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8903225806451613,\n\
\ \"acc_stderr\": 0.017776778700485184,\n \"acc_norm\": 0.8903225806451613,\n\
\ \"acc_norm_stderr\": 0.017776778700485184\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6650246305418719,\n \"acc_stderr\": 0.033208527423483104,\n\
\ \"acc_norm\": 0.6650246305418719,\n \"acc_norm_stderr\": 0.033208527423483104\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\"\
: 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066584,\n\
\ \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066584\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9393939393939394,\n \"acc_stderr\": 0.016999994927421592,\n \"\
acc_norm\": 0.9393939393939394,\n \"acc_norm_stderr\": 0.016999994927421592\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9896373056994818,\n \"acc_stderr\": 0.007308424386792194,\n\
\ \"acc_norm\": 0.9896373056994818,\n \"acc_norm_stderr\": 0.007308424386792194\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8051282051282052,\n \"acc_stderr\": 0.020083167595181393,\n\
\ \"acc_norm\": 0.8051282051282052,\n \"acc_norm_stderr\": 0.020083167595181393\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.4777777777777778,\n \"acc_stderr\": 0.030455413985678408,\n \
\ \"acc_norm\": 0.4777777777777778,\n \"acc_norm_stderr\": 0.030455413985678408\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8613445378151261,\n \"acc_stderr\": 0.022448264476832593,\n\
\ \"acc_norm\": 0.8613445378151261,\n \"acc_norm_stderr\": 0.022448264476832593\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.5695364238410596,\n \"acc_stderr\": 0.04042809961395634,\n \"\
acc_norm\": 0.5695364238410596,\n \"acc_norm_stderr\": 0.04042809961395634\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9357798165137615,\n \"acc_stderr\": 0.010510494713201403,\n \"\
acc_norm\": 0.9357798165137615,\n \"acc_norm_stderr\": 0.010510494713201403\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6712962962962963,\n \"acc_stderr\": 0.032036140846700596,\n \"\
acc_norm\": 0.6712962962962963,\n \"acc_norm_stderr\": 0.032036140846700596\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9166666666666666,\n \"acc_stderr\": 0.019398452135813905,\n \"\
acc_norm\": 0.9166666666666666,\n \"acc_norm_stderr\": 0.019398452135813905\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9071729957805907,\n \"acc_stderr\": 0.018889750550956715,\n \
\ \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.018889750550956715\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7937219730941704,\n\
\ \"acc_stderr\": 0.027157150479563824,\n \"acc_norm\": 0.7937219730941704,\n\
\ \"acc_norm_stderr\": 0.027157150479563824\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8778625954198473,\n \"acc_stderr\": 0.02871877688934232,\n\
\ \"acc_norm\": 0.8778625954198473,\n \"acc_norm_stderr\": 0.02871877688934232\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8925619834710744,\n \"acc_stderr\": 0.028268812192540616,\n \"\
acc_norm\": 0.8925619834710744,\n \"acc_norm_stderr\": 0.028268812192540616\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8518518518518519,\n\
\ \"acc_stderr\": 0.03434300243630999,\n \"acc_norm\": 0.8518518518518519,\n\
\ \"acc_norm_stderr\": 0.03434300243630999\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.852760736196319,\n \"acc_stderr\": 0.027839915278339653,\n\
\ \"acc_norm\": 0.852760736196319,\n \"acc_norm_stderr\": 0.027839915278339653\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6160714285714286,\n\
\ \"acc_stderr\": 0.04616143075028546,\n \"acc_norm\": 0.6160714285714286,\n\
\ \"acc_norm_stderr\": 0.04616143075028546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.03393295729761011,\n\
\ \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.03393295729761011\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9487179487179487,\n\
\ \"acc_stderr\": 0.014450181176872736,\n \"acc_norm\": 0.9487179487179487,\n\
\ \"acc_norm_stderr\": 0.014450181176872736\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977725,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977725\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9169859514687101,\n\
\ \"acc_stderr\": 0.009866287394639536,\n \"acc_norm\": 0.9169859514687101,\n\
\ \"acc_norm_stderr\": 0.009866287394639536\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8410404624277457,\n \"acc_stderr\": 0.019685307033571946,\n\
\ \"acc_norm\": 0.8410404624277457,\n \"acc_norm_stderr\": 0.019685307033571946\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7418994413407821,\n\
\ \"acc_stderr\": 0.014635185616527836,\n \"acc_norm\": 0.7418994413407821,\n\
\ \"acc_norm_stderr\": 0.014635185616527836\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.020279402936174588,\n\
\ \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.020279402936174588\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8488745980707395,\n\
\ \"acc_stderr\": 0.020342749744428634,\n \"acc_norm\": 0.8488745980707395,\n\
\ \"acc_norm_stderr\": 0.020342749744428634\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8580246913580247,\n \"acc_stderr\": 0.019420260109438287,\n\
\ \"acc_norm\": 0.8580246913580247,\n \"acc_norm_stderr\": 0.019420260109438287\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6560283687943262,\n \"acc_stderr\": 0.028338017428611334,\n \
\ \"acc_norm\": 0.6560283687943262,\n \"acc_norm_stderr\": 0.028338017428611334\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.6108213820078227,\n\
\ \"acc_stderr\": 0.012452613934287014,\n \"acc_norm\": 0.6108213820078227,\n\
\ \"acc_norm_stderr\": 0.012452613934287014\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8419117647058824,\n \"acc_stderr\": 0.02216146260806852,\n\
\ \"acc_norm\": 0.8419117647058824,\n \"acc_norm_stderr\": 0.02216146260806852\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8235294117647058,\n \"acc_stderr\": 0.015422512066262549,\n \
\ \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.015422512066262549\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7636363636363637,\n\
\ \"acc_stderr\": 0.040693063197213754,\n \"acc_norm\": 0.7636363636363637,\n\
\ \"acc_norm_stderr\": 0.040693063197213754\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8326530612244898,\n \"acc_stderr\": 0.02389714476891452,\n\
\ \"acc_norm\": 0.8326530612244898,\n \"acc_norm_stderr\": 0.02389714476891452\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8955223880597015,\n\
\ \"acc_stderr\": 0.021628920516700643,\n \"acc_norm\": 0.8955223880597015,\n\
\ \"acc_norm_stderr\": 0.021628920516700643\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.93,\n \"acc_stderr\": 0.0256432399976243,\n \
\ \"acc_norm\": 0.93,\n \"acc_norm_stderr\": 0.0256432399976243\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n\
\ \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n\
\ \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8830409356725146,\n \"acc_stderr\": 0.02464806896136616,\n\
\ \"acc_norm\": 0.8830409356725146,\n \"acc_norm_stderr\": 0.02464806896136616\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6682986536107711,\n\
\ \"mc1_stderr\": 0.01648214881024148,\n \"mc2\": 0.7593481584480776,\n\
\ \"mc2_stderr\": 0.014270713709869645\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.850828729281768,\n \"acc_stderr\": 0.010012598805627305\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7983320697498104,\n \
\ \"acc_stderr\": 0.011052295889544391\n }\n}\n```"
repo_url: https://huggingface.co/davidkim205/Rhea-72b-v0.3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|arc:challenge|25_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|gsm8k|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hellaswag|10_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T20-16-27.166987.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-23T20-16-27.166987.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- '**/details_harness|winogrande|5_2024-03-23T20-16-27.166987.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-23T20-16-27.166987.parquet'
- config_name: results
data_files:
- split: 2024_03_23T20_16_27.166987
path:
- results_2024-03-23T20-16-27.166987.parquet
- split: latest
path:
- results_2024-03-23T20-16-27.166987.parquet
---
# Dataset Card for Evaluation run of davidkim205/Rhea-72b-v0.3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [davidkim205/Rhea-72b-v0.3](https://huggingface.co/davidkim205/Rhea-72b-v0.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.3",
"harness_winogrande_5",
             split="latest")
```
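Judging from the config list in this card, config names appear to follow the pattern `harness_<task>_<num_fewshot>`, with non-word characters in the task name replaced by underscores. A small hypothetical helper (the `config_name` function below is an illustration, not part of any library) to build them:

```python
import re

def config_name(task: str, num_fewshot: int) -> str:
    """Build a config name for an eval task, e.g.
    "hendrycksTest-college_biology" + 5 -> "harness_hendrycksTest_college_biology_5"."""
    return "harness_" + re.sub(r"\W", "_", task) + f"_{num_fewshot}"

print(config_name("hendrycksTest-college_biology", 5))
# harness_hendrycksTest_college_biology_5
print(config_name("truthfulqa:mc", 0))
# harness_truthfulqa_mc_0
```

The resulting string can then be passed as the second argument to `load_dataset` as in the example above.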
## Latest results
These are the [latest results from run 2024-03-23T20:16:27.166987](https://huggingface.co/datasets/open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.3/blob/main/results_2024-03-23T20-16-27.166987.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" config and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.775359998161504,
"acc_stderr": 0.027898864348066918,
"acc_norm": 0.7767118781733953,
"acc_norm_stderr": 0.028458565396692535,
"mc1": 0.6682986536107711,
"mc1_stderr": 0.01648214881024148,
"mc2": 0.7593481584480776,
"mc2_stderr": 0.014270713709869645
},
"harness|arc:challenge|25": {
"acc": 0.7491467576791809,
"acc_stderr": 0.01266819862131543,
"acc_norm": 0.7679180887372014,
"acc_norm_stderr": 0.012336718284948856
},
"harness|hellaswag|10": {
"acc": 0.740390360485959,
"acc_stderr": 0.004375244237045139,
"acc_norm": 0.89982075283808,
"acc_norm_stderr": 0.002996252441361047
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7111111111111111,
"acc_stderr": 0.03915450630414251,
"acc_norm": 0.7111111111111111,
"acc_norm_stderr": 0.03915450630414251
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.881578947368421,
"acc_stderr": 0.026293995855474928,
"acc_norm": 0.881578947368421,
"acc_norm_stderr": 0.026293995855474928
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.8,
"acc_stderr": 0.040201512610368445,
"acc_norm": 0.8,
"acc_norm_stderr": 0.040201512610368445
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8490566037735849,
"acc_stderr": 0.022032988985703494,
"acc_norm": 0.8490566037735849,
"acc_norm_stderr": 0.022032988985703494
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9305555555555556,
"acc_stderr": 0.021257974822832048,
"acc_norm": 0.9305555555555556,
"acc_norm_stderr": 0.021257974822832048
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7514450867052023,
"acc_stderr": 0.03295304696818317,
"acc_norm": 0.7514450867052023,
"acc_norm_stderr": 0.03295304696818317
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.8042553191489362,
"acc_stderr": 0.025937853139977148,
"acc_norm": 0.8042553191489362,
"acc_norm_stderr": 0.025937853139977148
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6140350877192983,
"acc_stderr": 0.04579639422070434,
"acc_norm": 0.6140350877192983,
"acc_norm_stderr": 0.04579639422070434
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7724137931034483,
"acc_stderr": 0.03493950380131184,
"acc_norm": 0.7724137931034483,
"acc_norm_stderr": 0.03493950380131184
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.701058201058201,
"acc_stderr": 0.023577604791655805,
"acc_norm": 0.701058201058201,
"acc_norm_stderr": 0.023577604791655805
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8903225806451613,
"acc_stderr": 0.017776778700485184,
"acc_norm": 0.8903225806451613,
"acc_norm_stderr": 0.017776778700485184
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6650246305418719,
"acc_stderr": 0.033208527423483104,
"acc_norm": 0.6650246305418719,
"acc_norm_stderr": 0.033208527423483104
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9393939393939394,
"acc_stderr": 0.016999994927421592,
"acc_norm": 0.9393939393939394,
"acc_norm_stderr": 0.016999994927421592
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9896373056994818,
"acc_stderr": 0.007308424386792194,
"acc_norm": 0.9896373056994818,
"acc_norm_stderr": 0.007308424386792194
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8051282051282052,
"acc_stderr": 0.020083167595181393,
"acc_norm": 0.8051282051282052,
"acc_norm_stderr": 0.020083167595181393
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.4777777777777778,
"acc_stderr": 0.030455413985678408,
"acc_norm": 0.4777777777777778,
"acc_norm_stderr": 0.030455413985678408
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8613445378151261,
"acc_stderr": 0.022448264476832593,
"acc_norm": 0.8613445378151261,
"acc_norm_stderr": 0.022448264476832593
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5695364238410596,
"acc_stderr": 0.04042809961395634,
"acc_norm": 0.5695364238410596,
"acc_norm_stderr": 0.04042809961395634
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9357798165137615,
"acc_stderr": 0.010510494713201403,
"acc_norm": 0.9357798165137615,
"acc_norm_stderr": 0.010510494713201403
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6712962962962963,
"acc_stderr": 0.032036140846700596,
"acc_norm": 0.6712962962962963,
"acc_norm_stderr": 0.032036140846700596
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9166666666666666,
"acc_stderr": 0.019398452135813905,
"acc_norm": 0.9166666666666666,
"acc_norm_stderr": 0.019398452135813905
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9071729957805907,
"acc_stderr": 0.018889750550956715,
"acc_norm": 0.9071729957805907,
"acc_norm_stderr": 0.018889750550956715
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7937219730941704,
"acc_stderr": 0.027157150479563824,
"acc_norm": 0.7937219730941704,
"acc_norm_stderr": 0.027157150479563824
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8778625954198473,
"acc_stderr": 0.02871877688934232,
"acc_norm": 0.8778625954198473,
"acc_norm_stderr": 0.02871877688934232
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8925619834710744,
"acc_stderr": 0.028268812192540616,
"acc_norm": 0.8925619834710744,
"acc_norm_stderr": 0.028268812192540616
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8518518518518519,
"acc_stderr": 0.03434300243630999,
"acc_norm": 0.8518518518518519,
"acc_norm_stderr": 0.03434300243630999
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.852760736196319,
"acc_stderr": 0.027839915278339653,
"acc_norm": 0.852760736196319,
"acc_norm_stderr": 0.027839915278339653
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6160714285714286,
"acc_stderr": 0.04616143075028546,
"acc_norm": 0.6160714285714286,
"acc_norm_stderr": 0.04616143075028546
},
"harness|hendrycksTest-management|5": {
"acc": 0.8640776699029126,
"acc_stderr": 0.03393295729761011,
"acc_norm": 0.8640776699029126,
"acc_norm_stderr": 0.03393295729761011
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9487179487179487,
"acc_stderr": 0.014450181176872736,
"acc_norm": 0.9487179487179487,
"acc_norm_stderr": 0.014450181176872736
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.86,
"acc_stderr": 0.034873508801977725,
"acc_norm": 0.86,
"acc_norm_stderr": 0.034873508801977725
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9169859514687101,
"acc_stderr": 0.009866287394639536,
"acc_norm": 0.9169859514687101,
"acc_norm_stderr": 0.009866287394639536
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8410404624277457,
"acc_stderr": 0.019685307033571946,
"acc_norm": 0.8410404624277457,
"acc_norm_stderr": 0.019685307033571946
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7418994413407821,
"acc_stderr": 0.014635185616527836,
"acc_norm": 0.7418994413407821,
"acc_norm_stderr": 0.014635185616527836
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.020279402936174588,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.020279402936174588
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8488745980707395,
"acc_stderr": 0.020342749744428634,
"acc_norm": 0.8488745980707395,
"acc_norm_stderr": 0.020342749744428634
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8580246913580247,
"acc_stderr": 0.019420260109438287,
"acc_norm": 0.8580246913580247,
"acc_norm_stderr": 0.019420260109438287
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6560283687943262,
"acc_stderr": 0.028338017428611334,
"acc_norm": 0.6560283687943262,
"acc_norm_stderr": 0.028338017428611334
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.6108213820078227,
"acc_stderr": 0.012452613934287014,
"acc_norm": 0.6108213820078227,
"acc_norm_stderr": 0.012452613934287014
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8419117647058824,
"acc_stderr": 0.02216146260806852,
"acc_norm": 0.8419117647058824,
"acc_norm_stderr": 0.02216146260806852
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.015422512066262549,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.015422512066262549
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.040693063197213754,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.040693063197213754
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8326530612244898,
"acc_stderr": 0.02389714476891452,
"acc_norm": 0.8326530612244898,
"acc_norm_stderr": 0.02389714476891452
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8955223880597015,
"acc_stderr": 0.021628920516700643,
"acc_norm": 0.8955223880597015,
"acc_norm_stderr": 0.021628920516700643
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.93,
"acc_stderr": 0.0256432399976243,
"acc_norm": 0.93,
"acc_norm_stderr": 0.0256432399976243
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598053,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598053
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8830409356725146,
"acc_stderr": 0.02464806896136616,
"acc_norm": 0.8830409356725146,
"acc_norm_stderr": 0.02464806896136616
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6682986536107711,
"mc1_stderr": 0.01648214881024148,
"mc2": 0.7593481584480776,
"mc2_stderr": 0.014270713709869645
},
"harness|winogrande|5": {
"acc": 0.850828729281768,
"acc_stderr": 0.010012598805627305
},
"harness|gsm8k|5": {
"acc": 0.7983320697498104,
"acc_stderr": 0.011052295889544391
}
}
```
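The per-task scores above are plain nested dictionaries, so aggregates such as a mean over the MMLU (`hendrycksTest`) subtasks are easy to recompute. A sketch using three of the values reproduced from the JSON above (the full run covers all 57 MMLU subtasks):

```python
# A few per-task accuracies copied from the results JSON above.
results = {
    "harness|hendrycksTest-college_biology|5": {"acc": 0.9305555555555556},
    "harness|hendrycksTest-college_chemistry|5": {"acc": 0.53},
    "harness|hendrycksTest-college_physics|5": {"acc": 0.5588235294117647},
}

# Select the MMLU subtasks and average their accuracies.
mmlu = {k: v for k, v in results.items() if k.startswith("harness|hendrycksTest-")}
mean_acc = sum(v["acc"] for v in mmlu.values()) / len(mmlu)
print(f"mean acc over {len(mmlu)} subtasks: {mean_acc:.4f}")
# mean acc over 3 subtasks: 0.6731
```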
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
AdapterOcean/med_alpaca_standardized_cluster_49_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 5207935
num_examples: 14054
download_size: 2113207
dataset_size: 5207935
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_49_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/gloucester_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of gloucester/グロスター/格罗斯特 (Azur Lane)
This is the dataset of gloucester/グロスター/格罗斯特 (Azur Lane), containing 38 images and their tags.
The core tags of this character are `breasts, purple_hair, short_hair, yellow_eyes, large_breasts, hairband, bangs, hair_over_one_eye`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 38 | 45.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gloucester_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 38 | 30.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gloucester_azurlane/resolve/main/dataset-800.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800  | 92 | 63.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gloucester_azurlane/resolve/main/dataset-stage3-p480-800.zip)    | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |
| 1200             | 38 | 41.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gloucester_azurlane/resolve/main/dataset-1200.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 92 | 80.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gloucester_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
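The IMG+TXT packages above store each image next to a comma-separated tag file sharing the same stem. Once an archive is extracted, the pairs can be collected with plain standard-library code; the sketch below assumes that layout, and `load_img_txt_pairs` is an illustrative helper, not part of any dataset tooling:

```python
import os


def load_img_txt_pairs(dataset_dir):
    """Collect (image_path, tags) pairs from an extracted IMG+TXT package."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        txt_path = os.path.join(dataset_dir, stem + ".txt")
        if not os.path.exists(txt_path):
            continue  # skip images without a matching tag file
        with open(txt_path, encoding="utf-8") as f:
            tags = [t.strip() for t in f.read().split(",") if t.strip()]
        pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```

Each returned entry holds the image path and its tag list, ready for a training pipeline.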
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/gloucester_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; distinct outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, dress, solo, underboob_cutout, looking_at_viewer, juliet_sleeves, simple_background, white_background, black_gloves, maid_apron |
| 1 | 12 |  |  |  |  |  | 1girl, black_gloves, china_dress, black_pantyhose, official_alternate_costume, solo, thighband_pantyhose, feather_boa, hair_ornament, looking_at_viewer, shrug_(clothing), chain, red_flower, thigh_strap, indoors, pelvic_curtain, cleavage_cutout, covered_navel, hair_between_eyes, hair_intakes, purple_dress, standing, window |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dress | solo | underboob_cutout | looking_at_viewer | juliet_sleeves | simple_background | white_background | black_gloves | maid_apron | china_dress | black_pantyhose | official_alternate_costume | thighband_pantyhose | feather_boa | hair_ornament | shrug_(clothing) | chain | red_flower | thigh_strap | indoors | pelvic_curtain | cleavage_cutout | covered_navel | hair_between_eyes | hair_intakes | purple_dress | standing | window |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-------------------|:--------------------|:-----------------|:--------------------|:-------------------|:---------------|:-------------|:--------------|:------------------|:-----------------------------|:----------------------|:--------------|:----------------|:-------------------|:--------|:-------------|:--------------|:----------|:-----------------|:------------------|:----------------|:--------------------|:---------------|:---------------|:-----------|:---------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | | X | | X | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
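Since each cluster above is just a set of tags, images belonging to one outfit can be selected by checking how many of a cluster's tags appear in an image's tag list. A minimal sketch, where `match_cluster` and the default 0.8 threshold are illustrative assumptions rather than part of the dataset tooling:

```python
def match_cluster(image_tags, cluster_tags, threshold=0.8):
    """Return True if at least `threshold` of the cluster's tags
    appear among the image's tags."""
    image_tags = set(image_tags)
    hits = sum(1 for t in cluster_tags if t in image_tags)
    return hits / len(cluster_tags) >= threshold
```

Lowering the threshold trades precision for recall when a cluster's tag set is noisy.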
|
Niankaphell/Arsenium-Voice | ---
license: openrail
---
|
emilykang/cardiology_train | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 725321961.5
num_examples: 1500
download_size: 712987464
dataset_size: 725321961.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Eitanli/abstracts | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: recall
dtype: int64
- name: article_title
dtype: string
- name: topic
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 232927086.52719492
num_examples: 135922
- name: test
num_bytes: 29117171.077408876
num_examples: 16991
- name: valid
num_bytes: 29115457.395396195
num_examples: 16990
download_size: 157551845
dataset_size: 291159715.0
---
# Dataset Card for "abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
toilaluan/tuned_prompt_ig_db_v1 | ---
dataset_info:
features:
- name: image
dtype: image
- name: topic
dtype: string
- name: prompt
dtype: string
- name: request_id
dtype: int64
- name: model_type
dtype: string
splits:
- name: train
num_bytes: 852360042.0
num_examples: 18000
download_size: 1308058237
dataset_size: 852360042.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tuned_prompt_ig_db_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-122500 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 15379006677
num_examples: 2500
download_size: 3089755171
dataset_size: 15379006677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_allknowingroger__TripleMerge2-7B-Ties | ---
pretty_name: Evaluation run of allknowingroger/TripleMerge2-7B-Ties
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [allknowingroger/TripleMerge2-7B-Ties](https://huggingface.co/allknowingroger/TripleMerge2-7B-Ties)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_allknowingroger__TripleMerge2-7B-Ties\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-10T20:36:21.255305](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__TripleMerge2-7B-Ties/blob/main/results_2024-04-10T20-36-21.255305.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6498983296015911,\n\
\ \"acc_stderr\": 0.03208411533469777,\n \"acc_norm\": 0.6490316915091527,\n\
\ \"acc_norm_stderr\": 0.032757912146706654,\n \"mc1\": 0.6230110159118727,\n\
\ \"mc1_stderr\": 0.01696551757893035,\n \"mc2\": 0.7718738301935896,\n\
\ \"mc2_stderr\": 0.013854169177751263\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7107508532423208,\n \"acc_stderr\": 0.013250012579393441,\n\
\ \"acc_norm\": 0.735494880546075,\n \"acc_norm_stderr\": 0.012889272949313368\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7131049591714798,\n\
\ \"acc_stderr\": 0.004513877465062106,\n \"acc_norm\": 0.8886675960963951,\n\
\ \"acc_norm_stderr\": 0.0031390048159258667\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n\
\ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\
\ \"acc_stderr\": 0.03586879280080341,\n \"acc_norm\": 0.7569444444444444,\n\
\ \"acc_norm_stderr\": 0.03586879280080341\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n\
\ \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542126,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542126\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108101,\n\
\ \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108101\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n\
\ \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894443,\n \"\
acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894443\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n\
\ \"acc_stderr\": 0.023540799358723292,\n \"acc_norm\": 0.7806451612903226,\n\
\ \"acc_norm_stderr\": 0.023540799358723292\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"\
acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328973,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328973\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.658974358974359,\n \"acc_stderr\": 0.02403548967633508,\n \
\ \"acc_norm\": 0.658974358974359,\n \"acc_norm_stderr\": 0.02403548967633508\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028593,\n \
\ \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028593\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \
\ \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669237,\n \"\
acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669237\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8431372549019608,\n \"acc_stderr\": 0.02552472232455334,\n \"\
acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.02552472232455334\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290916,\n \
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.013586619219903347,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.013586619219903347\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7341040462427746,\n \"acc_stderr\": 0.02378620325550829,\n\
\ \"acc_norm\": 0.7341040462427746,\n \"acc_norm_stderr\": 0.02378620325550829\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.43687150837988825,\n\
\ \"acc_stderr\": 0.016588680864530622,\n \"acc_norm\": 0.43687150837988825,\n\
\ \"acc_norm_stderr\": 0.016588680864530622\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818733,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818733\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\
\ \"acc_stderr\": 0.02575586592263295,\n \"acc_norm\": 0.7106109324758842,\n\
\ \"acc_norm_stderr\": 0.02575586592263295\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47392438070404175,\n\
\ \"acc_stderr\": 0.01275285834653313,\n \"acc_norm\": 0.47392438070404175,\n\
\ \"acc_norm_stderr\": 0.01275285834653313\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n\
\ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093092,\n \
\ \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093092\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6230110159118727,\n\
\ \"mc1_stderr\": 0.01696551757893035,\n \"mc2\": 0.7718738301935896,\n\
\ \"mc2_stderr\": 0.013854169177751263\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8476716653512234,\n \"acc_stderr\": 0.01009920824606559\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7035633055344959,\n \
\ \"acc_stderr\": 0.01257939823558952\n }\n}\n```"
repo_url: https://huggingface.co/allknowingroger/TripleMerge2-7B-Ties
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|arc:challenge|25_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|gsm8k|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hellaswag|10_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T20-36-21.255305.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-10T20-36-21.255305.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- '**/details_harness|winogrande|5_2024-04-10T20-36-21.255305.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-10T20-36-21.255305.parquet'
- config_name: results
data_files:
- split: 2024_04_10T20_36_21.255305
path:
- results_2024-04-10T20-36-21.255305.parquet
- split: latest
path:
- results_2024-04-10T20-36-21.255305.parquet
---
# Dataset Card for Evaluation run of allknowingroger/TripleMerge2-7B-Ties
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [allknowingroger/TripleMerge2-7B-Ties](https://huggingface.co/allknowingroger/TripleMerge2-7B-Ties) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_allknowingroger__TripleMerge2-7B-Ties",
	"harness_winogrande_5",
	split="latest")
```
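Because each run is stored under a split named with its timestamp, the most recent run can also be recovered by sorting the split names directly (a minimal sketch; the split list below is illustrative, matching the single run declared in this card's configs — in practice the "latest" alias already points there):

```python
# Hypothetical list of timestamped split names, as they appear in this
# dataset's configurations (one entry per evaluation run).
splits = [
    "2024_04_10T20_36_21.255305",
]

# The timestamp layout "YYYY_MM_DDTHH_MM_SS" sorts lexicographically in
# chronological order, so the maximum string is the most recent run.
most_recent = max(splits)
print(most_recent)  # → 2024_04_10T20_36_21.255305
```

This is why the "latest" split can be maintained as a simple alias: re-pointing it after each run only requires comparing split names.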
## Latest results
These are the [latest results from run 2024-04-10T20:36:21.255305](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__TripleMerge2-7B-Ties/blob/main/results_2024-04-10T20-36-21.255305.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6498983296015911,
"acc_stderr": 0.03208411533469777,
"acc_norm": 0.6490316915091527,
"acc_norm_stderr": 0.032757912146706654,
"mc1": 0.6230110159118727,
"mc1_stderr": 0.01696551757893035,
"mc2": 0.7718738301935896,
"mc2_stderr": 0.013854169177751263
},
"harness|arc:challenge|25": {
"acc": 0.7107508532423208,
"acc_stderr": 0.013250012579393441,
"acc_norm": 0.735494880546075,
"acc_norm_stderr": 0.012889272949313368
},
"harness|hellaswag|10": {
"acc": 0.7131049591714798,
"acc_stderr": 0.004513877465062106,
"acc_norm": 0.8886675960963951,
"acc_norm_stderr": 0.0031390048159258667
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.03586879280080341,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.03586879280080341
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542126,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542126
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108101,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108101
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.02533120243894443,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.02533120243894443
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723292,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723292
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.658974358974359,
"acc_stderr": 0.02403548967633508,
"acc_norm": 0.658974358974359,
"acc_norm_stderr": 0.02403548967633508
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028593,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8495412844036697,
"acc_stderr": 0.015328563932669237,
"acc_norm": 0.8495412844036697,
"acc_norm_stderr": 0.015328563932669237
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.02552472232455334,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.02552472232455334
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.025744902532290916,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.025744902532290916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903347,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903347
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7341040462427746,
"acc_stderr": 0.02378620325550829,
"acc_norm": 0.7341040462427746,
"acc_norm_stderr": 0.02378620325550829
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.43687150837988825,
"acc_stderr": 0.016588680864530622,
"acc_norm": 0.43687150837988825,
"acc_norm_stderr": 0.016588680864530622
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.02575586592263295,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.02575586592263295
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47392438070404175,
"acc_stderr": 0.01275285834653313,
"acc_norm": 0.47392438070404175,
"acc_norm_stderr": 0.01275285834653313
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.018901015322093092,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.018901015322093092
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6230110159118727,
"mc1_stderr": 0.01696551757893035,
"mc2": 0.7718738301935896,
"mc2_stderr": 0.013854169177751263
},
"harness|winogrande|5": {
"acc": 0.8476716653512234,
"acc_stderr": 0.01009920824606559
},
"harness|gsm8k|5": {
"acc": 0.7035633055344959,
"acc_stderr": 0.01257939823558952
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
tat1111/cad_blockchain_smartcontract | ---
license: afl-3.0
language:
- en
tags:
- smartcontract
- blockchain
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
This dataset is collected from the top tokens on Etherscan whose contracts are verified and open source. It contains useful information about each token, such as the token address, holders, source code, and so on.
In addition, we pay particular attention to analyzing the source code of each token's contract, which is captured in the tags column of this dataset. The tag values contain an LLM analysis of the contract source code, which is useful for Solidity contract generation by LLMs.
### Dataset Summary
This dataset contains 877 rows in total. Each row includes the following features:
- token name (string): the name of the token, collected from Etherscan's top-token list
- max_total_supply (integer): the maximum total supply of this token
- holders (string): the number of holders of this token
- total_transfers (integer): the number of token transfer transactions for this token
- circulating_supply_market_cap (string): the total market value of the token based on its circulating supply
- fully_diluted_market_cap (string): the total market value of the token based on its maximum (fully diluted) supply
- contract_address (string): the address of this token's contract
- source_code (string): the source code of the contracts
- abi (string): the ABI (application binary interface) of the source code
- tags (json): the LLM analysis of the source code, stored as JSON. The structure of tags is:
```python
{
"Pragma": <Pragma>,
"Contracs": [
{
"name": "<Contact_name>",
"role": "<Contract_role>" ,
"functions": { "<func_name>": "<func_role>" },
"modifier": { "<modifier_name>": "<modifier_role>" }
}
],
"Interface": [
{
"name": "<Interface_name>",
"role": "<Interface_role>" ,
"functions": { "<func_name>": "<func_role>" },
"modifier": { "<modifier_name>": "<modifier_role>" }
}
],
"Library": [
{
"name": "<Library_name>",
"role": "<Library_role>" ,
"functions": { "<func_name>": "<func_role>" },
"modifier": { "<modifier_name>": "<modifier_role>" }
}
],
}
```
The tags value contains the name and role of each contract, library, and interface, along with the names and roles of the functions within it. These tags help weaker LLMs clearly figure out what users need and feed back the correct answer.
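Given the schema above, the sketch below shows one way the tags JSON for a row might be parsed with the standard library. The sample payload is illustrative only (it is not drawn from the dataset); the key names follow the schema exactly as documented, including its spelling.

```python
import json

# A minimal tags payload following the documented schema
# (contents are illustrative, not taken from the dataset).
sample_tags = json.dumps({
    "Pragma": "solidity ^0.8.0",
    "Contracs": [
        {
            "name": "MyToken",
            "role": "ERC-20 token implementation",
            "functions": {"transfer": "moves tokens between accounts"},
            "modifier": {"onlyOwner": "restricts calls to the contract owner"},
        }
    ],
    "Interface": [],
    "Library": [],
})

# Walk the parsed structure and list each contract with its functions.
tags = json.loads(sample_tags)
for contract in tags.get("Contracs", []):
    print(contract["name"], "-", contract["role"])
    for func_name, func_role in contract["functions"].items():
        print(f"  {func_name}: {func_role}")
```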
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- The dataset is in the English language (en).
- Smart contracts (source code) are written in the Solidity programming language.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- token name (string): the name of the token, collected from Etherscan's top-token list
- max_total_supply (integer): the maximum total supply of this token
- holders (string): the number of holders of this token
- total_transfers (integer): the number of token transfer transactions for this token
- circulating_supply_market_cap (string): the total market value of the token based on its circulating supply
- fully_diluted_market_cap (string): the total market value of the token based on its maximum (fully diluted) supply
- contract_address (string): the address of this token's contract
- source_code (string): the source code of the contracts
- abi (string): the ABI (application binary interface) of the source code
- tags (json): the LLM analysis of the source code, stored as JSON (see the structure in the Dataset Description above)
## Dataset Creation
To collect the token information (everything except tags), we used beautifulsoup4 to crawl contracts from Etherscan's top-token list.
For the tags, we built a tool called the “Labeling Tool for Smart Contract Dataset Based on LLM”.
This tool uses an LLM such as GPT-3.5 to figure out the structure of the contracts and the role of each part.
We also made a SmartContractTagging agent to complete this task. You can find our code at this GitHub link: xxxx
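As a rough illustration of the tagging step: an agent like this typically assembles a prompt that pairs the contract source with the expected output schema. The sketch below is hypothetical — the prompt wording, constant, and function name are illustrative, not the tool's actual internals.

```python
# Hypothetical prompt-construction sketch for an LLM-based tagging agent.
# The schema hint mirrors the tags structure documented above.
TAG_SCHEMA_HINT = (
    '{"Pragma": "...", "Contracs": [{"name": "...", "role": "...", '
    '"functions": {}, "modifier": {}}], "Interface": [], "Library": []}'
)

def build_tagging_prompt(source_code: str) -> str:
    """Build an instruction asking an LLM to emit the tags JSON for one contract."""
    return (
        "Analyze the following Solidity source code and return a JSON object "
        f"matching this schema: {TAG_SCHEMA_HINT}\n\n"
        f"Source code:\n{source_code}"
    )

prompt = build_tagging_prompt("pragma solidity ^0.8.0; contract MyToken {}")
print(prompt.splitlines()[0])
```

The prompt string would then be sent to the LLM, and the JSON in its reply stored in the tags column.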
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
|
Aff4n20/ancient-coin-dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 96470971.064
num_examples: 2128
download_size: 89767532
dataset_size: 96470971.064
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/cbf4595f | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1340
dataset_size: 180
---
# Dataset Card for "cbf4595f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jan-hq/rag_hallucination_dataset_1000_binarized | ---
language:
- en
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 6835446.0
num_examples: 900
- name: test
num_bytes: 759494.0
num_examples: 100
download_size: 4627149
dataset_size: 7594940.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
heliosprime/twitter_dataset_1712968385 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 8198
num_examples: 19
download_size: 8332
dataset_size: 8198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1712968385"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arize-ai/beer_reviews_label_drift_neutral | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### language
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
autoevaluate/autoeval-eval-conll2003-conll2003-bc26c9-1485554295 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: jjglilleberg/bert-finetuned-ner
metrics: []
dataset_name: conll2003
dataset_config: conll2003
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: jjglilleberg/bert-finetuned-ner
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
Rewcifer/ct_scans_90pct_2048_cutoff_falcon | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 880984917.8756735
num_examples: 176406
download_size: 166046892
dataset_size: 880984917.8756735
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ct_scans_90pct_2048_cutoff_falcon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MikhailT/lj-speech | ---
dataset_info:
features:
- name: file
dtype: string
- name: spoken_text
dtype: string
- name: normalized_text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 22050
splits:
- name: full
num_bytes: 3801342754
num_examples: 13100
download_size: 3785600048
dataset_size: 3801342754
license: cc-by-4.0
language:
- en
pretty_name: LJ Speech
size_categories:
- 10K<n<100K
---
# LJ Speech Dataset |
kelen0102/Ornn_League_of_Legends | ---
license: openrail
---
|
vikenkd/mini-python_code_instructions | ---
license: mit
dataset_info:
features:
- name: Instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 3688786
num_examples: 1000
download_size: 1571003
dataset_size: 3688786
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
refresd | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
- fr
license:
- mit
multilinguality:
- translation
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-wikimatrix
task_categories:
- text-classification
- translation
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring
paperswithcode_id: refresd
pretty_name: Rationalized English-French Semantic Divergences
dataset_info:
features:
- name: sentence_en
dtype: string
- name: sentence_fr
dtype: string
- name: label
dtype:
class_label:
names:
'0': divergent
'1': equivalent
- name: all_labels
dtype:
class_label:
names:
'0': unrelated
'1': some_meaning_difference
'2': no_meaning_difference
- name: rationale_en
dtype: string
- name: rationale_fr
dtype: string
splits:
- name: train
num_bytes: 501562
num_examples: 1039
download_size: 503977
dataset_size: 501562
---
# Dataset Card for REFreSD Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/Elbria/xling-SemDiv/tree/master/REFreSD)
- **Repository:** [Github](https://github.com/Elbria/xling-SemDiv/)
- **Paper:** [Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank](https://www.aclweb.org/anthology/2020.emnlp-main.121)
- **Leaderboard:**
- **Point of Contact:** [Eleftheria Briakou](mailto:ebriakou@cs.umd.edu)
- **Additional Documentation:** [Annotation workflow, data statement, DataSheet, and IRB documentation](https://elbria.github.io/post/refresd/)
### Dataset Summary
The Rationalized English-French Semantic Divergences (REFreSD) dataset consists of 1,039 English-French sentence-pairs annotated with sentence-level divergence judgments and token-level rationales. The project under which REFreSD was collected aims to advance our fundamental understanding of computational representations and methods for comparing and contrasting text meaning across languages.
### Supported Tasks and Leaderboards
`semantic-similarity-classification` and `semantic-similarity-scoring`: This dataset can be used to assess the ability of computational methods to detect meaning mismatches between languages. Model performance is measured in terms of accuracy by comparing the model predictions with the human judgments in REFreSD. Details about the results of a BERT-based model, Divergent mBERT, on this dataset can be found in the [paper](https://www.aclweb.org/anthology/2020.emnlp-main.121).
### Languages
The text is in English and French as found on Wikipedia. The associated BCP-47 codes are `en` and `fr`.
## Dataset Structure
### Data Instances
Each data point looks like this:
```python
{
'sentence_pair': {'en': 'The invention of farming some 10,000 years ago led to the development of agrarian societies , whether nomadic or peasant , the latter in particular almost always dominated by a strong sense of traditionalism .',
'fr': "En quelques décennies , l' activité économique de la vallée est passée d' une mono-activité agricole essentiellement vivrière , à une quasi mono-activité touristique , si l' on excepte un artisanat du bâtiment traditionnel important , en partie saisonnier ."}
'label': 0,
'all_labels': 0,
'rationale_en': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
'rationale_fr': [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3],
}
```
### Data Fields
- `sentence_pair`: Dictionary of sentences containing the following field.
- `en`: The English sentence.
- `fr`: The corresponding (or not) French sentence.
- `label`: Binary. Whether both sentences correspond. `{0:divergent, 1:equivalent}`
- `all_labels`: 3-class label `{0: "unrelated", 1: "some_meaning_difference", 2:"no_meaning_difference"}`. The first two are sub-classes of the `divergent` label.
- `rationale_en`: A list of integers from 0-3 indicating the number of annotators who highlighted the token of the text in the English sentence during annotation. Word-aligned rationale for the divergent/equivalent label, from English.
- `rationale_fr`: A list of integers from 0-3 indicating the number of annotators who highlighted the token of the text in the French sentence during annotation. Word-aligned rationale for the divergent/equivalent label, from French.
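Given the fields above, a minimal sketch of recovering highlighted rationale spans from the per-token annotator counts, using the first tokens of the French sentence from the data instance shown earlier. The majority threshold of 2 (out of 3 annotators) is an assumption for illustration.

```python
# First twelve tokens of the French sentence from the data instance above,
# paired with the first twelve values of its `rationale_fr` field.
sentence_fr = (
    "En quelques décennies , l' activité économique de la vallée est passée"
).split()
rationale_fr = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1]

# Keep tokens highlighted by a majority (>= 2) of the three annotators.
highlighted = [
    token
    for token, votes in zip(sentence_fr, rationale_fr)
    if votes >= 2
]
print(highlighted)
```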
### Data Splits
The dataset contains 1039 sentence pairs in a single `"train"` split. Of these pairs, 64% are annotated as divergent, and 40% contain fine-grained meaning divergences.
| Label | Number of Instances |
| ----------------------- | ------------------- |
| Unrelated | 252 |
| Some meaning difference | 418 |
| No meaning difference   | 369                 |
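The 64% and 40% figures quoted above can be re-derived from the label counts in the table; a quick stdlib sanity check:

```python
# Label counts from the table above; 'unrelated' and 'some_meaning_difference'
# are the two sub-classes of the binary 'divergent' label.
counts = {
    "unrelated": 252,
    "some_meaning_difference": 418,
    "no_meaning_difference": 369,
}
total = sum(counts.values())
divergent = counts["unrelated"] + counts["some_meaning_difference"]

print(total)                        # 1039 sentence pairs
print(round(divergent / total, 2))  # 0.64 -> "64% are annotated as divergent"
print(round(counts["some_meaning_difference"] / total, 2))  # 0.4 -> the "40%" figure
```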
## Dataset Creation
### Curation Rationale
The curators chose the English-French section of the WikiMatrix corpus because (1) it is likely to contain diverse, interesting divergence types since it consists of mined parallel sentences of diverse topics which are not necessarily generated by (human) translations, and (2) Wikipedia and WikiMatrix are widely used resources to train semantic representations and perform cross-lingual transfer in NLP.
### Source Data
#### Initial Data Collection and Normalization
The source for this corpus is the English and French portion of the [WikiMatrix corpus](https://arxiv.org/abs/1907.05791), which itself was extracted from Wikipedia articles. The curators excluded noisy samples by filtering out sentence pairs that a) were too short or too long, b) consisted mostly of numbers, or c) had a small token-level edit difference.
#### Who are the source language producers?
Some content of Wikipedia articles has been (human) translated from existing articles in another language while others have been written or edited independently in each language. Therefore, information on how the original text is created is not available.
### Annotations
#### Annotation process
The annotations were collected over the span of three weeks in April 2020. Annotators were presented with an English sentence and a French sentence. First, they highlighted spans and labeled them as 'added', 'changed', or 'other', where added spans contain information not contained in the other sentence, changed spans contain some information that is in the other sentence but whose meaning is not the same, and other spans have some different meaning not covered in the previous two cases, such as idioms. They then assessed the relation between the two sentences as either 'unrelated', 'some meaning differences', or 'no meaning difference'. See the [annotation guidelines](https://elbria.github.io/post/refresd/files/REFreSD_Annotation_Guidelines.pdf) for more information about the task and the annotation interface, and see the [DataSheet](https://elbria.github.io/post/refresd/files/REFreSD_Datasheet.pdf) for information about the annotator compensation.
The following table contains Inter-Annotator Agreement metrics for the dataset:
| Granularity | Method | IAA |
| ----------- | --------------- | ------------ |
| Sentence    | Krippendorff's α | 0.60         |
| Span | macro F1 | 45.56 ± 7.60 |
| Token | macro F1 | 33.94 ± 8.24 |
#### Who are the annotators?
This dataset includes annotations from 6 participants recruited from the University of Maryland, College Park (UMD) educational institution. Participants ranged in age from 20–25 years, including one man and five women. For each participant, the curators ensured they were proficient in both languages of interest: three of them self-reported as English native speakers, one as a French native speaker, and two as bilingual English-French speakers.
### Personal and Sensitive Information
The dataset contains discussions of people as they appear in Wikipedia articles. It does not contain confidential information, nor does it contain identifying information about the source language producers or the annotators.
## Considerations for Using the Data
### Social Impact of Dataset
Models that are successful in the supported task require sophisticated semantic representations at the sentence level beyond the combined representations of the individual tokens in isolation. Such models could be used to curate parallel corpora for tasks like machine translation, cross-lingual transfer learning, or semantic modeling.
The statements in the dataset, however, are not necessarily representative of the world and may overrepresent one worldview if one language is primarily translated to, rather than an equal distribution of translations between the languages.
### Discussion of Biases
The English Wikipedia is known to have significantly more [contributors](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F) who identify as male than any other gender and who reside in either North America or Europe. This leads to an overrepresentation of male perspectives from these locations in the corpus in terms of both the topics covered and the language used to talk about those topics. It's not clear to what degree this holds true for the French Wikipedia. The REFreSD dataset itself has not yet been examined for the degree to which it contains the gender and other biases seen in the larger Wikipedia datasets.
### Other Known Limitations
It is unknown how many of the sentences in the dataset were written independently, and how many were written as [translations](https://en.wikipedia.org/wiki/Wikipedia:Translation) by either humans or machines from some other language to the languages of interest in this dataset.
## Additional Information
### Dataset Curators
The dataset curators are Eleftheria Briakou and Marine Carpuat, who are both affiliated with the University of Maryland, College Park's Department of Computer Science.
### Licensing Information
The project is licensed under the [MIT License](https://github.com/Elbria/xling-SemDiv/blob/master/LICENSE).
### Citation Information
```BibTeX
@inproceedings{briakou-carpuat-2020-detecting,
title = "Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank",
author = "Briakou, Eleftheria and Carpuat, Marine",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.121",
pages = "1563--1580",
}
```
### Contributions
Thanks to [@mpariente](https://github.com/mpariente) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset. |
kye/metamath-mistal-tokenized-16384 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 485833040
num_examples: 5930
download_size: 131269443
dataset_size: 485833040
---
# Dataset Card for "metamath-mistal-tokenized-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marcelofj/teste | ---
license: other
license_name: teste
license_link: LICENSE
---
|
ductai199x/synth-vid-detect | ---
license: cc-by-nc-sa-4.0
---
|
iohadrubin/top_terms | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: value
dtype: string
splits:
- name: train
num_bytes: 49818
num_examples: 64
download_size: 31740
dataset_size: 49818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "top_terms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nikosafo/dipl_original | ---
license: mit
---
|
severo/doc-image-7 | ---
size_categories:
- n<1K
---
# [doc] image dataset 7
This dataset contains 2 jpg image files in the /green directory, and 2 jpg image files in the /red directory.
|
CyberHarem/goto_hitori_bocchitherock | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Gotō Hitori
This is the dataset of Gotō Hitori, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 648 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 300 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 300 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 648 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 648 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 648 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
TigerResearch/tigerbot-wiki-qa-bart-en-10k | ---
license: apache-2.0
language:
- en
---
[Tigerbot](https://github.com/TigerResearch/TigerBot) English wiki-style question answering data.
Original source: [https://huggingface.co/datasets/michaelthwan/oa_wiki_qa_bart_10000row](https://huggingface.co/datasets/michaelthwan/oa_wiki_qa_bart_10000row)
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-wiki-qa-bart-en-10k')
``` |
mkqa | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ar
- da
- de
- en
- es
- fi
- fr
- he
- hu
- it
- ja
- km
- ko
- ms
- nl
- 'no'
- pl
- pt
- ru
- sv
- th
- tr
- vi
- zh
license:
- cc-by-3.0
multilinguality:
- multilingual
- translation
size_categories:
- 10K<n<100K
source_datasets:
- extended|natural_questions
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: mkqa
pretty_name: Multilingual Knowledge Questions and Answers
dataset_info:
features:
- name: example_id
dtype: string
- name: queries
struct:
- name: ar
dtype: string
- name: da
dtype: string
- name: de
dtype: string
- name: en
dtype: string
- name: es
dtype: string
- name: fi
dtype: string
- name: fr
dtype: string
- name: he
dtype: string
- name: hu
dtype: string
- name: it
dtype: string
- name: ja
dtype: string
- name: ko
dtype: string
- name: km
dtype: string
- name: ms
dtype: string
- name: nl
dtype: string
- name: 'no'
dtype: string
- name: pl
dtype: string
- name: pt
dtype: string
- name: ru
dtype: string
- name: sv
dtype: string
- name: th
dtype: string
- name: tr
dtype: string
- name: vi
dtype: string
- name: zh_cn
dtype: string
- name: zh_hk
dtype: string
- name: zh_tw
dtype: string
- name: query
dtype: string
- name: answers
struct:
- name: ar
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: da
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: de
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: en
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: es
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: fi
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: fr
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: he
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: hu
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: it
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: ja
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: ko
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: km
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: ms
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: nl
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: 'no'
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: pl
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: pt
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: ru
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: sv
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: th
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: tr
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: vi
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: zh_cn
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: zh_hk
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
- name: zh_tw
list:
- name: type
dtype:
class_label:
names:
'0': entity
'1': long_answer
'2': unanswerable
'3': date
'4': number
'5': number_with_unit
'6': short_phrase
'7': binary
- name: entity
dtype: string
- name: text
dtype: string
- name: aliases
list: string
config_name: mkqa
splits:
- name: train
num_bytes: 36005650
num_examples: 10000
download_size: 11903948
dataset_size: 36005650
---
# Dataset Card for MKQA: Multilingual Knowledge Questions & Answers
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- [**Homepage:**](https://github.com/apple/ml-mkqa/)
- [**Paper:**](https://arxiv.org/abs/2007.15207)
### Dataset Summary
MKQA contains 10,000 queries sampled from the [Google Natural Questions dataset](https://github.com/google-research-datasets/natural-questions).
For each query, new passage-independent answers were collected.
These queries and answers were then human-translated into 25 non-English languages.
### Supported Tasks and Leaderboards
`question-answering`
### Languages
| Language code | Language name |
|---------------|---------------|
| `ar` | `Arabic` |
| `da` | `Danish` |
| `de` | `German` |
| `en` | `English` |
| `es` | `Spanish` |
| `fi` | `Finnish` |
| `fr` | `French` |
| `he` | `Hebrew` |
| `hu` | `Hungarian` |
| `it` | `Italian` |
| `ja` | `Japanese` |
| `ko` | `Korean` |
| `km` | `Khmer` |
| `ms` | `Malay` |
| `nl` | `Dutch` |
| `no` | `Norwegian` |
| `pl` | `Polish` |
| `pt` | `Portuguese` |
| `ru` | `Russian` |
| `sv` | `Swedish` |
| `th` | `Thai` |
| `tr` | `Turkish` |
| `vi` | `Vietnamese` |
| `zh_cn` | `Chinese (Simplified)` |
| `zh_hk` | `Chinese (Hong kong)` |
| `zh_tw` | `Chinese (Traditional)` |
## Dataset Structure
### Data Instances
An example from the dataset looks as follows:
```
{
'example_id': 563260143484355911,
'queries': {
'en': "who sings i hear you knocking but you can't come in",
'ru': "кто поет i hear you knocking but you can't come in",
'ja': '「 I hear you knocking」は誰が歌っていますか',
'zh_cn': "《i hear you knocking but you can't come in》是谁演唱的",
...
},
'query': "who sings i hear you knocking but you can't come in",
'answers': {'en': [{'type': 'entity',
'entity': 'Q545186',
'text': 'Dave Edmunds',
'aliases': []}],
'ru': [{'type': 'entity',
'entity': 'Q545186',
'text': 'Эдмундс, Дэйв',
'aliases': ['Эдмундс', 'Дэйв Эдмундс', 'Эдмундс Дэйв', 'Dave Edmunds']}],
'ja': [{'type': 'entity',
'entity': 'Q545186',
'text': 'デイヴ・エドモンズ',
'aliases': ['デーブ・エドモンズ', 'デイブ・エドモンズ']}],
'zh_cn': [{'type': 'entity', 'text': '戴维·埃德蒙兹 ', 'entity': 'Q545186'}],
...
},
}
```
### Data Fields
Each example in the dataset contains the unique Natural Questions `example_id`, the original English `query`, and then `queries` and `answers` in 26 languages.
Each answer is labelled with an answer type. The breakdown is:
| Answer Type | Occurrence |
|---------------|---------------|
| `entity` | `4221` |
| `long_answer` | `1815` |
| `unanswerable` | `1427` |
| `date` | `1174` |
| `number` | `485` |
| `number_with_unit` | `394` |
| `short_phrase` | `346` |
| `binary` | `138` |
For each language, there can be more than one acceptable textual answer, in order to capture a variety of possible valid answers.
A detailed explanation of the fields is available [here](https://github.com/apple/ml-mkqa/#dataset).
When the `entity` field is not available, it is set to an empty string `''`.
When the `aliases` field is not available, it is set to an empty list `[]`.
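As an illustration of how the answer structure can be consumed, here is a small sketch (the helper name `textual_answers` is our own, not part of the dataset) that collects all acceptable answer strings for one language, including aliases, from an example of the shape shown above:

```python
def textual_answers(example, lang):
    """Collect all acceptable answer strings for one language,
    including aliases, from an MKQA example."""
    texts = []
    for ans in example["answers"].get(lang, []):
        if ans.get("text"):
            texts.append(ans["text"])
        texts.extend(ans.get("aliases", []))
    return texts


# Trimmed-down example mirroring the instance shown above
example = {
    "answers": {
        "en": [{"type": "entity", "entity": "Q545186",
                "text": "Dave Edmunds", "aliases": []}],
        "ru": [{"type": "entity", "entity": "Q545186",
                "text": "Эдмундс, Дэйв",
                "aliases": ["Эдмундс", "Дэйв Эдмундс",
                            "Эдмундс Дэйв", "Dave Edmunds"]}],
    }
}

print(textual_answers(example, "ru"))
```

Matching a prediction against this full set (rather than only the primary `text`) is one way to account for the multiple valid surface forms per language.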
### Data Splits
- Train: 10000
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[Google Natural Questions dataset](https://github.com/google-research-datasets/natural-questions)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-SA 3.0](https://github.com/apple/ml-mkqa#license)
### Citation Information
```
@misc{mkqa,
title = {MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering},
author = {Shayne Longpre and Yi Lu and Joachim Daiber},
year = {2020},
URL = {https://arxiv.org/pdf/2007.15207.pdf}
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. |
jahb57/gpt2_token_embeddings | ---
dataset_info:
features:
- name: sentences
dtype: string
splits:
- name: train
num_bytes: 351
num_examples: 4
download_size: 1620
dataset_size: 351
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rqchao/spongebob | ---
license: mit
---
|
Parikshith/grow-1-monolingual-1m-ha-en-scored | ---
dataset_info:
features:
- name: src
dtype: string
- name: mt
dtype: string
- name: score_wmt20-comet-qe-da
dtype: float64
- name: score_wmt21-comet-qe-da
dtype: float64
- name: score_wmt23-cometkiwi-da-xl
dtype: float64
splits:
- name: train
num_bytes: 291625402
num_examples: 1100000
download_size: 197067897
dataset_size: 291625402
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713014384 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 11330
num_examples: 29
download_size: 9362
dataset_size: 11330
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713014384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shreyasharma/sentence_eval_aa | ---
dataset_info:
features:
- name: declarativized
dtype: string
- name: correct
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 103267
num_examples: 1359
- name: validation
num_bytes: 29118
num_examples: 379
- name: test
num_bytes: 29277
num_examples: 370
download_size: 77770
dataset_size: 161662
---
# Dataset Card for "sentence_eval_aa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mac_morpho | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- part-of-speech
pretty_name: Mac-Morpho
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': PREP+PROADJ
'1': IN
'2': PREP+PRO-KS
'3': NPROP
'4': PREP+PROSUB
'5': KC
'6': PROPESS
'7': NUM
'8': PROADJ
'9': PREP+ART
'10': KS
'11': PRO-KS
'12': ADJ
'13': ADV-KS
'14': N
'15': PREP
'16': PROSUB
'17': PREP+PROPESS
'18': PDEN
'19': V
'20': PREP+ADV
'21': PCP
'22': CUR
'23': ADV
'24': PU
'25': ART
splits:
- name: train
num_bytes: 12635011
num_examples: 37948
- name: test
num_bytes: 3095292
num_examples: 9987
- name: validation
num_bytes: 671356
num_examples: 1997
download_size: 2463485
dataset_size: 16401659
---
# Dataset Card for Mac-Morpho
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Mac-Morpho homepage](http://nilc.icmc.usp.br/macmorpho/)
- **Repository:** [Mac-Morpho repository](http://nilc.icmc.usp.br/macmorpho/)
- **Paper:** [Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese](https://journal-bcs.springeropen.com/articles/10.1186/s13173-014-0020-x)
- **Point of Contact:** [Erick R Fonseca](mailto:erickrfonseca@gmail.com)
### Dataset Summary
Mac-Morpho is a corpus of Brazilian Portuguese texts annotated with part-of-speech tags.
Its first version was released in 2003 [1], and since then, two revisions have been made in order
to improve the quality of the resource [2, 3].
The corpus is available for download split into train, development and test sections.
These are 76%, 4% and 20% of the corpus total, respectively (the reason for the unusual numbers
is that the corpus was first split into 80%/20% train/test, and then 5% of the train section was
set aside for development). This split was used in [3], and new POS tagging research with Mac-Morpho
is encouraged to follow it in order to make consistent comparisons possible.
[1] Aluísio, S., Pelizzoni, J., Marchi, A.R., de Oliveira, L., Manenti, R., Marquiafável, V. 2003.
An account of the challenge of tagging a reference corpus for brazilian portuguese.
In: Proceedings of the 6th International Conference on Computational Processing of the Portuguese Language. PROPOR 2003
[2] Fonseca, E.R., Rosa, J.L.G. 2013. Mac-morpho revisited: Towards robust part-of-speech.
In: Proceedings of the 9th Brazilian Symposium in Information and Human Language Technology – STIL
[3] Fonseca, E.R., Aluísio, Sandra Maria, Rosa, J.L.G. 2015.
Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese.
Journal of the Brazilian Computer Society.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Portuguese
## Dataset Structure
### Data Instances
An example from the Mac-Morpho dataset looks as follows:
```
{
"id": "0",
"pos_tags": [14, 19, 14, 15, 22, 7, 14, 9, 14, 9, 3, 15, 3, 3, 24],
"tokens": ["Jersei", "atinge", "média", "de", "Cr$", "1,4", "milhão", "na", "venda", "da", "Pinhal", "em", "São", "Paulo", "."]
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `pos_tags`: the PoS tags of each token
The PoS tags correspond to this list:
```
"PREP+PROADJ", "IN", "PREP+PRO-KS", "NPROP", "PREP+PROSUB", "KC", "PROPESS", "NUM", "PROADJ", "PREP+ART", "KS",
"PRO-KS", "ADJ", "ADV-KS", "N", "PREP", "PROSUB", "PREP+PROPESS", "PDEN", "V", "PREP+ADV", "PCP", "CUR", "ADV", "PU", "ART"
```
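The integer class labels in `pos_tags` index into the list above. A minimal decoding sketch (the `decode_tags` helper is our own illustration, assuming the index order as listed):

```python
# POS tag names in class-label order, as listed above
POS_TAGS = [
    "PREP+PROADJ", "IN", "PREP+PRO-KS", "NPROP", "PREP+PROSUB", "KC",
    "PROPESS", "NUM", "PROADJ", "PREP+ART", "KS", "PRO-KS", "ADJ",
    "ADV-KS", "N", "PREP", "PROSUB", "PREP+PROPESS", "PDEN", "V",
    "PREP+ADV", "PCP", "CUR", "ADV", "PU", "ART",
]


def decode_tags(tag_ids):
    """Map integer class labels back to their POS tag names."""
    return [POS_TAGS[i] for i in tag_ids]


# The pos_tags of the example instance shown earlier
tags = decode_tags([14, 19, 14, 15, 22, 7, 14, 9, 14, 9, 3, 15, 3, 3, 24])
print(tags)
```

When loading with the `datasets` library, the same mapping is also exposed via the feature's `int2str` method, so hard-coding the list is only needed when working with the raw arrays.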
### Data Splits
The data is split into train, validation and test set. The split sizes are as follow:
| Train | Val | Test |
| ------ | ----- | ----- |
| 37948 | 1997 | 9987 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{fonseca2015evaluating,
title={Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese},
author={Fonseca, Erick R and Rosa, Jo{\~a}o Lu{\'\i}s G and Alu{\'\i}sio, Sandra Maria},
journal={Journal of the Brazilian Computer Society},
volume={21},
number={1},
pages={2},
year={2015},
publisher={Springer}
}
```
### Contributions
Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset. |
ahishamm/QURANICWhisperDataset | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: name
dtype: string
- name: time
dtype: float64
- name: size
dtype: int64
- name: text
dtype: string
- name: hara
dtype: string
splits:
- name: train
num_bytes: 18929917700.788
num_examples: 21829
- name: test
num_bytes: 5921226236.56
num_examples: 9355
download_size: 15874350964
dataset_size: 24851143937.348
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CyberHarem/nitocris_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nitocris/ニトクリス/尼托克丽丝 (Fate/Grand Order)
This is the dataset of nitocris/ニトクリス/尼托克丽丝 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `purple_hair, dark_skin, long_hair, dark-skinned_female, animal_ears, facial_mark, purple_eyes, jackal_ears, very_long_hair, breasts, earrings, hairband, sidelocks, hoop_earrings, medium_breasts, large_breasts`, which are pruned in this dataset.
Images were crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 846.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nitocris_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 722.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nitocris_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1267 | 1.36 GiB | [Download](https://huggingface.co/datasets/CyberHarem/nitocris_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nitocris_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, blush, hetero, 1boy, nipples, jewelry, sex, solo_focus, vaginal, penis, sweat, cum_in_pussy, navel, open_mouth, looking_at_viewer, mosaic_censoring, spread_legs, completely_nude, thighs, collarbone, dark-skinned_male, egyptian_clothes, on_back |
| 1 | 5 |  |  |  |  |  | 1girl, bracelet, egyptian_clothes, looking_at_viewer, navel, smile, solo, blonde_hair, pelvic_curtain, revealing_clothes, two-tone_hair, blush, closed_mouth, holding, staff, simple_background, white_background |
| 2 | 8 |  |  |  |  |  | 1girl, holding, looking_at_viewer, solo, staff, bracelet, egyptian_clothes, navel, facepaint, simple_background, white_background |
| 3 | 7 |  |  |  |  |  | 1girl, egyptian_clothes, facepaint, looking_at_viewer, solo, bracelet, open_mouth, smile, usekh_collar, belly_chain, low-tied_long_hair, holding_staff, navel, sitting, thighs |
| 4 | 8 |  |  |  |  |  | 1girl, solo, white_bikini, bare_shoulders, blush, cleavage, looking_at_viewer, navel, simple_background, tiara, white_background, necklace, ponytail, closed_mouth, sarong, bracelet, collarbone, hair_tubes, sitting, smile, facepaint |
| 5 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, white_one-piece_swimsuit, blush, necklace, collarbone, closed_mouth, competition_swimsuit, covered_navel, armpits, open_mouth |
| 6 | 10 |  |  |  |  |  | 1girl, blue_sky, day, looking_at_viewer, solo, white_one-piece_swimsuit, cloud, necklace, outdoors, closed_mouth, smile, blush, covered_navel, competition_swimsuit, low-tied_long_hair, thighs, armpits, arms_up, beach, cowboy_shot, ocean |
| 7 | 6 |  |  |  |  |  | 1girl, blush, jewelry, looking_at_viewer, navel, nipples, solo, closed_mouth, collarbone, pussy, smile, uncensored, armpits, arms_up, completely_nude, cowboy_shot, facepaint, partially_submerged, water, wet |
| 8 | 12 |  |  |  |  |  | 1girl, solo, smile, elbow_gloves, navel, peaked_cap, white_gloves, red_necktie, skirt, facepaint, looking_at_viewer, alternate_costume, belt, midriff, low-tied_long_hair, open_mouth, bare_shoulders, bracelet, detached_collar, white_headwear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | hetero | 1boy | nipples | jewelry | sex | solo_focus | vaginal | penis | sweat | cum_in_pussy | navel | open_mouth | looking_at_viewer | mosaic_censoring | spread_legs | completely_nude | thighs | collarbone | dark-skinned_male | egyptian_clothes | on_back | bracelet | smile | solo | blonde_hair | pelvic_curtain | revealing_clothes | two-tone_hair | closed_mouth | holding | staff | simple_background | white_background | facepaint | usekh_collar | belly_chain | low-tied_long_hair | holding_staff | sitting | white_bikini | bare_shoulders | cleavage | tiara | necklace | ponytail | sarong | hair_tubes | white_one-piece_swimsuit | competition_swimsuit | covered_navel | armpits | blue_sky | day | cloud | outdoors | arms_up | beach | cowboy_shot | ocean | pussy | uncensored | partially_submerged | water | wet | elbow_gloves | peaked_cap | white_gloves | red_necktie | skirt | alternate_costume | belt | midriff | detached_collar | white_headwear |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------|:-------|:----------|:----------|:------|:-------------|:----------|:--------|:--------|:---------------|:--------|:-------------|:--------------------|:-------------------|:--------------|:------------------|:---------|:-------------|:--------------------|:-------------------|:----------|:-----------|:--------|:-------|:--------------|:-----------------|:--------------------|:----------------|:---------------|:----------|:--------|:--------------------|:-------------------|:------------|:---------------|:--------------|:---------------------|:----------------|:----------|:---------------|:-----------------|:-----------|:--------|:-----------|:-----------|:---------|:-------------|:---------------------------|:-----------------------|:----------------|:----------|:-----------|:------|:--------|:-----------|:----------|:--------|:--------------|:--------|:--------|:-------------|:----------------------|:--------|:------|:---------------|:-------------|:---------------|:--------------|:--------|:--------------------|:-------|:----------|:------------------|:-----------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | | | | | | | | | | X | | X | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | | | | | | | | | | X | | X | | | | | | | X | | X | | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | | | | | | | | | | | X | X | X | | | | X | | | X | | X | X | X | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | | | | | | | | | | | X | | X | | | | | X | | | | X | X | X | | | | | X | | | X | X | X | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | X | | | | | | | | | | | | X | X | | | | | X | | | | | X | X | | | | | X | | | | | | | | | | | | | | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | X | | | | | | | | | | | | | X | | | | X | | | | | | X | X | | | | | X | | | | | | | | X | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | X | | | X | X | | | | | | | X | | X | | | X | | X | | | | | X | X | | | | | X | | | | | X | | | | | | | | | | | | | | | | | X | | | | | X | | X | | X | X | X | X | X | | | | | | | | | | |
| 8 | 12 |  |  |  |  |  | X | | | | | | | | | | | | X | X | X | | | | | | | | | X | X | X | | | | | | | | | | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
bh8648/reports-kor-43 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: page_num
dtype: int64
splits:
- name: train
num_bytes: 14623911
num_examples: 4244
download_size: 7186966
dataset_size: 14623911
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "reports-kor-43"
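The YAML header above declares three features per record: `instruction` and `output` strings plus a `page_num` integer. A minimal sketch of that record shape (the field values below are illustrative placeholders, not taken from the dataset):

```python
# One record shaped like the card's declared schema:
# instruction (string), output (string), page_num (int64).
# The values are made-up placeholders, not real dataset rows.
record = {
    "instruction": "Summarize this page of the report.",
    "output": "The page describes the survey methodology.",
    "page_num": 1,
}

# Basic shape checks matching the declared dtypes.
assert isinstance(record["instruction"], str)
assert isinstance(record["output"], str)
assert isinstance(record["page_num"], int)
print(sorted(record))  # → ['instruction', 'output', 'page_num']
```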
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iampalina/cs482-hw2 | ---
dataset_info:
features:
- name: key
dtype: string
- name: pickup_datetime
dtype: string
- name: pickup_longitude
dtype: float64
- name: pickup_latitude
dtype: float64
- name: dropoff_longitude
dtype: float64
- name: dropoff_latitude
dtype: float64
- name: passenger_count
dtype: int64
splits:
- name: test
num_bytes: 977751
num_examples: 9914
download_size: 520822
dataset_size: 977751
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
open-llm-leaderboard/details_MaziyarPanahi__YamshadowStrangemerges_32_Experiment24Ognoexperiment27 | ---
pretty_name: Evaluation run of MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27](https://huggingface.co/MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MaziyarPanahi__YamshadowStrangemerges_32_Experiment24Ognoexperiment27\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-09T10:28:53.543551](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__YamshadowStrangemerges_32_Experiment24Ognoexperiment27/blob/main/results_2024-04-09T10-28-53.543551.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6508821789914124,\n\
\ \"acc_stderr\": 0.03207251204949206,\n \"acc_norm\": 0.650057066127438,\n\
\ \"acc_norm_stderr\": 0.03274572904790381,\n \"mc1\": 0.6303549571603427,\n\
\ \"mc1_stderr\": 0.016898180706973878,\n \"mc2\": 0.7813193022414375,\n\
\ \"mc2_stderr\": 0.013666530160211392\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7150170648464164,\n \"acc_stderr\": 0.013191348179838795,\n\
\ \"acc_norm\": 0.7337883959044369,\n \"acc_norm_stderr\": 0.012915774781523198\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7171878111929895,\n\
\ \"acc_stderr\": 0.004494454911844619,\n \"acc_norm\": 0.8916550487950607,\n\
\ \"acc_norm_stderr\": 0.003101803574556311\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.041539484047423976,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.041539484047423976\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\
\ \"acc_stderr\": 0.03586879280080341,\n \"acc_norm\": 0.7569444444444444,\n\
\ \"acc_norm_stderr\": 0.03586879280080341\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.56,\n\
\ \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663454,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663454\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.03246956919789958,\n\
\ \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.03246956919789958\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41798941798941797,\n \"acc_stderr\": 0.025402555503260912,\n \"\
acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.025402555503260912\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7838709677419354,\n\
\ \"acc_stderr\": 0.02341529343356853,\n \"acc_norm\": 0.7838709677419354,\n\
\ \"acc_norm_stderr\": 0.02341529343356853\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n\
\ \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\
: 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328973,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328973\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n\
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3074074074074074,\n \"acc_stderr\": 0.028133252578815632,\n \
\ \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.028133252578815632\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.0303883535518868,\n \
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.0303883535518868\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"\
acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"\
acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5138888888888888,\n \"acc_stderr\": 0.03408655867977749,\n \"\
acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.03408655867977749\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8529411764705882,\n \"acc_stderr\": 0.024857478080250447,\n \"\
acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.024857478080250447\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.013586619219903343,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.013586619219903343\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7312138728323699,\n \"acc_stderr\": 0.023868003262500104,\n\
\ \"acc_norm\": 0.7312138728323699,\n \"acc_norm_stderr\": 0.023868003262500104\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4245810055865922,\n\
\ \"acc_stderr\": 0.016531170993278888,\n \"acc_norm\": 0.4245810055865922,\n\
\ \"acc_norm_stderr\": 0.016531170993278888\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.025553169991826524,\n\
\ \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.025553169991826524\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n\
\ \"acc_stderr\": 0.02608270069539966,\n \"acc_norm\": 0.6977491961414791,\n\
\ \"acc_norm_stderr\": 0.02608270069539966\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135114,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135114\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48936170212765956,\n \"acc_stderr\": 0.02982074719142248,\n \
\ \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.02982074719142248\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4726205997392438,\n\
\ \"acc_stderr\": 0.012751075788015057,\n \"acc_norm\": 0.4726205997392438,\n\
\ \"acc_norm_stderr\": 0.012751075788015057\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.02841820861940676,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.02841820861940676\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6764705882352942,\n \"acc_stderr\": 0.018926082916083383,\n \
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.018926082916083383\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.025538433368578334,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.025538433368578334\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6303549571603427,\n\
\ \"mc1_stderr\": 0.016898180706973878,\n \"mc2\": 0.7813193022414375,\n\
\ \"mc2_stderr\": 0.013666530160211392\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8484609313338595,\n \"acc_stderr\": 0.010077698907571776\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6952236542835482,\n \
\ \"acc_stderr\": 0.012679297549515425\n }\n}\n```"
repo_url: https://huggingface.co/MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|arc:challenge|25_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|gsm8k|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hellaswag|10_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T10-28-53.543551.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T10-28-53.543551.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- '**/details_harness|winogrande|5_2024-04-09T10-28-53.543551.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-09T10-28-53.543551.parquet'
- config_name: results
data_files:
- split: 2024_04_09T10_28_53.543551
path:
- results_2024-04-09T10-28-53.543551.parquet
- split: latest
path:
- results_2024-04-09T10-28-53.543551.parquet
---
# Dataset Card for Evaluation run of MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27](https://huggingface.co/MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MaziyarPanahi__YamshadowStrangemerges_32_Experiment24Ognoexperiment27",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-04-09T10:28:53.543551](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__YamshadowStrangemerges_32_Experiment24Ognoexperiment27/blob/main/results_2024-04-09T10-28-53.543551.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the aggregated "results" and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6508821789914124,
"acc_stderr": 0.03207251204949206,
"acc_norm": 0.650057066127438,
"acc_norm_stderr": 0.03274572904790381,
"mc1": 0.6303549571603427,
"mc1_stderr": 0.016898180706973878,
"mc2": 0.7813193022414375,
"mc2_stderr": 0.013666530160211392
},
"harness|arc:challenge|25": {
"acc": 0.7150170648464164,
"acc_stderr": 0.013191348179838795,
"acc_norm": 0.7337883959044369,
"acc_norm_stderr": 0.012915774781523198
},
"harness|hellaswag|10": {
"acc": 0.7171878111929895,
"acc_stderr": 0.004494454911844619,
"acc_norm": 0.8916550487950607,
"acc_norm_stderr": 0.003101803574556311
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047423976,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047423976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.03586879280080341,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.03586879280080341
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.56,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663454,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663454
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5574468085106383,
"acc_stderr": 0.03246956919789958,
"acc_norm": 0.5574468085106383,
"acc_norm_stderr": 0.03246956919789958
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563976,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563976
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3074074074074074,
"acc_stderr": 0.028133252578815632,
"acc_norm": 0.3074074074074074,
"acc_norm_stderr": 0.028133252578815632
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.0303883535518868,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.0303883535518868
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.03958027231121569,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.03958027231121569
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.024857478080250447,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.024857478080250447
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903343,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903343
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7312138728323699,
"acc_stderr": 0.023868003262500104,
"acc_norm": 0.7312138728323699,
"acc_norm_stderr": 0.023868003262500104
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4245810055865922,
"acc_stderr": 0.016531170993278888,
"acc_norm": 0.4245810055865922,
"acc_norm_stderr": 0.016531170993278888
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.025553169991826524,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.025553169991826524
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.02608270069539966,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.02608270069539966
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.024477222856135114,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.024477222856135114
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.02982074719142248,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.02982074719142248
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4726205997392438,
"acc_stderr": 0.012751075788015057,
"acc_norm": 0.4726205997392438,
"acc_norm_stderr": 0.012751075788015057
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.02841820861940676,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.02841820861940676
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.018926082916083383,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.018926082916083383
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578334,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578334
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6303549571603427,
"mc1_stderr": 0.016898180706973878,
"mc2": 0.7813193022414375,
"mc2_stderr": 0.013666530160211392
},
"harness|winogrande|5": {
"acc": 0.8484609313338595,
"acc_stderr": 0.010077698907571776
},
"harness|gsm8k|5": {
"acc": 0.6952236542835482,
"acc_stderr": 0.012679297549515425
}
}
```
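Once a results dict with this structure is in hand (for instance after `json.load`-ing the results file linked above), the per-task scores can be sliced out programmatically. The following is a minimal sketch, assuming only the key layout shown above (`harness|hendrycksTest-<subtask>|5` entries each holding an `acc` field); the toy `results` dict here is an abbreviated stand-in, not the full file:

```python
# Abbreviated stand-in for the results dict shown above (assumed structure).
results = {
    "all": {"acc": 0.6508821789914124},
    "harness|arc:challenge|25": {"acc": 0.7150170648464164},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.31},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6370370370370371},
}

# Collect the accuracy of every MMLU (hendrycksTest) subtask,
# keyed by the subtask name between the "-" and the final "|".
mmlu_accs = {
    task.split("-", 1)[1].split("|")[0]: scores["acc"]
    for task, scores in results.items()
    if task.startswith("harness|hendrycksTest-")
}
mmlu_avg = sum(mmlu_accs.values()) / len(mmlu_accs)
print(f"{len(mmlu_accs)} MMLU subtasks, mean acc = {mmlu_avg:.4f}")
# → 2 MMLU subtasks, mean acc = 0.4735
```

Run against the full results file, the same comprehension would pick up all 57 `hendrycksTest` subtasks.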
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
heroza/isic_dummy | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': combined
'1': seb
splits:
- name: train
num_bytes: 210629176.0
num_examples: 150
- name: validation
num_bytes: 210629176.0
num_examples: 150
- name: test
num_bytes: 210629176.0
num_examples: 150
download_size: 631873878
dataset_size: 631887528.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
GeekOfBohemia/llm-lingo | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: start_time
dtype: string
- name: end_time
dtype: string
splits:
- name: train
num_bytes: 290856.0
num_examples: 2
- name: validation
num_bytes: 265865.0
num_examples: 2
download_size: 564804
dataset_size: 556721.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
liuyanchen1015/MULTI_VALUE_sst2_one_relativizer | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 76495
num_examples: 531
- name: test
num_bytes: 154872
num_examples: 1090
- name: train
num_bytes: 1510295
num_examples: 13051
download_size: 1051359
dataset_size: 1741662
---
# Dataset Card for "MULTI_VALUE_sst2_one_relativizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
itsadeel/mobile-ner-dataset | ---
license: apache-2.0
---
|
hromi/winograd_dpo | ---
license: gpl-3.0
---
|
medmac01/argilla-dpo-mix-7k-arabic | ---
language:
- ar
license: mit
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: dataset
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_rating
dtype: float64
- name: rejected_rating
dtype: float64
splits:
- name: test
num_bytes: 6991078
num_examples: 750
- name: train
num_bytes: 62886912
num_examples: 6750
download_size: 30613280
dataset_size: 69877990
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
tags:
- synthetic
- dpo
- distilabel
---
|