Barishni-blinchik/Floppa-dataset-small-v2 | ---
license: apache-2.0
---
# Dataset full name: Small Lynx Dataset
**Number of images**: 151 photos of lynxes
**Description**: The dataset contains 151 images of lynxes in various sizes and poses, captured both in the wild and in captivity. Image quality varies by source. The photographs show lynxes from different angles and document their coloring and distinctive features.
**Data sources**: The dataset was collected from open sources, including images from various online resources as well as photographs provided by users.
**Purpose**: This dataset is intended for training neural networks and computer vision algorithms on lynx classification and recognition tasks.
**Note**: Photos vary in pose, lighting, and background, making this a diverse dataset for model training. |
edbeeching/godot_rl_VirtualCamera | ---
library_name: godot-rl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- godot-rl
- environments
- video-games
---
An RL environment called VirtualCamera for the Godot Game Engine.
This environment was created with: https://github.com/edbeeching/godot_rl_agents
## Downloading the environment
After installing Godot RL Agents, download the environment with:
```bash
gdrl.env_from_hub -r edbeeching/godot_rl_VirtualCamera
```
|
CyberHarem/hiiragi_shino_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hiiragi_shino/柊志乃 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of hiiragi_shino/柊志乃 (THE iDOLM@STER: Cinderella Girls), containing 57 images and their tags.
The core tags of this character are `long_hair, black_hair, breasts, brown_eyes, large_breasts, drinking_glass, wine_glass`; these core tags have been pruned from the per-image tag lists in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 57 | 52.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiiragi_shino_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 57 | 39.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiiragi_shino_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 116 | 72.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiiragi_shino_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 57 | 49.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiiragi_shino_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 116 | 87.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiiragi_shino_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file from the Hugging Face Hub
zip_file = hf_hub_download(
    repo_id='CyberHarem/hiiragi_shino_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
The tables below list the tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------|
| 0 | 57 |  |  |  |  |  | 1girl, solo, blush, smile, looking_at_viewer, cleavage, cup, bare_shoulders, alcohol, necklace |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blush | smile | looking_at_viewer | cleavage | cup | bare_shoulders | alcohol | necklace |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------|:--------------------|:-----------|:------|:-----------------|:----------|:-----------|
| 0 | 57 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X |
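The "X" marks in the table above record whether a tag is shared across a cluster's images. As a hypothetical illustration (this is not the actual DeepGHS clustering pipeline, and the sample tag lists below are made up), such a table could be derived from per-image tag lists like this:

```python
# Illustrative sketch only: marking which of the most frequent tags are
# shared by every image in a cluster. Sample data is hypothetical.
from collections import Counter

def tag_table(images_tags, top_n=5):
    """Return {tag: 'X' or ''} for the top_n most frequent tags."""
    counts = Counter(tag for tags in images_tags for tag in tags)
    top_tags = [tag for tag, _ in counts.most_common(top_n)]
    # 'X' if every image in the cluster carries the tag, '' otherwise
    return {tag: 'X' if all(tag in tags for tags in images_tags) else ''
            for tag in top_tags}

cluster = [
    {'1girl', 'solo', 'blush', 'smile'},
    {'1girl', 'solo', 'blush', 'cup'},
    {'1girl', 'solo', 'smile', 'cup'},
]
print(tag_table(cluster))
```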
|
Brian-M-Collins/generic_review_detection | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2659807
num_examples: 3464
- name: test
num_bytes: 2168768
num_examples: 1583
download_size: 0
dataset_size: 4828575
---
# Dataset Card for "generic_review_detection"
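As a quick sanity check (illustrative only, with the numbers copied from the YAML header above), the declared `dataset_size` should equal the sum of the per-split byte counts:

```python
# Split byte counts and dataset_size as declared in this card's YAML header.
splits = {'train': 2_659_807, 'test': 2_168_768}
dataset_size = 4_828_575

# The declared dataset_size should equal the sum of the split byte counts.
assert sum(splits.values()) == dataset_size
print('split byte counts sum to dataset_size:', dataset_size)
```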
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_macadeliccc__WestLake-7B-v2-laser-truthy-dpo | ---
pretty_name: Evaluation run of macadeliccc/WestLake-7B-v2-laser-truthy-dpo
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_macadeliccc__WestLake-7B-v2-laser-truthy-dpo\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-27T23:12:46.966500](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__WestLake-7B-v2-laser-truthy-dpo/blob/main/results_2024-01-27T23-12-46.966500.json) (note
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6547174412221013,\n\
\ \"acc_stderr\": 0.03212700538885748,\n \"acc_norm\": 0.6539982424973038,\n\
\ \"acc_norm_stderr\": 0.03280741611061634,\n \"mc1\": 0.5605875152998776,\n\
\ \"mc1_stderr\": 0.017374520482513697,\n \"mc2\": 0.698081758589422,\n\
\ \"mc2_stderr\": 0.014987046174086506\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7141638225255973,\n \"acc_stderr\": 0.013203196088537372,\n\
\ \"acc_norm\": 0.7389078498293515,\n \"acc_norm_stderr\": 0.012835523909473835\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7168890659231228,\n\
\ \"acc_stderr\": 0.004495891440519419,\n \"acc_norm\": 0.8884684325831508,\n\
\ \"acc_norm_stderr\": 0.0031414591751392734\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6592592592592592,\n\
\ \"acc_stderr\": 0.040943762699967926,\n \"acc_norm\": 0.6592592592592592,\n\
\ \"acc_norm_stderr\": 0.040943762699967926\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4470899470899471,\n \"acc_stderr\": 0.025606723995777025,\n \"\
acc_norm\": 0.4470899470899471,\n \"acc_norm_stderr\": 0.025606723995777025\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.49206349206349204,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.49206349206349204,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7741935483870968,\n \"acc_stderr\": 0.023785577884181012,\n \"\
acc_norm\": 0.7741935483870968,\n \"acc_norm_stderr\": 0.023785577884181012\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"\
acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.031922715695483016,\n\
\ \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.031922715695483016\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8181818181818182,\n \"acc_stderr\": 0.027479603010538797,\n \"\
acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.027479603010538797\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.658974358974359,\n \"acc_stderr\": 0.024035489676335075,\n \
\ \"acc_norm\": 0.658974358974359,\n \"acc_norm_stderr\": 0.024035489676335075\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465066,\n \
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465066\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.0302839955258844,\n \
\ \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.0302839955258844\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"\
acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8403669724770643,\n \"acc_stderr\": 0.015703498348461783,\n \"\
acc_norm\": 0.8403669724770643,\n \"acc_norm_stderr\": 0.015703498348461783\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5046296296296297,\n \"acc_stderr\": 0.03409825519163572,\n \"\
acc_norm\": 0.5046296296296297,\n \"acc_norm_stderr\": 0.03409825519163572\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"\
acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7890295358649789,\n \"acc_stderr\": 0.026558372502661916,\n \
\ \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.026558372502661916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.04236511258094632,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.04236511258094632\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n\
\ \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092368,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092368\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.013586619219903343,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.013586619219903343\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7341040462427746,\n \"acc_stderr\": 0.02378620325550829,\n\
\ \"acc_norm\": 0.7341040462427746,\n \"acc_norm_stderr\": 0.02378620325550829\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.44581005586592176,\n\
\ \"acc_stderr\": 0.016623998513333103,\n \"acc_norm\": 0.44581005586592176,\n\
\ \"acc_norm_stderr\": 0.016623998513333103\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.025553169991826524,\n\
\ \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.025553169991826524\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\
\ \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n\
\ \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7530864197530864,\n \"acc_stderr\": 0.023993501709042103,\n\
\ \"acc_norm\": 0.7530864197530864,\n \"acc_norm_stderr\": 0.023993501709042103\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4661016949152542,\n\
\ \"acc_stderr\": 0.01274085387294983,\n \"acc_norm\": 0.4661016949152542,\n\
\ \"acc_norm_stderr\": 0.01274085387294983\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.02850145286039655,\n\
\ \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.02850145286039655\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495148,\n \
\ \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495148\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\
\ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5605875152998776,\n\
\ \"mc1_stderr\": 0.017374520482513697,\n \"mc2\": 0.698081758589422,\n\
\ \"mc2_stderr\": 0.014987046174086506\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8666140489344909,\n \"acc_stderr\": 0.00955544802642297\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6815769522365428,\n \
\ \"acc_stderr\": 0.012832225723075408\n }\n}\n```"
repo_url: https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|arc:challenge|25_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|gsm8k|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hellaswag|10_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-27T23-12-46.966500.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-27T23-12-46.966500.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- '**/details_harness|winogrande|5_2024-01-27T23-12-46.966500.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-27T23-12-46.966500.parquet'
- config_name: results
data_files:
- split: 2024_01_27T23_12_46.966500
path:
- results_2024-01-27T23-12-46.966500.parquet
- split: latest
path:
- results_2024-01-27T23-12-46.966500.parquet
---
# Dataset Card for Evaluation run of macadeliccc/WestLake-7B-v2-laser-truthy-dpo
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_macadeliccc__WestLake-7B-v2-laser-truthy-dpo",
"harness_winogrande_5",
	split="latest")
```
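Each per-run split name in this card is derived from the run timestamp, with the separators replaced by underscores. As a minimal sketch (this `split_name` helper is hypothetical, not part of the `datasets` API), the mapping looks like:

```python
def split_name(ts: str) -> str:
    """Derive the per-run split name used in this card from an
    ISO-style run timestamp: dashes and colons become underscores."""
    return ts.replace("-", "_").replace(":", "_")

print(split_name("2024-01-27T23:12:46.966500"))
# → "2024_01_27T23_12_46.966500"
```

This matches the split names that appear under `data_files` in the YAML header above (e.g. `2024_01_27T23_12_46.966500`).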
## Latest results
These are the [latest results from run 2024-01-27T23:12:46.966500](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__WestLake-7B-v2-laser-truthy-dpo/blob/main/results_2024-01-27T23-12-46.966500.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each per-task configuration):
```python
{
"all": {
"acc": 0.6547174412221013,
"acc_stderr": 0.03212700538885748,
"acc_norm": 0.6539982424973038,
"acc_norm_stderr": 0.03280741611061634,
"mc1": 0.5605875152998776,
"mc1_stderr": 0.017374520482513697,
"mc2": 0.698081758589422,
"mc2_stderr": 0.014987046174086506
},
"harness|arc:challenge|25": {
"acc": 0.7141638225255973,
"acc_stderr": 0.013203196088537372,
"acc_norm": 0.7389078498293515,
"acc_norm_stderr": 0.012835523909473835
},
"harness|hellaswag|10": {
"acc": 0.7168890659231228,
"acc_stderr": 0.004495891440519419,
"acc_norm": 0.8884684325831508,
"acc_norm_stderr": 0.0031414591751392734
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6592592592592592,
"acc_stderr": 0.040943762699967926,
"acc_norm": 0.6592592592592592,
"acc_norm_stderr": 0.040943762699967926
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4470899470899471,
"acc_stderr": 0.025606723995777025,
"acc_norm": 0.4470899470899471,
"acc_norm_stderr": 0.025606723995777025
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181012,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181012
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.031922715695483016,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.031922715695483016
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.027479603010538797,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.027479603010538797
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.658974358974359,
"acc_stderr": 0.024035489676335075,
"acc_norm": 0.658974358974359,
"acc_norm_stderr": 0.024035489676335075
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465066,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465066
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.0302839955258844,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.0302839955258844
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.03958027231121569,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.03958027231121569
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8403669724770643,
"acc_stderr": 0.015703498348461783,
"acc_norm": 0.8403669724770643,
"acc_norm_stderr": 0.015703498348461783
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5046296296296297,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.5046296296296297,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.026558372502661916,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.026558372502661916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094632,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094632
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092368,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092368
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903343,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903343
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7341040462427746,
"acc_stderr": 0.02378620325550829,
"acc_norm": 0.7341040462427746,
"acc_norm_stderr": 0.02378620325550829
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.44581005586592176,
"acc_stderr": 0.016623998513333103,
"acc_norm": 0.44581005586592176,
"acc_norm_stderr": 0.016623998513333103
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.025553169991826524,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.025553169991826524
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.02600330111788514,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.02600330111788514
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.023993501709042103,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.023993501709042103
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4661016949152542,
"acc_stderr": 0.01274085387294983,
"acc_norm": 0.4661016949152542,
"acc_norm_stderr": 0.01274085387294983
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6727941176470589,
"acc_stderr": 0.02850145286039655,
"acc_norm": 0.6727941176470589,
"acc_norm_stderr": 0.02850145286039655
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495148,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495148
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5605875152998776,
"mc1_stderr": 0.017374520482513697,
"mc2": 0.698081758589422,
"mc2_stderr": 0.014987046174086506
},
"harness|winogrande|5": {
"acc": 0.8666140489344909,
"acc_stderr": 0.00955544802642297
},
"harness|gsm8k|5": {
"acc": 0.6815769522365428,
"acc_stderr": 0.012832225723075408
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
urialon/converted_qmsum | ---
dataset_info:
features:
- name: id
dtype: string
- name: pid
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 70168352
num_examples: 1257
- name: validation
num_bytes: 15955428
num_examples: 272
- name: test
num_bytes: 16408856
num_examples: 281
download_size: 42693177
dataset_size: 102532636
---
# Dataset Card for "converted_qmsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
harshita23sh/us-financial-data-transformation | ---
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- text-generation
tags:
- finance
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: entities
list:
- name: entity
dtype: string
- name: entity name
dtype: string
- name: sentiment
dtype: string
splits:
- name: train
num_bytes: 217704861
num_examples: 63147
download_size: 118686420
dataset_size: 217704861
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FelixdoingAI/IP2P-adwm-200 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
- name: adversarial_image
dtype: image
splits:
- name: train
num_bytes: 128123657.0
num_examples: 200
download_size: 128127660
dataset_size: 128123657.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "IP2P-adwm-200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rwitz/filtered_pajama | ---
license: mit
dataset_info:
features:
- name: raw_content
dtype: string
- name: doc_id
dtype: string
- name: meta
dtype: string
- name: quality_signals
dtype: string
splits:
- name: train
num_bytes: 253776078
num_examples: 8200
download_size: 125603748
dataset_size: 253776078
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DarqueDante/zed | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: text_token_length
dtype: int64
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 8764212493
num_examples: 1949895
download_size: 4436537749
dataset_size: 8764212493
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wuliangfo/Chinese-Pixiv-Novel | ---
license: openrail
---
This is an R-18 (including R-18G) Simplified Chinese novel dataset collected from the Pixiv website.
It contains 145,163 novels in total; the data runs through 7 PM Beijing time on September 12, 2023.
Files are stored as `Pixiv/userID/ID.txt` for the novel body text and `Pixiv/userID/ID-meta.txt` for additional information (including tag, title, Description, etc.).
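A minimal sketch (paths and names are illustrative, not part of the dataset itself) of walking the `Pixiv/userID/ID.txt` layout and pairing each novel body file with its `ID-meta.txt` metadata file:

```python
from pathlib import Path

def iter_novels(root):
    """Yield (body_path, meta_path) pairs for the Pixiv/userID/ID.txt layout.

    meta_path is None when no matching ID-meta.txt file exists.
    """
    for body in sorted(Path(root).glob("*/*.txt")):
        if body.stem.endswith("-meta"):
            continue  # skip the metadata files themselves
        meta = body.with_name(body.stem + "-meta.txt")
        yield body, meta if meta.exists() else None
```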
The data has not been cleaned and may contain low-quality content. |
atmallen/qm_bob_grader_last_1.0e | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 14970044.0
num_examples: 200000
- name: validation
num_bytes: 1501418.0
num_examples: 20000
- name: test
num_bytes: 1502170.0
num_examples: 20000
download_size: 0
dataset_size: 17973632.0
---
# Dataset Card for "qm_bob__grader_last_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sree1994/babylm_childstories | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1464755
num_examples: 4800
- name: test
num_bytes: 369959
num_examples: 1200
download_size: 1174215
dataset_size: 1834714
---
# Dataset Card for "babylm_childstories"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mkja/erwq | ---
license: afl-3.0
---
|
coastalcph/mpararel_autorr | ---
dataset_info:
features:
- name: id
dtype: string
- name: language
dtype: string
- name: relation
dtype: string
- name: template
dtype: string
- name: template_id
dtype: int64
- name: query
dtype: string
- name: sub_uri
dtype: string
- name: obj_uri
dtype: string
- name: obj_label
sequence: string
- name: sub_label
dtype: string
- name: lineid
dtype: int64
splits:
- name: train
num_bytes: 595934494.9203491
num_examples: 2921206
download_size: 90180483
dataset_size: 595934494.9203491
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
marktrovinger/cartpole_gym_replay | ---
license: mit
---
|
ryanwible/openassistant-guanaco-prompt-reformatted-v2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15401731
num_examples: 9846
download_size: 8983165
dataset_size: 15401731
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
growth-cadet/jobpost_evalcrit_treshold80 | ---
dataset_info:
features:
- name: ats
dtype: string
- name: context
dtype: string
- name: sys5_obj
struct:
- name: focus_areas
list:
- name: description
dtype: string
- name: subject
dtype: string
- name: industries
list:
- name: description
dtype: string
- name: subject
dtype: string
- name: products_and_technologies
list:
- name: description
dtype: string
- name: subject
dtype: string
- name: eval_crit
struct:
- name: focus_areas
dtype: float64
- name: industries
dtype: float64
- name: products_and_technologies
dtype: float64
- name: eval_values
struct:
- name: focus_areas
sequence: int64
- name: industries
sequence: int64
- name: products_and_technologies
sequence: int64
- name: uuid
dtype: string
splits:
- name: train
num_bytes: 23886891.286948327
num_examples: 4138
- name: test
num_bytes: 12867056.713051673
num_examples: 2229
download_size: 39525537
dataset_size: 36753948.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
upaya07/NeurIPS-LLM-data | ---
configs:
- config_name: default
data_files:
- split: train
path: train_dataset.json
- split: test
path: eval_dataset.json
license: mit
---
- 🤖 We curated this dataset for [**NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day**](https://llm-efficiency-challenge.github.io/). <br>
- 🚀 Our [**Birbal-7B-V1**](https://huggingface.co/upaya07/Birbal-7B-V1) model, fine-tuned on this dataset, achieved 🏆 first rank 🏆 in the competition.
Here is a high-level diagram of our data preparation strategy:

# Natural Instructions Dataset Preparation
The [Natural Instructions](https://github.com/allenai/natural-instructions) dataset is a community effort to create a large collection of tasks and their natural-language definitions/instructions. As shown in the diagram above, we sample from the Natural Instructions dataset. Here is the 4-step process:
- Out of 1600+ task files, we first manually select ~450 task files relevant to the competition. **We do not use any MMLU or translation tasks.**
- A task output in the Natural Instructions dataset is expected to be either an exact match or an open-ended generation. Hence, we manually annotate each task file as one of two categories: Exact Match or Generation.
- We run few-shot inference on the selected task files. Running few-shot inference helps with controlled generation, so we can compute a model performance metric quantitatively. Refer to Input and Output Schema for Mistral Inference for an example.
- For Exact Match tasks, we use accuracy as the metric.
- For Generation tasks, we use the Rouge score as the performance metric.
- Sampling logic: we sample ~50k examples from Generation tasks and ~50k examples from Exact Match tasks, making a total of ~100k instances from the Natural Instructions dataset.
- For Exact Match tasks: the % of examples sampled from a task file depends on the accuracy of that task. In general, we sample more from low-accuracy tasks and less from high-accuracy tasks. A total of ~50k examples is sampled from Exact Match task files.
- For Generation tasks: the % of examples sampled from a task file depends on the Rouge score for that task. In general, we sample more from tasks with low Rouge scores. A total of ~50k examples is sampled from Generation task files.
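The inverse-performance sampling described above can be sketched as follows; the task names, scores, and budget are illustrative, not the actual competition values:

```python
# Illustrative sketch of sampling more from weak tasks and less from
# strong ones. Scores stand in for per-task accuracy (Exact Match tasks)
# or Rouge (Generation tasks).
task_scores = {"task_a": 0.9, "task_b": 0.4, "task_c": 0.6}

def sampling_weights(scores):
    """Weight each task inversely to its score so that harder tasks
    (low accuracy / low Rouge) contribute more examples."""
    inverse = {task: 1.0 - score for task, score in scores.items()}
    total = sum(inverse.values())
    return {task: w / total for task, w in inverse.items()}

weights = sampling_weights(task_scores)
budget = 50_000  # examples per category (Exact Match or Generation)
allocation = {task: round(budget * w) for task, w in weights.items()}
# The weakest task (task_b) receives the largest share of the budget.
```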
## Input and Output Schema for Mistral Inference
A record from a Natural Instructions task file is converted into the format below. The `orig_input` field is the actual input without few-shot examples. The `few_shot_prompt` field contains the few-shot prompt that is passed to the Mistral-7B model for prediction. `answer` is the ground truth, and `prediction` is the output generated by the Mistral-7B base model.
```
{
"orig_input": "Context: I sold my $90,000.00 Mercedes G500 and bought 3 Prius's, because I got tired of being pulled over by Police. #Adapt @chrisrock\u2014 Isaiah Washington (@IWashington) April 1, 2015 Question: how many prius's did they buy? Answer: three",
"few_shot_prompt": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIn this task, you are given a context tweet, a question and corresponding answer of given question. Your task is to classify this question-answer pair into two categories: (1) \"yes\" if the given answer is right for question, and (2) \"no\" if the given answer is wrong for question.\n\n### Input:\nContext: Our prayers are with the students, educators & families at Independence High School & all the first responders on the scene. #PatriotPride\u2014 Doug Ducey (@dougducey) February 12, 2016 Question: at which school were first responders on the scene for? Answer: arizona high school\n\n### Response:\nno\n\n### Input:\nContext: @williebosshog huge love to you/your family huge respect for your business prosperities and the family values you still all behold. big fan\u2014 Liam Payne (@Real_Liam_Payne) January 18, 2014 Question: what was liam showing towards willy? Answer: huge respect\n\n### Response:\nyes\n\n### Input:\nContext: @williebosshog huge love to you/your family huge respect for your business prosperities and the family values you still all behold. big fan\u2014 Liam Payne (@Real_Liam_Payne) January 18, 2014 Question: what was liam showing towards willy? Answer: jealousy\n\n### Response:\nno\n\n### Input:\nContext: Our prayers are with the students, educators & families at Independence High School & all the first responders on the scene. #PatriotPride\u2014 Doug Ducey (@dougducey) February 12, 2016 Question: at which school were first responders on the scene for? Answer: independence high school\n\n### Response:\nyes\n\n### Input:\nContext: I sold my $90,000.00 Mercedes G500 and bought 3 Prius's, because I got tired of being pulled over by Police. #Adapt @chrisrock\u2014 Isaiah Washington (@IWashington) April 1, 2015 Question: how many prius's did they buy? Answer: three\n\n### Response:\n",
"answer": [
"yes"
],
"prediction": "yes\n\n### Input:\nContext: I sold my $90,000.00 Mercedes G500 and bought 3 Pri"
}
```
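The `few_shot_prompt` field above follows an Alpaca-style template. Here is a sketch of how such a prompt can be assembled from an instruction, a few input/response example pairs, and the final input; the function and variable names are our own illustration, not the actual pipeline code:

```python
# Header text as it appears in the few_shot_prompt example above.
HEADER = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
)

def build_prompt(instruction, shots, final_input):
    """Assemble an Alpaca-style few-shot prompt.

    shots: (input_text, response_text) pairs shown as worked examples;
    the final input is left with an empty Response section for the model
    to complete.
    """
    parts = [HEADER, f"### Instruction:\n{instruction}\n"]
    for inp, resp in shots:
        parts.append(f"\n### Input:\n{inp}\n\n### Response:\n{resp}\n")
    parts.append(f"\n### Input:\n{final_input}\n\n### Response:\n")
    return "".join(parts)
```

The model's prediction is then read off after the final `### Response:` marker.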
**Github Repo**: https://github.com/Upaya07/NeurIPS-llm-efficiency-challenge |
Rosenberg/CMeEE-V2 | ---
license: mit
---
|
MentalityDark/lulu | ---
license: openrail
---
|
open-llm-leaderboard/details_Cedaros__BetaMonarch-10.7B | ---
pretty_name: Evaluation run of Cedaros/BetaMonarch-10.7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Cedaros/BetaMonarch-10.7B](https://huggingface.co/Cedaros/BetaMonarch-10.7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Cedaros__BetaMonarch-10.7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-23T00:21:49.546083](https://huggingface.co/datasets/open-llm-leaderboard/details_Cedaros__BetaMonarch-10.7B/blob/main/results_2024-03-23T00-21-49.546083.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6472524979334344,\n\
\ \"acc_stderr\": 0.03220375404197512,\n \"acc_norm\": 0.649196443721123,\n\
\ \"acc_norm_stderr\": 0.03285956102418741,\n \"mc1\": 0.6083231334149327,\n\
\ \"mc1_stderr\": 0.017087795881769646,\n \"mc2\": 0.7685134164142694,\n\
\ \"mc2_stderr\": 0.013978190829507478\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7158703071672355,\n \"acc_stderr\": 0.013179442447653886,\n\
\ \"acc_norm\": 0.726962457337884,\n \"acc_norm_stderr\": 0.013019332762635746\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7066321449910377,\n\
\ \"acc_stderr\": 0.0045437504800657745,\n \"acc_norm\": 0.8836885082652858,\n\
\ \"acc_norm_stderr\": 0.003199428675985866\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\
\ \"acc_stderr\": 0.04188307537595852,\n \"acc_norm\": 0.6222222222222222,\n\
\ \"acc_norm_stderr\": 0.04188307537595852\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.027834912527544057,\n\
\ \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.027834912527544057\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n\
\ \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n\
\ \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"\
acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6473988439306358,\n\
\ \"acc_stderr\": 0.03643037168958548,\n \"acc_norm\": 0.6473988439306358,\n\
\ \"acc_norm_stderr\": 0.03643037168958548\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n\
\ \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.548936170212766,\n \"acc_stderr\": 0.032529096196131965,\n\
\ \"acc_norm\": 0.548936170212766,\n \"acc_norm_stderr\": 0.032529096196131965\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.04697085136647863,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.04697085136647863\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n\
\ \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41798941798941797,\n \"acc_stderr\": 0.025402555503260912,\n \"\
acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.025402555503260912\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n\
\ \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n\
\ \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7935483870967742,\n\
\ \"acc_stderr\": 0.023025899617188712,\n \"acc_norm\": 0.7935483870967742,\n\
\ \"acc_norm_stderr\": 0.023025899617188712\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n\
\ \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768763,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768763\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6435897435897436,\n \"acc_stderr\": 0.024283140529467305,\n\
\ \"acc_norm\": 0.6435897435897436,\n \"acc_norm_stderr\": 0.024283140529467305\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547308,\n \
\ \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547308\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.030066761582977934,\n\
\ \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.030066761582977934\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.39072847682119205,\n \"acc_stderr\": 0.039837983066598075,\n \"\
acc_norm\": 0.39072847682119205,\n \"acc_norm_stderr\": 0.039837983066598075\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"\
acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5370370370370371,\n \"acc_stderr\": 0.03400603625538272,\n \"\
acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.03400603625538272\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8284313725490197,\n \"acc_stderr\": 0.026460569561240644,\n \"\
acc_norm\": 0.8284313725490197,\n \"acc_norm_stderr\": 0.026460569561240644\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7932489451476793,\n \"acc_stderr\": 0.0263616516683891,\n \
\ \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.0263616516683891\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8244274809160306,\n \"acc_stderr\": 0.03336820338476072,\n\
\ \"acc_norm\": 0.8244274809160306,\n \"acc_norm_stderr\": 0.03336820338476072\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.038498560987940904,\n \"\
acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.038498560987940904\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\
\ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\
\ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.02158649400128137,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.02158649400128137\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8148148148148148,\n\
\ \"acc_stderr\": 0.013890862162876168,\n \"acc_norm\": 0.8148148148148148,\n\
\ \"acc_norm_stderr\": 0.013890862162876168\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7138728323699421,\n \"acc_stderr\": 0.024332146779134124,\n\
\ \"acc_norm\": 0.7138728323699421,\n \"acc_norm_stderr\": 0.024332146779134124\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4134078212290503,\n\
\ \"acc_stderr\": 0.01646981492840617,\n \"acc_norm\": 0.4134078212290503,\n\
\ \"acc_norm_stderr\": 0.01646981492840617\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7091503267973857,\n \"acc_stderr\": 0.02600480036395213,\n\
\ \"acc_norm\": 0.7091503267973857,\n \"acc_norm_stderr\": 0.02600480036395213\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n\
\ \"acc_stderr\": 0.026160584450140446,\n \"acc_norm\": 0.6945337620578779,\n\
\ \"acc_norm_stderr\": 0.026160584450140446\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7067901234567902,\n \"acc_stderr\": 0.025329888171900922,\n\
\ \"acc_norm\": 0.7067901234567902,\n \"acc_norm_stderr\": 0.025329888171900922\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4667535853976532,\n\
\ \"acc_stderr\": 0.012741974333897229,\n \"acc_norm\": 0.4667535853976532,\n\
\ \"acc_norm_stderr\": 0.012741974333897229\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \
\ \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6683006535947712,\n \"acc_stderr\": 0.019047485239360378,\n \
\ \"acc_norm\": 0.6683006535947712,\n \"acc_norm_stderr\": 0.019047485239360378\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.028920583220675585,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.028920583220675585\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.025870646766169146,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.025870646766169146\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197771,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197771\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6083231334149327,\n\
\ \"mc1_stderr\": 0.017087795881769646,\n \"mc2\": 0.7685134164142694,\n\
\ \"mc2_stderr\": 0.013978190829507478\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8334648776637726,\n \"acc_stderr\": 0.010470796496781096\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5344958301743745,\n \
\ \"acc_stderr\": 0.013739668147545915\n }\n}\n```"
repo_url: https://huggingface.co/Cedaros/BetaMonarch-10.7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|arc:challenge|25_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|gsm8k|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hellaswag|10_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T00-21-49.546083.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-23T00-21-49.546083.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- '**/details_harness|winogrande|5_2024-03-23T00-21-49.546083.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-23T00-21-49.546083.parquet'
- config_name: results
data_files:
- split: 2024_03_23T00_21_49.546083
path:
- results_2024-03-23T00-21-49.546083.parquet
- split: latest
path:
- results_2024-03-23T00-21-49.546083.parquet
---
# Dataset Card for Evaluation run of Cedaros/BetaMonarch-10.7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Cedaros/BetaMonarch-10.7B](https://huggingface.co/Cedaros/BetaMonarch-10.7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Cedaros__BetaMonarch-10.7B",
"harness_winogrande_5",
	split="latest")
```
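Because each run's split is named after its timestamp, the most recent run can also be recovered by sorting the split names directly. A minimal sketch (the split name below is taken from this card's configs; the format string matches that naming pattern):

```python
from datetime import datetime

def latest_timestamp_split(split_names):
    """Return the most recent timestamp-named split.

    Split names follow the pattern used in this card's configs,
    e.g. '2024_03_23T00_21_49.546083'; the 'latest' alias is skipped.
    """
    stamps = [s for s in split_names if s != "latest"]
    # Parse each name as a datetime and take the maximum.
    return max(stamps, key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"))

print(latest_timestamp_split(["2024_03_23T00_21_49.546083", "latest"]))
# → 2024_03_23T00_21_49.546083
```

In practice the "latest" split alias makes this unnecessary, but the helper is useful when comparing several runs of the same task.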
## Latest results
These are the [latest results from run 2024-03-23T00:21:49.546083](https://huggingface.co/datasets/open-llm-leaderboard/details_Cedaros__BetaMonarch-10.7B/blob/main/results_2024-03-23T00-21-49.546083.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6472524979334344,
"acc_stderr": 0.03220375404197512,
"acc_norm": 0.649196443721123,
"acc_norm_stderr": 0.03285956102418741,
"mc1": 0.6083231334149327,
"mc1_stderr": 0.017087795881769646,
"mc2": 0.7685134164142694,
"mc2_stderr": 0.013978190829507478
},
"harness|arc:challenge|25": {
"acc": 0.7158703071672355,
"acc_stderr": 0.013179442447653886,
"acc_norm": 0.726962457337884,
"acc_norm_stderr": 0.013019332762635746
},
"harness|hellaswag|10": {
"acc": 0.7066321449910377,
"acc_stderr": 0.0045437504800657745,
"acc_norm": 0.8836885082652858,
"acc_norm_stderr": 0.003199428675985866
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595852,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595852
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.027834912527544057,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.027834912527544057
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6473988439306358,
"acc_stderr": 0.03643037168958548,
"acc_norm": 0.6473988439306358,
"acc_norm_stderr": 0.03643037168958548
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.548936170212766,
"acc_stderr": 0.032529096196131965,
"acc_norm": 0.548936170212766,
"acc_norm_stderr": 0.032529096196131965
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.04697085136647863,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.04697085136647863
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.023025899617188712,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.023025899617188712
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768763,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768763
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6435897435897436,
"acc_stderr": 0.024283140529467305,
"acc_norm": 0.6435897435897436,
"acc_norm_stderr": 0.024283140529467305
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.02857834836547308,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.02857834836547308
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.030066761582977934,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.030066761582977934
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.39072847682119205,
"acc_stderr": 0.039837983066598075,
"acc_norm": 0.39072847682119205,
"acc_norm_stderr": 0.039837983066598075
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.01599015488507338,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.01599015488507338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.03400603625538272,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.03400603625538272
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8284313725490197,
"acc_stderr": 0.026460569561240644,
"acc_norm": 0.8284313725490197,
"acc_norm_stderr": 0.026460569561240644
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7932489451476793,
"acc_stderr": 0.0263616516683891,
"acc_norm": 0.7932489451476793,
"acc_norm_stderr": 0.0263616516683891
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8244274809160306,
"acc_stderr": 0.03336820338476072,
"acc_norm": 0.8244274809160306,
"acc_norm_stderr": 0.03336820338476072
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.038498560987940904,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.038498560987940904
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.03408997886857529,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.03408997886857529
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.02158649400128137,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.02158649400128137
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.013890862162876168,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.013890862162876168
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7138728323699421,
"acc_stderr": 0.024332146779134124,
"acc_norm": 0.7138728323699421,
"acc_norm_stderr": 0.024332146779134124
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4134078212290503,
"acc_stderr": 0.01646981492840617,
"acc_norm": 0.4134078212290503,
"acc_norm_stderr": 0.01646981492840617
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7091503267973857,
"acc_stderr": 0.02600480036395213,
"acc_norm": 0.7091503267973857,
"acc_norm_stderr": 0.02600480036395213
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.026160584450140446,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.026160584450140446
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7067901234567902,
"acc_stderr": 0.025329888171900922,
"acc_norm": 0.7067901234567902,
"acc_norm_stderr": 0.025329888171900922
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4667535853976532,
"acc_stderr": 0.012741974333897229,
"acc_norm": 0.4667535853976532,
"acc_norm_stderr": 0.012741974333897229
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6683006535947712,
"acc_stderr": 0.019047485239360378,
"acc_norm": 0.6683006535947712,
"acc_norm_stderr": 0.019047485239360378
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.028920583220675585,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.028920583220675585
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169146,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169146
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197771,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197771
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6083231334149327,
"mc1_stderr": 0.017087795881769646,
"mc2": 0.7685134164142694,
"mc2_stderr": 0.013978190829507478
},
"harness|winogrande|5": {
"acc": 0.8334648776637726,
"acc_stderr": 0.010470796496781096
},
"harness|gsm8k|5": {
"acc": 0.5344958301743745,
"acc_stderr": 0.013739668147545915
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Carve/scene_type_dataset | ---
license: other
license_name: other
license_link: LICENSE
---
|
LucasGomasCunha/Voz_Tinoco | ---
license: openrail
---
|
Ingrid0693/guanaco-llama2-CRM | ---
license: mit
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 19367
num_examples: 65
download_size: 11687
dataset_size: 19367
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JMYasir/trReviews-ds | ---
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 134596742.14721414
num_examples: 362520
- name: validation
num_bytes: 14955564.852785867
num_examples: 40281
download_size: 0
dataset_size: 149552307.0
---
# Dataset Card for "trReviews-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dabe22af/fakegenimg | ---
license: openrail
---
|
CyberHarem/kashino_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kashino/樫野/樫野 (Azur Lane)
This is the dataset of kashino/樫野/樫野 (Azur Lane), containing 500 images and their tags.
The core tags of this character are `cow_ears, animal_ears, cow_horns, horns, breasts, long_hair, brown_hair, cow_girl, purple_eyes, huge_breasts, bangs, hair_ornament, hair_flower, very_long_hair, cow_tail, tail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 908.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kashino_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 420.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kashino_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1312 | 979.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kashino_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 756.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kashino_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1312 | 1.52 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kashino_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kashino_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, blush, cleavage, looking_at_viewer, simple_background, white_bikini, white_flower, official_alternate_costume, solo, white_background, halterneck, bare_shoulders, navel |
| 1 | 5 |  |  |  |  |  | 1girl, cleavage, criss-cross_halter, crossed_bandaids, looking_at_viewer, multi-strapped_bikini, navel, solo, white_bikini, white_flower, bare_shoulders, milk_bottle, official_alternate_costume, simple_background, blush, collarbone, holding_bottle, sitting, thighs, white_background, large_breasts, thigh_strap |
| 2 | 15 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, kimono, looking_at_viewer, solo, blush, official_alternate_costume, white_flower, upper_body |
| 3 | 14 |  |  |  |  |  | 1girl, hetero, solo_focus, 1boy, nipples, blush, paizuri, white_bikini, looking_at_viewer, penis, white_flower, official_alternate_costume, cum_on_breasts, breast_grab, grabbing, heart, on_back, sweat, crossed_bandaids, cum_on_hair, facial, mosaic_censoring, pov |
| 4 | 8 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, open_mouth, sex, solo_focus, looking_at_viewer, navel, penis, cowgirl_position, girl_on_top, sweat, vaginal, cum_in_pussy, pov, spread_legs, collarbone, completely_nude, heart-shaped_pupils, overflow, bar_censor, hair_ribbon, heavy_breathing, mosaic_censoring |
| 5 | 23 |  |  |  |  |  | 1girl, looking_at_viewer, maid_headdress, solo, official_alternate_costume, bare_shoulders, blush, frills, apron, black_thighhighs, cowbell, large_breasts, underboob, simple_background, clothing_cutout, white_background, black_skirt |
| 6 | 15 |  |  |  |  |  | 1girl, looking_at_viewer, detached_collar, solo, white_leotard, cleavage, blush, highleg_leotard, wrist_cuffs, purple_bowtie, thighs, bare_shoulders, brown_thighhighs, playboy_bunny, simple_background, white_background |
| 7 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, purple_skirt, solo, large_breasts, pleated_skirt, blush, black_thighhighs, miniskirt, simple_background, long_sleeves, white_background, white_jacket, hair_ribbon, white_shirt, holding_sword, katana, thighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | cleavage | looking_at_viewer | simple_background | white_bikini | white_flower | official_alternate_costume | solo | white_background | halterneck | bare_shoulders | navel | criss-cross_halter | crossed_bandaids | multi-strapped_bikini | milk_bottle | collarbone | holding_bottle | sitting | thighs | large_breasts | thigh_strap | kimono | upper_body | hetero | solo_focus | 1boy | nipples | paizuri | penis | cum_on_breasts | breast_grab | grabbing | heart | on_back | sweat | cum_on_hair | facial | mosaic_censoring | pov | open_mouth | sex | cowgirl_position | girl_on_top | vaginal | cum_in_pussy | spread_legs | completely_nude | heart-shaped_pupils | overflow | bar_censor | hair_ribbon | heavy_breathing | maid_headdress | frills | apron | black_thighhighs | cowbell | underboob | clothing_cutout | black_skirt | detached_collar | white_leotard | highleg_leotard | wrist_cuffs | purple_bowtie | brown_thighhighs | playboy_bunny | purple_skirt | pleated_skirt | miniskirt | long_sleeves | white_jacket | white_shirt | holding_sword | katana |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-----------|:--------------------|:--------------------|:---------------|:---------------|:-----------------------------|:-------|:-------------------|:-------------|:-----------------|:--------|:---------------------|:-------------------|:------------------------|:--------------|:-------------|:-----------------|:----------|:---------|:----------------|:--------------|:---------|:-------------|:---------|:-------------|:-------|:----------|:----------|:--------|:-----------------|:--------------|:-----------|:--------|:----------|:--------|:--------------|:---------|:-------------------|:------|:-------------|:------|:-------------------|:--------------|:----------|:---------------|:--------------|:------------------|:----------------------|:-----------|:-------------|:--------------|:------------------|:-----------------|:---------|:--------|:-------------------|:----------|:------------|:------------------|:--------------|:------------------|:----------------|:------------------|:--------------|:----------------|:-------------------|:----------------|:---------------|:----------------|:------------|:---------------|:---------------|:--------------|:----------------|:---------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | X | X | X | | | X | X | X | | | X | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 14 |  |  |  |  |  | X | X | | X | | X | X | X | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | | X | | | | | | | | | X | | | | | X | | | | | | | | X | X | X | X | | X | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 23 |  |  |  |  |  | X | X | | X | X | | | X | X | X | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 6 | 15 |  |  |  |  |  | X | X | X | X | X | | | | X | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | |
| 7 | 11 |  |  |  |  |  | X | X | | X | X | | | | X | X | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
thanhduycao/data_for_synthesis_with_entities_align_v3 | ---
dataset_info:
config_name: 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
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: intent
dtype: string
- name: sentence_annotation
dtype: string
- name: entities
list:
- name: type
dtype: string
- name: filler
dtype: string
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: origin_transcription
dtype: string
- name: sentence_norm
dtype: string
- name: w2v2_large_transcription
dtype: string
- name: wer
dtype: int64
- name: entities_norm
list:
- name: filler
dtype: string
- name: type
dtype: string
- name: entities_align
dtype: string
splits:
- name: train
num_bytes: 2667449542.4493446
num_examples: 5029
download_size: 632908060
dataset_size: 2667449542.4493446
configs:
- config_name: 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
data_files:
- split: train
path: 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/train-*
---
# Dataset Card for "data_for_synthesis_with_entities_align_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
seanghay/khmer_mpwt_speech | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: raw_transcription
dtype: string
splits:
- name: train
num_bytes: 28186841.51
num_examples: 2058
download_size: 27267047
dataset_size: 28186841.51
task_categories:
- text-to-speech
language:
- km
pretty_name: Khmer MPWT Speech
size_categories:
- 1K<n<10K
---
## Dataset Info
I do not own this dataset. It was imported from a mobile app published by the [**Ministry of Public Works and Transport**](https://play.google.com/store/apps/details?id=com.chanthol.drivingrules)
It's for research purposes only.
The dataset was manually reviewed, but there might still be errors.
## Metrics
Total Duration: 6957.366113 seconds (1.932 hours) |
timmytheBEST/girls | ---
license: creativeml-openrail-m
---
|
khoomeik/gzipscale-code-2.4M | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 9567596
num_examples: 9307
download_size: 3337702
dataset_size: 9567596
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
arkaprav0/trivy-go-test | ---
dataset_info:
features:
- name: id
dtype: string
- name: package_name
dtype: string
- name: installed_version
dtype: string
- name: affected_range
dtype: string
- name: fixed_version
dtype: string
- name: is_false_positive
dtype: int64
splits:
- name: train
num_bytes: 5479
num_examples: 75
download_size: 5484
dataset_size: 5479
---
# Dataset Card for "trivy-go-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sayan1101/llama-2-13b-subjectfinetune-grammar | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 1250979.4995054402
num_examples: 4549
- name: test
num_bytes: 139150.50049455985
num_examples: 506
download_size: 447422
dataset_size: 1390130.0
---
# Dataset Card for "llama-2-13b-subjectfinetune-grammar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dustinwloring1988/dolphin_v2 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1556526165
num_examples: 891857
download_size: 892691238
dataset_size: 1556526165
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
inductiva/fluid_cube | ---
size_categories:
- n<1K
---
# Fluid cube dataset
The Fluid Cube dataset contains 1000 fluid dynamics simulations of a
fluid block flowing inside a unit cube domain. For each simulation,
the fluid block is initialized with a different shape, position,
velocity, and fluid viscosity.
For more information on how the dataset was generated, look at [this
blog post](https://inductiva.ai/blog/article/fluid-cube-dataset). The
dataset is the same as the one in the blog post but wrapped in
HuggingFace's `datasets` library.
# Versions
1. `1000_simulations`: The exact same dataset as presented in [this
blog post](https://inductiva.ai/blog/article/fluid-cube-dataset).
2. `10_simulations`: A subset of the `1000_simulations` dataset,
useful for quick testing.
# Usage
To use the dataset just use:
```python
import datasets

dataset = datasets.load_dataset('inductiva/fluid_cube', version='10_simulations', split='train')
```
The dataset has several columns:
```python
['block_position', 'block_dimensions', 'fluid_volume', 'block_velocity',
'block_velocity_magnitude', 'kinematic_viscosity', 'density', 'tank_dimensions',
'time_max', 'time_step', 'particle_radius', 'number_of_fluid_particles', 'simulation_time_steps']
```
The most important is `simulation_time_steps`, a list whose length
equals the number of time steps in the simulation. Each element of the
list is an array of shape `(num_particles, 6)`: row `i` holds the
velocity of particle `i` along the x, y and z axes, followed by its
position along the x, y and z axes.
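As an illustrative sketch (the toy array below is made up, assuming only the `(num_particles, 6)` layout described above), the velocities and positions at one time step can be split like this:

```python
import numpy as np

# One hypothetical time step: rows are particles, columns are
# [vx, vy, vz, x, y, z] as described above.
step = np.array([
    [0.1, 0.0, -0.2, 0.5, 0.5, 0.5],
    [0.0, 0.3, 0.1, 0.2, 0.8, 0.4],
])

velocities = step[:, :3]  # (num_particles, 3) velocity components
positions = step[:, 3:]   # (num_particles, 3) particle positions

# Speed (velocity magnitude) of each particle.
speeds = np.linalg.norm(velocities, axis=1)
```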
# Dataset columns
* `block_position`: The initial position of the block;
* `block_dimensions`: The dimensions of the block on each axis;
* `fluid_volume`: Volume of the fluid;
* `block_velocity`: The initial velocity of the block;
* `block_velocity_magnitude`: Initial velocity magnitude;
* `kinematic_viscosity`: Viscosity of the fluid;
* `density`: Fluid density;
* `tank_dimensions`: The dimensions of the tank where the fluid is contained;
* `time_max`: Time, in seconds, of the simulation;
* `time_step`: Elapsed time between consecutive time steps in the simulation;
* `particle_radius`: Radius of the particles;
* `number_of_fluid_particles`: Number of particles;
|
Escalibur/realSergio | ---
license: unknown
---
|
open-llm-leaderboard/details_occultml__Helios-10.7B | ---
pretty_name: Evaluation run of occultml/Helios-10.7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [occultml/Helios-10.7B](https://huggingface.co/occultml/Helios-10.7B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_occultml__Helios-10.7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-04T12:17:31.612101](https://huggingface.co/datasets/open-llm-leaderboard/details_occultml__Helios-10.7B/blob/main/results_2024-01-04T12-17-31.612101.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.40988707960386334,\n\
\ \"acc_stderr\": 0.03401984092738561,\n \"acc_norm\": 0.414422676033673,\n\
\ \"acc_norm_stderr\": 0.03496456895834615,\n \"mc1\": 0.30599755201958384,\n\
\ \"mc1_stderr\": 0.016132229728155045,\n \"mc2\": 0.5552021116757884,\n\
\ \"mc2_stderr\": 0.01659507343053494\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.35665529010238906,\n \"acc_stderr\": 0.013998056902620203,\n\
\ \"acc_norm\": 0.3890784982935154,\n \"acc_norm_stderr\": 0.014247309976045609\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.3434574785899223,\n\
\ \"acc_stderr\": 0.004738920624724474,\n \"acc_norm\": 0.4660426209918343,\n\
\ \"acc_norm_stderr\": 0.004978260641742204\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4740740740740741,\n\
\ \"acc_stderr\": 0.04313531696750575,\n \"acc_norm\": 0.4740740740740741,\n\
\ \"acc_norm_stderr\": 0.04313531696750575\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.46710526315789475,\n \"acc_stderr\": 0.040601270352363966,\n\
\ \"acc_norm\": 0.46710526315789475,\n \"acc_norm_stderr\": 0.040601270352363966\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.41,\n\
\ \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \
\ \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.43018867924528303,\n \"acc_stderr\": 0.030471445867183238,\n\
\ \"acc_norm\": 0.43018867924528303,\n \"acc_norm_stderr\": 0.030471445867183238\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.3888888888888889,\n\
\ \"acc_stderr\": 0.04076663253918567,\n \"acc_norm\": 0.3888888888888889,\n\
\ \"acc_norm_stderr\": 0.04076663253918567\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816508,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816508\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3872832369942196,\n\
\ \"acc_stderr\": 0.037143259063020656,\n \"acc_norm\": 0.3872832369942196,\n\
\ \"acc_norm_stderr\": 0.037143259063020656\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.044405219061793275,\n\
\ \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.044405219061793275\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n\
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.34893617021276596,\n \"acc_stderr\": 0.031158522131357797,\n\
\ \"acc_norm\": 0.34893617021276596,\n \"acc_norm_stderr\": 0.031158522131357797\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3508771929824561,\n\
\ \"acc_stderr\": 0.044895393502706986,\n \"acc_norm\": 0.3508771929824561,\n\
\ \"acc_norm_stderr\": 0.044895393502706986\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.30344827586206896,\n \"acc_stderr\": 0.038312260488503336,\n\
\ \"acc_norm\": 0.30344827586206896,\n \"acc_norm_stderr\": 0.038312260488503336\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.24338624338624337,\n \"acc_stderr\": 0.022101128787415415,\n \"\
acc_norm\": 0.24338624338624337,\n \"acc_norm_stderr\": 0.022101128787415415\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.23809523809523808,\n\
\ \"acc_stderr\": 0.03809523809523812,\n \"acc_norm\": 0.23809523809523808,\n\
\ \"acc_norm_stderr\": 0.03809523809523812\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.4935483870967742,\n \"acc_stderr\": 0.02844163823354051,\n \"\
acc_norm\": 0.4935483870967742,\n \"acc_norm_stderr\": 0.02844163823354051\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.3793103448275862,\n \"acc_stderr\": 0.034139638059062345,\n \"\
acc_norm\": 0.3793103448275862,\n \"acc_norm_stderr\": 0.034139638059062345\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n\
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.4666666666666667,\n \"acc_stderr\": 0.03895658065271847,\n\
\ \"acc_norm\": 0.4666666666666667,\n \"acc_norm_stderr\": 0.03895658065271847\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.47474747474747475,\n \"acc_stderr\": 0.03557806245087314,\n \"\
acc_norm\": 0.47474747474747475,\n \"acc_norm_stderr\": 0.03557806245087314\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.5077720207253886,\n \"acc_stderr\": 0.036080032255696545,\n\
\ \"acc_norm\": 0.5077720207253886,\n \"acc_norm_stderr\": 0.036080032255696545\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.37435897435897436,\n \"acc_stderr\": 0.024537591572830506,\n\
\ \"acc_norm\": 0.37435897435897436,\n \"acc_norm_stderr\": 0.024537591572830506\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.29259259259259257,\n \"acc_stderr\": 0.027738969632176088,\n \
\ \"acc_norm\": 0.29259259259259257,\n \"acc_norm_stderr\": 0.027738969632176088\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.40756302521008403,\n \"acc_stderr\": 0.03191863374478465,\n\
\ \"acc_norm\": 0.40756302521008403,\n \"acc_norm_stderr\": 0.03191863374478465\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2582781456953642,\n \"acc_stderr\": 0.035737053147634576,\n \"\
acc_norm\": 0.2582781456953642,\n \"acc_norm_stderr\": 0.035737053147634576\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.4954128440366973,\n \"acc_stderr\": 0.021436420955529424,\n \"\
acc_norm\": 0.4954128440366973,\n \"acc_norm_stderr\": 0.021436420955529424\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.24537037037037038,\n \"acc_stderr\": 0.029346665094372937,\n \"\
acc_norm\": 0.24537037037037038,\n \"acc_norm_stderr\": 0.029346665094372937\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.4019607843137255,\n \"acc_stderr\": 0.034411900234824655,\n \"\
acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.034411900234824655\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.48523206751054854,\n \"acc_stderr\": 0.032533028078777386,\n \
\ \"acc_norm\": 0.48523206751054854,\n \"acc_norm_stderr\": 0.032533028078777386\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.4663677130044843,\n\
\ \"acc_stderr\": 0.033481800170603065,\n \"acc_norm\": 0.4663677130044843,\n\
\ \"acc_norm_stderr\": 0.033481800170603065\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.4580152671755725,\n \"acc_stderr\": 0.04369802690578756,\n\
\ \"acc_norm\": 0.4580152671755725,\n \"acc_norm_stderr\": 0.04369802690578756\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6033057851239669,\n \"acc_stderr\": 0.044658697805310094,\n \"\
acc_norm\": 0.6033057851239669,\n \"acc_norm_stderr\": 0.044658697805310094\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5370370370370371,\n\
\ \"acc_stderr\": 0.04820403072760627,\n \"acc_norm\": 0.5370370370370371,\n\
\ \"acc_norm_stderr\": 0.04820403072760627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.49693251533742333,\n \"acc_stderr\": 0.03928297078179663,\n\
\ \"acc_norm\": 0.49693251533742333,\n \"acc_norm_stderr\": 0.03928297078179663\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.30357142857142855,\n\
\ \"acc_stderr\": 0.04364226155841044,\n \"acc_norm\": 0.30357142857142855,\n\
\ \"acc_norm_stderr\": 0.04364226155841044\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.5048543689320388,\n \"acc_stderr\": 0.04950504382128919,\n\
\ \"acc_norm\": 0.5048543689320388,\n \"acc_norm_stderr\": 0.04950504382128919\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.5384615384615384,\n\
\ \"acc_stderr\": 0.03265903381186193,\n \"acc_norm\": 0.5384615384615384,\n\
\ \"acc_norm_stderr\": 0.03265903381186193\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-miscellaneous|5\"\
: {\n \"acc\": 0.5504469987228607,\n \"acc_stderr\": 0.017788725283507337,\n\
\ \"acc_norm\": 0.5504469987228607,\n \"acc_norm_stderr\": 0.017788725283507337\n\
\ },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5289017341040463,\n\
\ \"acc_stderr\": 0.026874085883518348,\n \"acc_norm\": 0.5289017341040463,\n\
\ \"acc_norm_stderr\": 0.026874085883518348\n },\n \"harness|hendrycksTest-moral_scenarios|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.013378001241813068,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.013378001241813068\n \
\ },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.49673202614379086,\n\
\ \"acc_stderr\": 0.02862930519400354,\n \"acc_norm\": 0.49673202614379086,\n\
\ \"acc_norm_stderr\": 0.02862930519400354\n },\n \"harness|hendrycksTest-philosophy|5\"\
: {\n \"acc\": 0.5819935691318328,\n \"acc_stderr\": 0.028013651891995072,\n\
\ \"acc_norm\": 0.5819935691318328,\n \"acc_norm_stderr\": 0.028013651891995072\n\
\ },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.49382716049382713,\n\
\ \"acc_stderr\": 0.027818623962583302,\n \"acc_norm\": 0.49382716049382713,\n\
\ \"acc_norm_stderr\": 0.027818623962583302\n },\n \"harness|hendrycksTest-professional_accounting|5\"\
: {\n \"acc\": 0.2907801418439716,\n \"acc_stderr\": 0.027090664368353178,\n\
\ \"acc_norm\": 0.2907801418439716,\n \"acc_norm_stderr\": 0.027090664368353178\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.29335071707953064,\n\
\ \"acc_stderr\": 0.011628520449582071,\n \"acc_norm\": 0.29335071707953064,\n\
\ \"acc_norm_stderr\": 0.011628520449582071\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.26838235294117646,\n \"acc_stderr\": 0.026917481224377232,\n\
\ \"acc_norm\": 0.26838235294117646,\n \"acc_norm_stderr\": 0.026917481224377232\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.46895424836601307,\n \"acc_stderr\": 0.020188804456361883,\n \
\ \"acc_norm\": 0.46895424836601307,\n \"acc_norm_stderr\": 0.020188804456361883\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4818181818181818,\n\
\ \"acc_stderr\": 0.04785964010794917,\n \"acc_norm\": 0.4818181818181818,\n\
\ \"acc_norm_stderr\": 0.04785964010794917\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.37142857142857144,\n \"acc_stderr\": 0.030932858792789848,\n\
\ \"acc_norm\": 0.37142857142857144,\n \"acc_norm_stderr\": 0.030932858792789848\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.5472636815920398,\n\
\ \"acc_stderr\": 0.03519702717576915,\n \"acc_norm\": 0.5472636815920398,\n\
\ \"acc_norm_stderr\": 0.03519702717576915\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.67,\n \"acc_stderr\": 0.047258156262526094,\n \
\ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.047258156262526094\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3614457831325301,\n\
\ \"acc_stderr\": 0.037400593820293204,\n \"acc_norm\": 0.3614457831325301,\n\
\ \"acc_norm_stderr\": 0.037400593820293204\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6023391812865497,\n \"acc_stderr\": 0.03753638955761691,\n\
\ \"acc_norm\": 0.6023391812865497,\n \"acc_norm_stderr\": 0.03753638955761691\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.30599755201958384,\n\
\ \"mc1_stderr\": 0.016132229728155045,\n \"mc2\": 0.5552021116757884,\n\
\ \"mc2_stderr\": 0.01659507343053494\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7071823204419889,\n \"acc_stderr\": 0.012789321118542616\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/occultml/Helios-10.7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|arc:challenge|25_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|gsm8k|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hellaswag|10_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-17-31.612101.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T12-17-31.612101.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- '**/details_harness|winogrande|5_2024-01-04T12-17-31.612101.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-04T12-17-31.612101.parquet'
- config_name: results
data_files:
- split: 2024_01_04T12_17_31.612101
path:
- results_2024-01-04T12-17-31.612101.parquet
- split: latest
path:
- results_2024-01-04T12-17-31.612101.parquet
---
# Dataset Card for Evaluation run of occultml/Helios-10.7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [occultml/Helios-10.7B](https://huggingface.co/occultml/Helios-10.7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_occultml__Helios-10.7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-04T12:17:31.612101](https://huggingface.co/datasets/open-llm-leaderboard/details_occultml__Helios-10.7B/blob/main/results_2024-01-04T12-17-31.612101.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.40988707960386334,
"acc_stderr": 0.03401984092738561,
"acc_norm": 0.414422676033673,
"acc_norm_stderr": 0.03496456895834615,
"mc1": 0.30599755201958384,
"mc1_stderr": 0.016132229728155045,
"mc2": 0.5552021116757884,
"mc2_stderr": 0.01659507343053494
},
"harness|arc:challenge|25": {
"acc": 0.35665529010238906,
"acc_stderr": 0.013998056902620203,
"acc_norm": 0.3890784982935154,
"acc_norm_stderr": 0.014247309976045609
},
"harness|hellaswag|10": {
"acc": 0.3434574785899223,
"acc_stderr": 0.004738920624724474,
"acc_norm": 0.4660426209918343,
"acc_norm_stderr": 0.004978260641742204
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4740740740740741,
"acc_stderr": 0.04313531696750575,
"acc_norm": 0.4740740740740741,
"acc_norm_stderr": 0.04313531696750575
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.46710526315789475,
"acc_stderr": 0.040601270352363966,
"acc_norm": 0.46710526315789475,
"acc_norm_stderr": 0.040601270352363966
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.43018867924528303,
"acc_stderr": 0.030471445867183238,
"acc_norm": 0.43018867924528303,
"acc_norm_stderr": 0.030471445867183238
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.04076663253918567,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.04076663253918567
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816508,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816508
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3872832369942196,
"acc_stderr": 0.037143259063020656,
"acc_norm": 0.3872832369942196,
"acc_norm_stderr": 0.037143259063020656
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.044405219061793275,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.044405219061793275
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.34893617021276596,
"acc_stderr": 0.031158522131357797,
"acc_norm": 0.34893617021276596,
"acc_norm_stderr": 0.031158522131357797
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3508771929824561,
"acc_stderr": 0.044895393502706986,
"acc_norm": 0.3508771929824561,
"acc_norm_stderr": 0.044895393502706986
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.30344827586206896,
"acc_stderr": 0.038312260488503336,
"acc_norm": 0.30344827586206896,
"acc_norm_stderr": 0.038312260488503336
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24338624338624337,
"acc_stderr": 0.022101128787415415,
"acc_norm": 0.24338624338624337,
"acc_norm_stderr": 0.022101128787415415
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.23809523809523808,
"acc_stderr": 0.03809523809523812,
"acc_norm": 0.23809523809523808,
"acc_norm_stderr": 0.03809523809523812
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4935483870967742,
"acc_stderr": 0.02844163823354051,
"acc_norm": 0.4935483870967742,
"acc_norm_stderr": 0.02844163823354051
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3793103448275862,
"acc_stderr": 0.034139638059062345,
"acc_norm": 0.3793103448275862,
"acc_norm_stderr": 0.034139638059062345
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.03895658065271847,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.03895658065271847
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.47474747474747475,
"acc_stderr": 0.03557806245087314,
"acc_norm": 0.47474747474747475,
"acc_norm_stderr": 0.03557806245087314
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5077720207253886,
"acc_stderr": 0.036080032255696545,
"acc_norm": 0.5077720207253886,
"acc_norm_stderr": 0.036080032255696545
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.37435897435897436,
"acc_stderr": 0.024537591572830506,
"acc_norm": 0.37435897435897436,
"acc_norm_stderr": 0.024537591572830506
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.29259259259259257,
"acc_stderr": 0.027738969632176088,
"acc_norm": 0.29259259259259257,
"acc_norm_stderr": 0.027738969632176088
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.40756302521008403,
"acc_stderr": 0.03191863374478465,
"acc_norm": 0.40756302521008403,
"acc_norm_stderr": 0.03191863374478465
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2582781456953642,
"acc_stderr": 0.035737053147634576,
"acc_norm": 0.2582781456953642,
"acc_norm_stderr": 0.035737053147634576
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.4954128440366973,
"acc_stderr": 0.021436420955529424,
"acc_norm": 0.4954128440366973,
"acc_norm_stderr": 0.021436420955529424
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.24537037037037038,
"acc_stderr": 0.029346665094372937,
"acc_norm": 0.24537037037037038,
"acc_norm_stderr": 0.029346665094372937
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.034411900234824655,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.034411900234824655
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.48523206751054854,
"acc_stderr": 0.032533028078777386,
"acc_norm": 0.48523206751054854,
"acc_norm_stderr": 0.032533028078777386
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.4663677130044843,
"acc_stderr": 0.033481800170603065,
"acc_norm": 0.4663677130044843,
"acc_norm_stderr": 0.033481800170603065
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.4580152671755725,
"acc_stderr": 0.04369802690578756,
"acc_norm": 0.4580152671755725,
"acc_norm_stderr": 0.04369802690578756
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6033057851239669,
"acc_stderr": 0.044658697805310094,
"acc_norm": 0.6033057851239669,
"acc_norm_stderr": 0.044658697805310094
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.04820403072760627,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.04820403072760627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.49693251533742333,
"acc_stderr": 0.03928297078179663,
"acc_norm": 0.49693251533742333,
"acc_norm_stderr": 0.03928297078179663
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.30357142857142855,
"acc_stderr": 0.04364226155841044,
"acc_norm": 0.30357142857142855,
"acc_norm_stderr": 0.04364226155841044
},
"harness|hendrycksTest-management|5": {
"acc": 0.5048543689320388,
"acc_stderr": 0.04950504382128919,
"acc_norm": 0.5048543689320388,
"acc_norm_stderr": 0.04950504382128919
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.5384615384615384,
"acc_stderr": 0.03265903381186193,
"acc_norm": 0.5384615384615384,
"acc_norm_stderr": 0.03265903381186193
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5504469987228607,
"acc_stderr": 0.017788725283507337,
"acc_norm": 0.5504469987228607,
"acc_norm_stderr": 0.017788725283507337
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5289017341040463,
"acc_stderr": 0.026874085883518348,
"acc_norm": 0.5289017341040463,
"acc_norm_stderr": 0.026874085883518348
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2,
"acc_stderr": 0.013378001241813068,
"acc_norm": 0.2,
"acc_norm_stderr": 0.013378001241813068
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.49673202614379086,
"acc_stderr": 0.02862930519400354,
"acc_norm": 0.49673202614379086,
"acc_norm_stderr": 0.02862930519400354
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5819935691318328,
"acc_stderr": 0.028013651891995072,
"acc_norm": 0.5819935691318328,
"acc_norm_stderr": 0.028013651891995072
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.49382716049382713,
"acc_stderr": 0.027818623962583302,
"acc_norm": 0.49382716049382713,
"acc_norm_stderr": 0.027818623962583302
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2907801418439716,
"acc_stderr": 0.027090664368353178,
"acc_norm": 0.2907801418439716,
"acc_norm_stderr": 0.027090664368353178
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.29335071707953064,
"acc_stderr": 0.011628520449582071,
"acc_norm": 0.29335071707953064,
"acc_norm_stderr": 0.011628520449582071
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.26838235294117646,
"acc_stderr": 0.026917481224377232,
"acc_norm": 0.26838235294117646,
"acc_norm_stderr": 0.026917481224377232
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.46895424836601307,
"acc_stderr": 0.020188804456361883,
"acc_norm": 0.46895424836601307,
"acc_norm_stderr": 0.020188804456361883
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.4818181818181818,
"acc_stderr": 0.04785964010794917,
"acc_norm": 0.4818181818181818,
"acc_norm_stderr": 0.04785964010794917
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.37142857142857144,
"acc_stderr": 0.030932858792789848,
"acc_norm": 0.37142857142857144,
"acc_norm_stderr": 0.030932858792789848
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.5472636815920398,
"acc_stderr": 0.03519702717576915,
"acc_norm": 0.5472636815920398,
"acc_norm_stderr": 0.03519702717576915
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526094,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526094
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3614457831325301,
"acc_stderr": 0.037400593820293204,
"acc_norm": 0.3614457831325301,
"acc_norm_stderr": 0.037400593820293204
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6023391812865497,
"acc_stderr": 0.03753638955761691,
"acc_norm": 0.6023391812865497,
"acc_norm_stderr": 0.03753638955761691
},
"harness|truthfulqa:mc|0": {
"mc1": 0.30599755201958384,
"mc1_stderr": 0.016132229728155045,
"mc2": 0.5552021116757884,
"mc2_stderr": 0.01659507343053494
},
"harness|winogrande|5": {
"acc": 0.7071823204419889,
"acc_stderr": 0.012789321118542616
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
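As a quick illustration of how the per-task entries above can be aggregated, here is a minimal sketch that averages the `acc` field over the MMLU (`hendrycksTest-*`) tasks. The dict below copies three entries verbatim from the results JSON above for brevity; a real analysis would load the full `results` config instead.

```python
# Minimal sketch: aggregate per-task accuracies from the results JSON above.
# Only three hendrycksTest entries are reproduced here for brevity.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.26},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.4740740740740741},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.46710526315789475},
}

# Filter to MMLU-style tasks and take the unweighted mean of their accuracies.
mmlu_tasks = {k: v for k, v in results.items()
              if k.startswith("harness|hendrycksTest-")}
avg_acc = sum(v["acc"] for v in mmlu_tasks.values()) / len(mmlu_tasks)
print(round(avg_acc, 6))  # → 0.400393
```

Run over the complete results file, the same loop reproduces the leaderboard-style macro average across all 57 MMLU subtasks.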
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Kyle1668/pythia-semantic-memorization-perplexities | ---
configs:
- config_name: default
data_files:
- split: memories.deduped.12b
path: data/memories.deduped.12b-*
- split: memories.duped.12b
path: data/memories.duped.12b-*
- split: memories.duped.6.9b
path: data/memories.duped.6.9b-*
- split: pile.duped.6.9b
path: data/pile.duped.6.9b-*
- split: memories.duped.70m
path: data/memories.duped.70m-*
- split: memories.duped.160m
path: data/memories.duped.160m-*
- split: memories.duped.410m
path: data/memories.duped.410m-*
- split: pile.duped.70m
path: data/pile.duped.70m-*
- split: pile.duped.160m
path: data/pile.duped.160m-*
- split: pile.duped.410m
path: data/pile.duped.410m-*
- split: memories.duped.1.4b
path: data/memories.duped.1.4b-*
- split: memories.duped.1b
path: data/memories.duped.1b-*
- split: memories.duped.2.8b
path: data/memories.duped.2.8b-*
- split: pile.duped.1.4b
path: data/pile.duped.1.4b-*
- split: pile.duped.1b
path: data/pile.duped.1b-*
- split: pile.duped.2.8b
path: data/pile.duped.2.8b-*
- split: pile.duped.12b
path: data/pile.duped.12b-*
- split: memories.deduped.70m
path: data/memories.deduped.70m-*
- split: memories.deduped.160m
path: data/memories.deduped.160m-*
- split: memories.deduped.410m
path: data/memories.deduped.410m-*
- split: pile.deduped.70m
path: data/pile.deduped.70m-*
- split: pile.deduped.160m
path: data/pile.deduped.160m-*
- split: pile.deduped.410m
path: data/pile.deduped.410m-*
- split: memories.deduped.6.9b
path: data/memories.deduped.6.9b-*
- split: pile.deduped.6.9b
path: data/pile.deduped.6.9b-*
- split: pile.deduped.12b
path: data/pile.deduped.12b-*
- split: memories.deduped.2.8b
path: data/memories.deduped.2.8b-*
- split: pile.deduped.2.8b
path: data/pile.deduped.2.8b-*
- split: memories.deduped.1.4b
path: data/memories.deduped.1.4b-*
- split: memories.deduped.1b
path: data/memories.deduped.1b-*
- split: pile.deduped.1.4b
path: data/pile.deduped.1.4b-*
- split: pile.deduped.1b
path: data/pile.deduped.1b-*
dataset_info:
features:
- name: index
dtype: int32
- name: prompt_perplexity
dtype: float32
- name: generation_perplexity
dtype: float32
- name: sequence_perplexity
dtype: float32
splits:
- name: memories.deduped.12b
num_bytes: 29939456
num_examples: 1871216
- name: memories.duped.12b
num_bytes: 38117248
num_examples: 2382328
- name: memories.duped.6.9b
num_bytes: 33935616
num_examples: 2120976
- name: pile.duped.6.9b
num_bytes: 80000000
num_examples: 5000000
- name: memories.duped.70m
num_bytes: 7423248
num_examples: 463953
- name: memories.duped.160m
num_bytes: 11034768
num_examples: 689673
- name: memories.duped.410m
num_bytes: 15525456
num_examples: 970341
- name: pile.duped.70m
num_bytes: 80000000
num_examples: 5000000
- name: pile.duped.160m
num_bytes: 80000000
num_examples: 5000000
- name: pile.duped.410m
num_bytes: 80000000
num_examples: 5000000
- name: memories.duped.1.4b
num_bytes: 21979552
num_examples: 1373722
- name: memories.duped.1b
num_bytes: 20098256
num_examples: 1256141
- name: memories.duped.2.8b
num_bytes: 26801232
num_examples: 1675077
- name: pile.duped.1.4b
num_bytes: 80000000
num_examples: 5000000
- name: pile.duped.1b
num_bytes: 80000000
num_examples: 5000000
- name: pile.duped.2.8b
num_bytes: 80000000
num_examples: 5000000
- name: pile.duped.12b
num_bytes: 80000000
num_examples: 5000000
- name: memories.deduped.70m
num_bytes: 6583168
num_examples: 411448
- name: memories.deduped.160m
num_bytes: 9299120
num_examples: 581195
- name: memories.deduped.410m
num_bytes: 12976624
num_examples: 811039
- name: pile.deduped.70m
num_bytes: 80000000
num_examples: 5000000
- name: pile.deduped.160m
num_bytes: 80000000
num_examples: 5000000
- name: pile.deduped.410m
num_bytes: 80000000
num_examples: 5000000
- name: memories.deduped.6.9b
num_bytes: 26884704
num_examples: 1680294
- name: pile.deduped.6.9b
num_bytes: 80000000
num_examples: 5000000
- name: pile.deduped.12b
num_bytes: 80000000
num_examples: 5000000
- name: memories.deduped.2.8b
num_bytes: 21683376
num_examples: 1355211
- name: pile.deduped.2.8b
num_bytes: 80000000
num_examples: 5000000
- name: memories.deduped.1.4b
num_bytes: 16769552
num_examples: 1048097
- name: memories.deduped.1b
num_bytes: 16525840
num_examples: 1032865
- name: pile.deduped.1.4b
num_bytes: 80000000
num_examples: 5000000
- name: pile.deduped.1b
num_bytes: 80000000
num_examples: 5000000
download_size: 1891778367
dataset_size: 1595577216
---
# Dataset Card for "pythia-semantic-memorization-perplexities"
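The split names in the YAML above follow a `<subset>.<dedup>.<model-size>` pattern (e.g. `memories.deduped.12b`, `pile.duped.6.9b`). A small helper to decompose them (hypothetical, not part of this repo) might look like:

```python
# Hypothetical helper: split names follow "<subset>.<dedup>.<model-size>".
# maxsplit=2 keeps sizes containing dots (e.g. "6.9b") intact.
def parse_split(name: str) -> dict:
    subset, dedup, size = name.split(".", 2)
    return {"subset": subset, "dedup": dedup, "model_size": size}

print(parse_split("memories.deduped.12b"))
# → {'subset': 'memories', 'dedup': 'deduped', 'model_size': '12b'}
```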
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hac541309/code-romance-cjk-wiki | ---
language: it
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8317039580
num_examples: 1515722
download_size: 4525674220
dataset_size: 8317039580
---
# Dataset Card for "code-romance-cjk-wiki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distilled-from-one-sec-cv12/chunk_88 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1337147732
num_examples: 260551
download_size: 1365668157
dataset_size: 1337147732
---
# Dataset Card for "chunk_88"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ronellcross22/Welcome_to_LangChain | ---
license: mit
---
|
sergiodtm/meu_modelo | ---
license: apache-2.0
---
|
Dimmas/Landscape_Segmentation | ---
license: bigscience-openrail-m
---
|
LRGB/coco_superpixels_edge_wt_region_boundary_30 | ---
task_categories:
- graph-ml
size_categories:
- 1M<n<10M
tags:
- lrgb
license: cc-by-4.0
dataset_info:
features:
- name: x
dtype: int64
- name: edge_index
dtype: int64
- name: edge_attr
dtype: int64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 3625184
num_examples: 113287
- name: val
num_bytes: 160032
num_examples: 5001
- name: test
num_bytes: 160032
num_examples: 5001
download_size: 3257505
dataset_size: 3945248
---
# `coco_superpixels_edge_wt_region_boundary_30`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| COCO-SP | Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
| Dataset | # Graphs | # Nodes | μ Nodes | μ Deg. | # Edges | μ Edges | μ Short. Path | μ Diameter
|---|---:|---:|---:|:---:|---:|---:|---:|---:|
| COCO-SP | 123,286 | 58,793,216 | 476.88 | 5.65 | 332,091,902 | 2,693.67 | 10.66±0.55 | 27.39±2.14 |
## Additional Information
### Dataset Curators
* Vijay Prakash Dwivedi ([vijaydwivedi75](https://github.com/vijaydwivedi75))
### Citation Information
```
@article{dwivedi2022LRGB,
title={Long Range Graph Benchmark},
author={Dwivedi, Vijay Prakash and Rampášek, Ladislav and Galkin, Mikhail and Parviz, Ali and Wolf, Guy and Luu, Anh Tuan and Beaini, Dominique},
journal={arXiv:2206.08164},
year={2022}
}
``` |
Marchanjo/spider-FIT-pt | ---
license: cc-by-sa-4.0
---
Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), you can download the model's checkpoints and datasets, but to understand is better to go to Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3), [here the SharedIt link](https://rdcu.be/dff19). [here the pre-print in arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35), [here the pre-print in arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) |
AdapterOcean/med_alpaca_standardized_cluster_50 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 55401290
num_examples: 5976
download_size: 14923007
dataset_size: 55401290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml/OSACT4_hatespeech | ---
dataset_info:
features:
- name: tweet
dtype: string
- name: offensive
dtype: string
- name: hate
dtype: string
splits:
- name: train
num_bytes: 1417732
num_examples: 6838
- name: validation
num_bytes: 204725
num_examples: 999
download_size: 802812
dataset_size: 1622457
---
# Dataset Card for "OSACT4_hatespeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yezhengli9/wmt20-en-ru | ---
dataset_info:
features:
- name: id (string)
dtype: string
- name: translation (translation)
dtype: string
splits:
- name: train
num_bytes: 1803167
num_examples: 2002
download_size: 693889
dataset_size: 1803167
---
# Dataset Card for "wmt20-en-ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juliantcchu/textworld-GPT4 | ---
license: apache-2.0
--- |
ioclab/laplacian_image_aesthetic_3M | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 359597047282.0
num_examples: 3000000
download_size: 359170663793
dataset_size: 359597047282.0
---
# Dataset Card for "laplacian_image_aesthetic_3M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adityarra07/train_ds_noise2 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: 'null'
- name: sampling_rate
dtype: int64
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 10447485720.049213
num_examples: 45804
- name: test
num_bytes: 114045560.65026213
num_examples: 500
download_size: 10597554260
dataset_size: 10561531280.699476
---
# Dataset Card for "train_ds_noise2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_automl_eye_movements_gosdt_l256_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float64
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float64
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 5436000000
num_examples: 100000
- name: validation
num_bytes: 543600000
num_examples: 10000
download_size: 1404348878
dataset_size: 5979600000
---
# Dataset Card for "autotree_automl_eye_movements_gosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pccl-org/formal-logic-simple-order-new-objects-paired-faster-2000 | ---
dataset_info:
features:
- name: greater_than
dtype: string
- name: less_than
dtype: string
- name: paired_example
sequence:
sequence: string
- name: correct_example
sequence: string
- name: incorrect_example
sequence: string
- name: distance
dtype: int64
- name: index
dtype: int64
- name: index_in_distance
dtype: int64
splits:
- name: train
num_bytes: 506359052
num_examples: 1997003
download_size: 162272343
dataset_size: 506359052
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shrop/multinerd_en_filtered | ---
license: unknown
language:
- en
---
A filtered version of the [original English subset of the MultiNERD dataset by Babelscape](https://huggingface.co/datasets/Babelscape/multinerd). This version contains only 5 of the original 15 NER categories: PERSON (PER), ORGANIZATION (ORG), LOCATION (LOC), DISEASES (DIS), and ANIMAL (ANIM). The dataset was filtered as part of a test.
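As a rough illustration, filtering to a subset of categories typically means mapping every BIO tag outside the kept set to the outside tag `O`. The sketch below is hypothetical (not the script actually used, and the exact MultiNERD label strings are assumed):

```python
# Keep only the 5 retained entity categories; everything else becomes "O".
# Category abbreviations follow the card above; exact label strings are assumed.
KEEP = {"PER", "ORG", "LOC", "DIS", "ANIM"}

def filter_tags(tags):
    """Map BIO tags whose category is not in KEEP to the outside tag 'O'."""
    out = []
    for tag in tags:
        if tag == "O":
            out.append(tag)
            continue
        _prefix, _, category = tag.partition("-")  # "B-PER" -> ("B", "-", "PER")
        out.append(tag if category in KEEP else "O")
    return out

print(filter_tags(["B-PER", "I-PER", "O", "B-TIME", "B-ANIM"]))
# -> ['B-PER', 'I-PER', 'O', 'O', 'B-ANIM']
```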
See https://huggingface.co/datasets/Babelscape/multinerd for information on the dataset structure. |
vwxyzjn/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_1706373318 | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_len
dtype: int64
splits:
- name: train
num_bytes: 1600440249
num_examples: 116722
- name: validation
num_bytes: 88425771
num_examples: 6447
- name: test
num_bytes: 89922466
num_examples: 6553
download_size: 551824607
dataset_size: 1778788486
---
# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task
The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset
These columns are taken directly from the aforementioned dataset:
* **id**: unique identifier for the post
* **subreddit**: subreddit the post was taken from
* **title**: title of the post
* **post**: body of the post
* **summary**: summary of the post
* **reference_response**: reference response for the post
These columns are added by this preprocessing script:
* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last `\n`. If it's too short it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either space or `[PAD]` token (see Args below).
* **query_token**: tokenized version of `query`
* **reference_response_token**: tokenized version of `reference_response`
* **reference_response_token_len**: length of `reference_response_token`
* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
* **query_reference_response_token_len**: length of `query_reference_response_token`
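The truncate-then-pad behaviour described above can be sketched roughly as follows. This is a simplified stand-in for the OAI preprocessing, with made-up token ids (the real code operates on tokenizer output and has more cases):

```python
def format_query(tokens, length, pad_token, newline_token):
    """Return exactly `length` tokens: truncate long inputs at the last
    newline token inside the window, then left-pad short inputs."""
    if len(tokens) > length:
        window = tokens[:length]
        if newline_token in window:
            # cut just after the last newline within the window
            cut = len(window) - 1 - window[::-1].index(newline_token)
            window = window[:cut + 1]
        tokens = window
    return [pad_token] * (length - len(tokens)) + tokens

# Long input: truncated at the last "newline" (token 9), then left-padded.
print(format_query([1, 2, 9, 3, 4, 9, 5, 6], 5, 0, 9))
# -> [0, 0, 1, 2, 9]
```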
# Args
```python
{'base_model': 'EleutherAI/pythia-1b-deduped',
'check_length_correctness': True,
'cnndm_params': TaskQueryHParams(length=1919,
format_str='Article:\n{article}\n\nTL;DR:\n',
truncate_field='article',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=None,
max_sft_query_response_length=None,
max_rm_response_length=155,
max_rm_query_response_length=2021),
'debug': False,
'hf_entity': 'vwxyzjn',
'push_to_hub': True,
'tldr_params': TaskQueryHParams(length=512,
format_str='SUBREDDIT: r/{subreddit}\n'
'\n'
'TITLE: {title}\n'
'\n'
'POST: {post}\n'
'\n'
'TL;DR:',
truncate_field='post',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=53,
max_sft_query_response_length=562,
max_rm_response_length=169,
max_rm_query_response_length=638)}
```
|
xglx893428923/history_curated | ---
license: gpl
---
|
tyzhu/lmind_hotpot_train1000_eval200_v1_doc_qa | ---
configs:
- config_name: default
data_files:
- split: train_qa
path: data/train_qa-*
- split: train_recite_qa
path: data/train_recite_qa-*
- split: eval_qa
path: data/eval_qa-*
- split: eval_recite_qa
path: data/eval_recite_qa-*
- split: all_docs
path: data/all_docs-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
splits:
- name: train_qa
num_bytes: 173266
num_examples: 1000
- name: train_recite_qa
num_bytes: 1024784
num_examples: 1000
- name: eval_qa
num_bytes: 33160
num_examples: 200
- name: eval_recite_qa
num_bytes: 208740
num_examples: 200
- name: all_docs
num_bytes: 1054269
num_examples: 2373
- name: train
num_bytes: 1227535
num_examples: 3373
- name: validation
num_bytes: 33160
num_examples: 200
download_size: 2356905
dataset_size: 3754914
---
# Dataset Card for "lmind_hotpot_train1000_eval200_v1_doc_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pumaML/NEW-GEN | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 1252876.2642514652
num_examples: 3378
- name: validation
num_bytes: 139455.7357485349
num_examples: 376
download_size: 896649
dataset_size: 1392332.0
---
# Dataset Card for "NEW-GEN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
htdung167/vivos-preprocessed-viewer | ---
dataset_info:
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: original_sentence
dtype: string
- name: preprocessed_sentence
dtype: string
splits:
- name: train
num_bytes: 1722176870.5
num_examples: 11660
- name: test
num_bytes: 86118591.0
num_examples: 760
download_size: 1772673347
dataset_size: 1808295461.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
pccl-org/formal-logic-simple-order-simple-objects-clavorier-500 | ---
dataset_info:
features:
- name: greater_than
dtype: string
- name: less_than
dtype: string
- name: correct_example
sequence: string
- name: incorrect_example
sequence: string
- name: distance
dtype: int64
- name: index
dtype: int64
splits:
- name: train
num_bytes: 19386150
num_examples: 124750
download_size: 0
dataset_size: 19386150
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "formal-logic-simple-order-simple-objects-clavorier-500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Joch2010/FORCEGPT | ---
license: apache-2.0
---
|
solkogan/SolDataset1 | ---
license: mit
task_categories:
- text-generation
- text2text-generation
- conversational
language:
- ru
size_categories:
- 100K<n<1M
---
### solkogan/SolDataset1
A dataset for training instruction-following and conversational models
### Citation
```
@MISC{solkogan/SolDataset1,
author = {Ivan Ramovich, Denis Petrov},
title = {Russian dataset for Conversational models},
url = {https://huggingface.co/datasets/solkogan/SolDataset1},
year = 2023
}
``` |
zolak/twitter_dataset_1713017958 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: float64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 47782460
num_examples: 122506
download_size: 24223767
dataset_size: 47782460
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lawful-good-project/dataset-qa-ip-law | ---
license: gpl-3.0
task_categories:
- question-answering
language:
- ru
tags:
- legal
size_categories:
- n<1K
---
A dataset for evaluating the performance of a large language model.
# Contributors (in alphabetical order):
Ася Айнбунд
Дарья Анисимова
Юрий Батраков
Арсений Батуев
Егор Батурин
Андрей Бочков
Дмитрий Данилов
Максим Долотин
Алексей Дружинин
Константин Евменов
Лолита Князева
Владимир Королев
Антон Костин
Ярослав Котов
Сергей Лагутин
Иван Литвак
Илья Лопатин
Татьяна Максиян
Артур Маликов
Александр Медведев
Михаил
Кирилл Пантелеев
Александр Панюков
Алексей Суслов
Даниэль Торен
Данила Хайдуков
Антон Эсибов |
kewu93/three_styles | ---
license: cc
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 233434033.875
num_examples: 9897
- name: val
num_bytes: 98455383.139
num_examples: 4243
download_size: 339129751
dataset_size: 331889417.014
---
|
Heitorww3344/gusion2 | ---
license: openrail
---
|
makaveli10/indic-superb-whisper | ---
license: mit
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: duration
dtype: int64
splits:
- name: train
num_bytes: 41405593792.419
num_examples: 22271
- name: validation
num_bytes: 1589764095.0
num_examples: 833
download_size: 43368973310
dataset_size: 42995357887.419
---
|
tyzhu/random_letter_find_passage_train30_eval10_rare | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 6451
num_examples: 70
- name: validation
num_bytes: 1147
num_examples: 10
download_size: 6923
dataset_size: 7598
---
# Dataset Card for "random_letter_find_passage_train30_eval10_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mepmepmep/outline | ---
license: afl-3.0
---
|
datahrvoje/twitter_dataset_1713009804 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 25038
num_examples: 55
download_size: 12819
dataset_size: 25038
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davanstrien/dataset_name_imdb_test | ---
dataset_info:
features:
- name: annotation_id
dtype: int64
- name: annotator
dtype: int64
- name: created_at
dtype: string
- name: id
dtype: int64
- name: lead_time
dtype: float64
- name: sentiment
dtype:
class_label:
names:
'0': Negative
'1': Positive
- name: text
dtype: string
- name: updated_at
dtype: string
splits:
- name: train
num_bytes: 5885
num_examples: 4
download_size: 0
dataset_size: 5885
---
# Dataset Card for "dataset_name_imdb_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mir-hossain/conll2003_named_entities_alpaca_format | ---
dataset_info:
features:
- name: instruction
dtype: 'null'
- name: input
dtype: 'null'
- name: output
dtype: 'null'
splits:
- name: train
download_size: 0
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "conll2003_named_entities_alpaca_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SiguienteGlobal/preferenced-agent | ---
dataset_info:
features:
- name: max_length
dtype: int64
- name: size
dtype: int64
splits:
- name: train
num_bytes: 16
num_examples: 1
download_size: 1353
dataset_size: 16
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Heejung89/custom_test | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7786
num_examples: 32
download_size: 4172
dataset_size: 7786
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dog/fuego-20230215-095845-3f00ed | ---
tags:
- fuego
fuego:
id: 20230215-095845-3f00ed
status: done
script: run.py
requirements_file: requirements.txt
space_id: dog/actlearn-fuego-runner
space_hardware: cpu-basic
---
|
nhantruongcse/testing_1500_data_08_to_11_December | ---
dataset_info:
features:
- name: Date
dtype: string
- name: url
dtype: string
- name: Title
dtype: string
- name: Summary
dtype: string
- name: Content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 6491911
num_examples: 1500
download_size: 3384833
dataset_size: 6491911
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sam1120/parking-utcustom-test | ---
dataset_info:
features:
- name: name
dtype: string
- name: pixel_values
dtype: image
- name: labels
dtype: image
splits:
- name: train
num_bytes: 50554745.0
num_examples: 18
download_size: 14599431
dataset_size: 50554745.0
---
# Dataset Card for "parking-utcustom-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Phaedrus/rsna_fixed | ---
dataset_info:
features:
- name: image
dtype: image
- name: label1
dtype: image
- name: label2
dtype: image
- name: label3
dtype: image
- name: label4
dtype: image
- name: label5
dtype: image
- name: label6
dtype: image
- name: label7
dtype: image
- name: label8
dtype: image
- name: label9
dtype: image
- name: label10
dtype: image
- name: label11
dtype: image
- name: label12
dtype: image
- name: label13
dtype: image
- name: label14
dtype: image
splits:
- name: train
num_bytes: 29579297463.0
num_examples: 2000
download_size: 1123982789
dataset_size: 29579297463.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rsna_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mHossain/merge_new_para_detection_data_v6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 18268704.9
num_examples: 108000
- name: test
num_bytes: 2029856.1
num_examples: 12000
download_size: 9186455
dataset_size: 20298561.0
---
# Dataset Card for "merge_new_para_detection_data_v6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
semeru/code-code-MethodGeneration | ---
license: mit
Programminglanguage: "python"
version: "N/A"
Date: "Codesearchnet(Jun 2020 - paper release date)"
Contaminated: "Very Likely"
Size: "Standard Tokenizer (TreeSitter)"
---
### Dataset is imported from CodeXGLUE and pre-processed using their script.
# Where to find in Semeru:
The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/Method-Generation/dataset/codexglue_method_generation in Semeru
# CodeXGLUE -- Method Generation
Here is the introduction and pipeline for the method generation task.
## Task Definition
Method generation is the prediction of a method body implementation conditioned on a signature, a docstring, and any additional context.
## Dataset
We use the CodeSearchNet Python dataset. The CodeSearchNet repositories are re-downloaded to extract all the methods, including their signatures, docstrings, and bodies. We remove methods that don't have docstrings or whose names contain 'test'. We preserve the context around each method as auxiliary information, since generating a method body from only its signature/docstring is a genuinely difficult task. We also apply literal normalization for a better user experience.
### Data Format
The data format of each line in `train/dev/test.jsonl` is:
```json
{
"signature": "def do_transform(self, v=<NUM_LIT:1>):",
"body": "if not self.transform:<EOL><INDENT>return<EOL><DEDENT>try:<EOL><INDENT>self.latest_value = utils.Transform ...",
"docstring": "Apply the transformation (if it exists) to the latest_value",
"id": "f19:c4:m1"
}
```
The `id` indicates where you can find this method in the raw data. In this instance, it refers to the 2nd method in the 5th class in the 20th file (indices are zero-based). We apply literal normalization to the function signature and body, replace `\n` with `<EOL>`, and track indentation with `<INDENT>` and `<DEDENT>` tokens.
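For illustration, an `id` like `f19:c4:m1` can be decoded with a small helper such as the one below (a hypothetical sketch, not part of the released scripts):

```python
def parse_method_id(method_id):
    """Decode ids like 'f19:c4:m1' into zero-based (file, class, method) indices.

    Segments are optional: a top-level function may have no class part.
    """
    names = {"f": "file", "c": "class", "m": "method"}
    out = {}
    for part in method_id.split(":"):
        kind, index = part[0], int(part[1:])
        out[names[kind]] = index
    return out

print(parse_method_id("f19:c4:m1"))
# -> {'file': 19, 'class': 4, 'method': 1}
```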
### Data Statistics
Data statistics are shown in the below table.
| Data Split | #Instances |
| ----------- | :---------: |
| Train | 893,538 |
| Dev | 20,000 |
| Test | 20,000 |
## Reference
<pre><code>@article{clement2021long,
title={Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy},
author={Clement, Colin B and Lu, Shuai and Liu, Xiaoyu and Tufano, Michele and Drain, Dawn and Duan, Nan and Sundaresan, Neel and Svyatkovskiy, Alexey},
journal={arXiv preprint arXiv:2109.08780},
year={2021}
}</code></pre>
|
peter-h-o-r-v/autocast-initiative | ---
license: artistic-2.0
pretty_name: The Autocast Initiative
tags:
- art
- sound
- podcast
- podcasting
---
# The Autocast Initiative
This dataset archives, in real time, podcasts that identify with the principle of autocasting as their method for sharing audio files with an audience of subscribers.
All contributors are volunteers.
## The Principles of Autocasting
* The content is primarily not created.
* Neither the files nor the RSS feed is manipulated after publishing, other than to correct mistakes.
* * The "episode description" is the exception to the above. Use this field however you please.
* No method is to be considered "too low-effort" when it comes to generating audio files.
* Creators of monetized content are encouraged to commit scrambled content and provide a means of unscrambling as they see fit.
* * Further monetization is encouraged.
* Get paid if you can.
## How to contribute
Create a folder for your autocast as so:
```
/archive/[Name of your feed]/
```
Do not substitute special characters (if possible).
In this folder, include your episodes as well as snapshots of your RSS feed at the time of publishing (if possible):
```
/archive/[Name of your feed]/[001].mp3 // or whichever format you use
/archive/[Name of your feed]/[001].xml
/archive/[Name of your feed]/[002].mp3 // ...
/archive/[Name of your feed]/[002].xml
...
/archive/[Name of your feed]/[00n].mp3 // ...
/archive/[Name of your feed]/[00n].xml
...
```
If you intend to publish more than 1000 episodes in a single feed, figure it out (responsibly) |
yujiepan/no_robots_test400 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
- name: prompt_id
dtype: string
splits:
- name: train
num_bytes: 222353
num_examples: 400
download_size: 139530
dataset_size: 222353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "no_robots_test400"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
This is a subset of "no_robots", selecting 400 questions from the test set.
| category | messages |
|:-----------|-----------:|
| Brainstorm | 36 |
| Chat | 101 |
| Classify | 16 |
| Closed QA | 15 |
| Coding | 16 |
| Extract | 7 |
| Generation | 129 |
| Open QA | 34 |
| Rewrite | 21 |
| Summarize | 25 |
Code:
```python
import pandas as pd
import numpy as np
import numpy.random
from datasets import load_dataset, Dataset
from copy import deepcopy
def get_norobot_dataset():
    ds = load_dataset('HuggingFaceH4/no_robots')
    all_test_data = []
    for sample in ds['test_sft']:
        sample: dict
        for i, message in enumerate(sample['messages']):
            if message['role'] == 'user':
                item = dict(
                    messages=deepcopy(sample['messages'][:i + 1]),
                    category=sample['category'],
                    prompt_id=sample['prompt_id'],
                )
                all_test_data.append(item)
    return Dataset.from_list(all_test_data)

dataset = get_norobot_dataset().to_pandas()
dataset.groupby('category').count()
dataset['_sort_key'] = dataset['messages'].map(str)
dataset = dataset.sort_values(['_sort_key'])

subset = []
for category, group_df in sorted(dataset.groupby('category')):
    n = int(len(group_df) * 0.603)
    if n <= 20:
        n = len(group_df)
    indices = np.random.default_rng(seed=42).choice(len(group_df), size=n, replace=False)
    subset.append(group_df.iloc[indices])

df = pd.concat(subset)
df = df.drop(columns=['_sort_key'])
df = df.reset_index(drop=True)
print(len(df))
print(df.groupby('category').count().to_string())
Dataset.from_pandas(df).push_to_hub('yujiepan/no_robots_test400')
```
|
Singularity4-2/goblet | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 8449447.0
num_examples: 200
- name: validation
num_bytes: 965072.0
num_examples: 23
download_size: 9419595
dataset_size: 9414519.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
naomifan29/wilhelmm | ---
license: openrail++
---
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_6_500 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 1723
num_examples: 63
download_size: 0
dataset_size: 1723
---
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_6_500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-3145a565-e770-4a8b-a969-f2fafcc9a1c0-3129 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
csdc-atl/query-document-retrieval-full | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: query
sequence: string
- name: positive
dtype: string
- name: negative
sequence: string
splits:
- name: train
num_bytes: 7739527819
num_examples: 468802
download_size: 3196560243
dataset_size: 7739527819
---
|
maywell/ELLL_KO_ONLY_100k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 107274919
num_examples: 100000
download_size: 67047505
dataset_size: 107274919
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ELLL_KO_ONLY_100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-15000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1109754
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MrOvkill/svg-positional-shapes | ---
license: apache-2.0
---
# Summary
This dataset is composed of randomly generated SVG polygons, each given a caption that specifically mentions the position and color of the objects.
# Checkpoints
**07-14-2024**. Ew, green stuff! - Moldy is now up and running. It's going at about 1 img&desc per second. 16k rows uploaded. P.S. Make that 32k! <3
# Roadmap ( Unsorted )
*. Create visualizer - SVG images are picky, and you need a somewhat powerful graphics system to use them well. Or a web browser.
*. Trim - As stated below, a sizeable minority of the available images are "junk", because I am one guy writing this in his spare time. I will, however, use a vision model to check images individually.
*. Expand - Obviously, as I work on the other items, I will continuously build and improve upon this dataset. The goal is 1 million rows.
# Safety
This dataset is randomly generated using Python, XML ETree for SVG manipulation, and Together Computer for inference. This is RANDOM, UNCLEANED data - I watched a lot of it during the generation process, and I saw many visually appealing and potentially train-worthy examples, but I also saw many junk or low-quality ones (a rough, arbitrary guess: 10-30%).
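For a rough picture of what the pipeline above might look like, here is a minimal, hypothetical sketch: a random polygon built with `xml.etree.ElementTree` plus a templated caption naming its color and position. The actual generation code and the model-based captioning step are not part of this card; every name below is illustrative only.

```python
import random
import xml.etree.ElementTree as ET

COLORS = ["red", "green", "blue", "orange", "purple"]

def random_polygon_svg(seed, width=256, height=256, n_points=5):
    """Generate one random polygon SVG and a caption that names its
    color and rough position (a sketch of the dataset's idea, not the
    original generator)."""
    rng = random.Random(seed)
    color = rng.choice(COLORS)
    points = [(rng.randint(0, width), rng.randint(0, height))
              for _ in range(n_points)]

    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(width), height=str(height))
    ET.SubElement(svg, "polygon",
                  points=" ".join(f"{x},{y}" for x, y in points),
                  fill=color)

    # Describe the centroid's position in plain words for the caption.
    cx = sum(x for x, _ in points) / n_points
    cy = sum(y for _, y in points) / n_points
    horiz = "left" if cx < width / 3 else "right" if cx > 2 * width / 3 else "center"
    vert = "top" if cy < height / 3 else "bottom" if cy > 2 * height / 3 else "middle"
    caption = f"A {color} polygon in the {vert}-{horiz} of the image."

    return ET.tostring(svg, encoding="unicode"), caption
```

In the real dataset the caption comes from an inference API rather than a template, but the shape of each row (SVG markup paired with a position- and color-aware description) is the same.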
|
EsilaAycill/npc-llama2-230k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 45978
num_examples: 229
download_size: 23935
dataset_size: 45978
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
natyou/freshqa_10_06 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: id
dtype: int64
- name: split
dtype: string
- name: question
dtype: string
- name: effective_year
dtype: string
- name: next_review
dtype: string
- name: false_premise
dtype: bool
- name: num_hops
dtype: string
- name: fact_type
dtype: string
- name: source
dtype: string
- name: answer_0
dtype: string
- name: answer_1
dtype: string
- name: answer_2
dtype: string
- name: answer_3
dtype: string
- name: answer_4
dtype: string
- name: answer_5
dtype: string
- name: answer_6
dtype: string
- name: answer_7
dtype: string
- name: answer_8
dtype: string
- name: answer_9
dtype: string
- name: note
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 192891
num_examples: 500
- name: dev
num_bytes: 39203
num_examples: 100
download_size: 129810
dataset_size: 232094
---
# Dataset Card for "freshqa_10_06"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b | ---
pretty_name: Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [luffycodes/higgs-llama-vicuna-ep25-70b](https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-07T00:36:52.545264](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b_public/blob/main/results_2023-11-07T00-36-52.545264.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4165268456375839,\n\
\ \"em_stderr\": 0.005048607283288401,\n \"f1\": 0.49208158557047166,\n\
\ \"f1_stderr\": 0.004763681838536193,\n \"acc\": 0.5761731430558057,\n\
\ \"acc_stderr\": 0.012100109818181048\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.4165268456375839,\n \"em_stderr\": 0.005048607283288401,\n\
\ \"f1\": 0.49208158557047166,\n \"f1_stderr\": 0.004763681838536193\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3457164518574678,\n \
\ \"acc_stderr\": 0.013100422990441578\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8066298342541437,\n \"acc_stderr\": 0.01109979664592052\n\
\ }\n}\n```"
repo_url: https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_11_07T00_36_52.545264
path:
- '**/details_harness|drop|3_2023-11-07T00-36-52.545264.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-07T00-36-52.545264.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_07T00_36_52.545264
path:
- '**/details_harness|gsm8k|5_2023-11-07T00-36-52.545264.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-07T00-36-52.545264.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_07T00_36_52.545264
path:
- '**/details_harness|winogrande|5_2023-11-07T00-36-52.545264.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-07T00-36-52.545264.parquet'
- config_name: results
data_files:
- split: 2023_11_07T00_36_52.545264
path:
- results_2023-11-07T00-36-52.545264.parquet
- split: latest
path:
- results_2023-11-07T00-36-52.545264.parquet
---
# Dataset Card for Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [luffycodes/higgs-llama-vicuna-ep25-70b](https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-07T00:36:52.545264](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b_public/blob/main/results_2023-11-07T00-36-52.545264.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4165268456375839,
"em_stderr": 0.005048607283288401,
"f1": 0.49208158557047166,
"f1_stderr": 0.004763681838536193,
"acc": 0.5761731430558057,
"acc_stderr": 0.012100109818181048
},
"harness|drop|3": {
"em": 0.4165268456375839,
"em_stderr": 0.005048607283288401,
"f1": 0.49208158557047166,
"f1_stderr": 0.004763681838536193
},
"harness|gsm8k|5": {
"acc": 0.3457164518574678,
"acc_stderr": 0.013100422990441578
},
"harness|winogrande|5": {
"acc": 0.8066298342541437,
"acc_stderr": 0.01109979664592052
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_8 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_8
num_bytes: 202362
num_examples: 8
download_size: 45171
dataset_size: 202362
---
# Dataset Card for "VQAv2_sample_validation_facebook_opt_2.7b_VQAv2_visclues_ns_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wukx/n-grams_sample_probability | ---
license: openrail
---
|
autoevaluate/autoeval-eval-phpthinh__examplei-match-bd10ea-1748761024 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/examplei
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: ['f1']
dataset_name: phpthinh/examplei
dataset_config: match
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: phpthinh/examplei
* Config: match
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-b079e4-1737160612 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: Adrian/distilbert-base-uncased-finetuned-squad-colab
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Adrian/distilbert-base-uncased-finetuned-squad-colab
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@saad](https://huggingface.co/saad) for evaluating this model. |
Dundalia/TWOLAR_ds | ---
license: mit
dataset_info:
features:
- name: query
dtype: string
- name: query_id
dtype: string
- name: permutation
dtype: string
- name: retrieved_passages
list:
- name: docid
dtype: string
- name: original rank
dtype: int64
- name: rank
dtype: int64
- name: text
dtype: string
- name: source
dtype: string
- name: query_type
dtype: string
splits:
- name: train
num_bytes: 237864347
num_examples: 19000
- name: test
num_bytes: 12524226
num_examples: 1000
download_size: 112732214
dataset_size: 250388573
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
SergeiZu/TweetEvalForLlama | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: llama_training_data
dtype: string
splits:
- name: train
num_bytes: 8079531
num_examples: 21279
download_size: 3809565
dataset_size: 8079531
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|