| datasetId | card |
|---|---|
Multimodal-Fatima/VQAv2_sample_validation_embeddings | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: int64
- name: vision_embeddings
sequence: float32
splits:
- name: openai_clip_vit_large_patch14
num_bytes: 158671010.0
num_examples: 1000
download_size: 156748467
dataset_size: 158671010.0
---
# Dataset Card for "VQAv2_sample_validation_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aisyahhrazak/crawl-worldofbuzz | ---
language:
- en
---
About
- Data scraped from https://worldofbuzz.com |
rafaelsandroni/dpo-mix-pt-br | ---
dataset_info:
features:
- name: dataset
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_rating
dtype: float64
- name: rejected_rating
dtype: float64
splits:
- name: train
num_bytes: 41362946
num_examples: 6750
- name: test
num_bytes: 4586808
num_examples: 750
- name: t
num_bytes: 23472
num_examples: 10
download_size: 24253124
dataset_size: 45973226
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: t
path: data/t-*
---
|
Nicolas-BZRD/uld_loss_Llama-2-7b-chat-hf-FairytaleQA | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
dtype: string
- name: answers_generated
dtype: string
splits:
- name: train
num_bytes: 10266684
num_examples: 8548
- name: validation
num_bytes: 1156729
num_examples: 1025
download_size: 2646092
dataset_size: 11423413
---
# Dataset Card for "uld_loss_Llama-2-7b-chat-hf-FairytaleQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/code_instructions_standardized_cluster_2_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 39417733
num_examples: 40854
download_size: 17589450
dataset_size: 39417733
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code_instructions_standardized_cluster_2_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sakib131/bangla-conv-summary-dataset | ---
dataset_info:
features:
- name: id
dtype: string
- name: dialogue_bn
dtype: string
- name: summary_bn
dtype: string
splits:
- name: train
num_bytes: 24006508
num_examples: 14732
- name: test
num_bytes: 1352231
num_examples: 819
- name: validation
num_bytes: 1311704
num_examples: 818
download_size: 10303952
dataset_size: 26670443
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
AdapterOcean/python3-standardized_cluster_0_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 14844463
num_examples: 11266
download_size: 0
dataset_size: 14844463
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ummonk/GMUSpeechAccentArchive | ---
license: cc-by-nc-sa-2.0
language:
- en
pretty_name: GMU Speech Accent Archive
size_categories:
- 1K<n<10K
--- |
Babypotatotang/logo-captioning-BLIP-BrandInfoWBP | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 321581037.08
num_examples: 24080
- name: test
num_bytes: 82453208.54
num_examples: 6021
download_size: 265975818
dataset_size: 404034245.62
---
# Dataset Card for "logo-captioning-BLIP-BrandInfoWBP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
muneebmj0/muneeb_big_data | ---
license: llama2
---
|
Amir13/conll2003-persian | ---
annotations_creators:
- crowdsourced
language:
- fa
language_creators:
- machine-generated
license:
- other
multilinguality:
- monolingual
pretty_name: conll2003-persian
size_categories:
- 10K<n<100K
source_datasets:
- extended|conll2003
tags:
- named entity recognition
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- col_mapping:
ner_tags: tags
tokens: tokens
config: conll2003
metrics:
- name: seqeval
type: seqeval
splits:
eval_split: test
train_split: train
task: token-classification
task_id: entity_extraction
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the datasets and models in this repository, please cite the following work.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
doi = {10.48550/ARXIV.2302.09611},
url = {https://arxiv.org/abs/2302.09611},
author = {Sartipi, Amir and Fatemi, Afsaneh},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
[More Information Needed]
|
gr8brit/noahnerd | ---
license: other
---
|
hiennguyennq/test_V1 | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_digitous__Javalion-GPTJ | ---
pretty_name: Evaluation run of digitous/Javalion-GPTJ
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [digitous/Javalion-GPTJ](https://huggingface.co/digitous/Javalion-GPTJ) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_digitous__Javalion-GPTJ\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T16:30:40.510452](https://huggingface.co/datasets/open-llm-leaderboard/details_digitous__Javalion-GPTJ/blob/main/results_2023-10-17T16-30-40.510452.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0008389261744966443,\n\
\ \"em_stderr\": 0.0002964962989801232,\n \"f1\": 0.04887374161073851,\n\
\ \"f1_stderr\": 0.0012121662940147047,\n \"acc\": 0.3347011350709951,\n\
\ \"acc_stderr\": 0.008454252569236846\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0008389261744966443,\n \"em_stderr\": 0.0002964962989801232,\n\
\ \"f1\": 0.04887374161073851,\n \"f1_stderr\": 0.0012121662940147047\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.016679302501895376,\n \
\ \"acc_stderr\": 0.0035275958887224543\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6527229676400947,\n \"acc_stderr\": 0.013380909249751237\n\
\ }\n}\n```"
repo_url: https://huggingface.co/digitous/Javalion-GPTJ
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T16_30_40.510452
path:
- '**/details_harness|drop|3_2023-10-17T16-30-40.510452.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T16-30-40.510452.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T16_30_40.510452
path:
- '**/details_harness|gsm8k|5_2023-10-17T16-30-40.510452.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T16-30-40.510452.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T16_30_40.510452
path:
- '**/details_harness|winogrande|5_2023-10-17T16-30-40.510452.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T16-30-40.510452.parquet'
- config_name: results
data_files:
- split: 2023_10_17T16_30_40.510452
path:
- results_2023-10-17T16-30-40.510452.parquet
- split: latest
path:
- results_2023-10-17T16-30-40.510452.parquet
---
# Dataset Card for Evaluation run of digitous/Javalion-GPTJ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/digitous/Javalion-GPTJ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [digitous/Javalion-GPTJ](https://huggingface.co/digitous/Javalion-GPTJ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_digitous__Javalion-GPTJ",
"harness_winogrande_5",
split="train")
```
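The aggregated metrics can be loaded the same way; here is a minimal sketch, assuming the `results` config and `latest` split defined in the YAML header of this card:
```python
from datasets import load_dataset

# The "results" config holds the aggregated metrics; the "latest" split always
# points to the most recent results file (see the configs section above).
results = load_dataset(
    "open-llm-leaderboard/details_digitous__Javalion-GPTJ",
    "results",
    split="latest",
)
print(results[0])
```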
## Latest results
These are the [latest results from run 2023-10-17T16:30:40.510452](https://huggingface.co/datasets/open-llm-leaderboard/details_digitous__Javalion-GPTJ/blob/main/results_2023-10-17T16-30-40.510452.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0008389261744966443,
"em_stderr": 0.0002964962989801232,
"f1": 0.04887374161073851,
"f1_stderr": 0.0012121662940147047,
"acc": 0.3347011350709951,
"acc_stderr": 0.008454252569236846
},
"harness|drop|3": {
"em": 0.0008389261744966443,
"em_stderr": 0.0002964962989801232,
"f1": 0.04887374161073851,
"f1_stderr": 0.0012121662940147047
},
"harness|gsm8k|5": {
"acc": 0.016679302501895376,
"acc_stderr": 0.0035275958887224543
},
"harness|winogrande|5": {
"acc": 0.6527229676400947,
"acc_stderr": 0.013380909249751237
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
eliwill/Watts | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5040818
num_examples: 17390
- name: validation
num_bytes: 99856
num_examples: 399
download_size: 2976066
dataset_size: 5140674
---
# Dataset Card for "Watts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
A3KLFG/1232456 | ---
license: apache-2.0
---
|
dianaCerni/DrepturileStudentului | ---
license: unknown
---
|
transcendingvictor/delphi-llama2-400k-validation-logprobs | ---
dataset_info:
features:
- name: logprobs
sequence: float64
splits:
- name: validation
num_bytes: 45818277
num_examples: 10982
download_size: 37682254
dataset_size: 45818277
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
ChunB1/kindle | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1301861
num_examples: 10000
download_size: 715932
dataset_size: 1301861
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
distilled-from-one-sec-cv12/chunk_128 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1164189068
num_examples: 226849
download_size: 1189606175
dataset_size: 1164189068
---
# Dataset Card for "chunk_128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Pipper/SolFuncs | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: comments
dtype: string
- name: code_string
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 15153841436.646095
num_examples: 591941
- name: test
num_bytes: 1894239779.676952
num_examples: 73993
- name: valid
num_bytes: 1894239779.676952
num_examples: 73993
download_size: 5432099769
dataset_size: 18942320996.0
---
# Dataset Card for "SolFuncs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/9bc865b4 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1354
dataset_size: 188
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "9bc865b4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hypersniper/riddles_v1 | ---
license: apache-2.0
language:
- en
---
# Riddle Processing with GPT-4
[Buy me Ko-fi](https://ko-fi.com/hypersniper)
## Credits
All credit for the original riddles goes to [crawsome's GitHub repository](https://github.com/crawsome/riddles).
## Project Overview
This project involves processing each riddle using GPT-4. The correct answers were provided to the model to generate a desirable output focused on reasoning and logical breakdown.
- `riddles.json` (`riddles_1`) — 386 samples, sourced from [crawsome's GitHub repository](https://github.com/crawsome/riddles).
- `riddles_2.json` — 83 samples, sourced from various Google searches.
### Prompt Structure
The following prompt was used to guide GPT-4 responses:
> "Answer the user's question using the answer. Do not mention you have the answer to the riddle. You are to answer it as if you never got the answer. Prioritize educational insights by breaking down the logic behind your decisions. Make sure to cover all relevant topics in a logical order."
### Goal
The primary goal of this dataset is to fine-tune a language model to output structured, logical reasoning.
## Performance
Despite its small size, this dataset yields noteworthy results: a model fine-tuned on it breaks questions down logically and provides clear, reasoned answers.
## Example Outputs
### Fine-Tuning Details
- **Model:** Open Orca Mistral 7B
- **Fine-Tuning:** One epoch with the specified dataset
### Sample Questions and Outputs
#### Riddle 1
**Question:** _"I am easy to lift, but hard to throw. What am I?"_
**Output:**
The riddle describes an object that is lightweight and easy to lift, yet not suitable for throwing due to its properties, like lacking aerodynamics or a solid shape. A feather fits this description as it is lightweight, easy to lift, but not designed for throwing, being used instead for insulation, flight, and decoration.
**Correct Answer:** Feather
#### Riddle 2
**Question:** _"Rachel goes to the supermarket and buys 10 tomatoes. Unfortunately, on the way back home, all but 9 get ruined. How many tomatoes are left in a good condition?"_
**Output:**
Breaking down the problem: Rachel buys 10 tomatoes, but only 1 gets ruined, as 'all but 9' implies. Therefore, 9 tomatoes remain in good condition.
**Correct Answer:** 9
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_94 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1235253004.0
num_examples: 242587
download_size: 1261765733
dataset_size: 1235253004.0
---
# Dataset Card for "chunk_94"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigcode/humanevalpack | ---
license: mit
pretty_name: HumanEvalPack
language_creators:
- expert-generated
multilinguality:
- multilingual
language:
- code
tags:
- code
---

# Dataset Card for HumanEvalPack
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigcode-project/octopack
- **Paper:** [OctoPack: Instruction Tuning Code Large Language Models](https://arxiv.org/abs/2308.07124)
- **Point of Contact:** [Niklas Muennighoff](mailto:n.muennighoff@gmail.com)
### Dataset Summary
> HumanEvalPack is an extension of OpenAI's HumanEval to cover 6 total languages across 3 tasks. The Python split is exactly the same as OpenAI's Python HumanEval. The other splits are translated by humans (similar to HumanEval-X but with additional cleaning, see [here](https://github.com/bigcode-project/octopack/tree/main/evaluation/create/humaneval-x#modifications-muennighoff)). Refer to the [OctoPack paper](https://arxiv.org/abs/2308.07124) for more details.
>
- **Languages:** Python, JavaScript, Java, Go, C++, Rust
- **OctoPack🐙🎒:**
<table>
<tr>
<th>Data</th>
<td><a href=https://huggingface.co/datasets/bigcode/commitpack>CommitPack</a></td>
<td>4TB of GitHub commits across 350 programming languages</td>
</tr>
<tr>
<th></th>
<td><a href=https://huggingface.co/datasets/bigcode/commitpackft>CommitPackFT</a></td>
<td>Filtered version of CommitPack for high-quality commit messages that resemble instructions</td>
</tr>
<tr>
<th>Model</th>
<td><a href=https://huggingface.co/bigcode/octocoder>OctoCoder</a></td>
<td>StarCoder (16B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th></th>
<td><a href=https://huggingface.co/bigcode/octogeex>OctoGeeX</a></td>
<td>CodeGeeX2 (6B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th>Evaluation</th>
<td><a href=https://huggingface.co/datasets/bigcode/humanevalpack>HumanEvalPack</a></td>
<td>Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages</td>
</tr>
</table>
## Usage
```python
# pip install -q datasets
from datasets import load_dataset
# Languages: "python", "js", "java", "go", "cpp", "rust"
ds = load_dataset("bigcode/humanevalpack", "python")["test"]
ds[0]
```
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
"task_id": "Python/0",
"prompt": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n \"\"\" Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True\n \"\"\"\n",
"declaration": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n",
"canonical_solution": " for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = abs(elem - elem2)\n if distance < threshold:\n return True\n\n return False\n",
"buggy_solution": " for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = elem - elem2\n if distance < threshold:\n return True\n\n return False\n",
"bug_type": "missing logic",
"failure_symptoms": "incorrect output",
"entry_point": "has_close_elements",
"import": ""
"test_setup": ""
"test": "\n\n\n\n\ndef check(has_close_elements):\n assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False\n assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True\n assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False\n assert has_close_elements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True\n assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True\n assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False\n\ncheck(has_close_elements)",
"example_test": "def check(has_close_elements):\n assert has_close_elements([1.0, 2.0, 3.0], 0.5) == False\n assert has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) == True\ncheck(has_close_elements)\n",
"signature": "has_close_elements(numbers: List[float], threshold: float) -> bool",
"docstring": "Check if in given list of numbers, are any two numbers closer to each other than\ngiven threshold.\n>>> has_close_elements([1.0, 2.0, 3.0], 0.5)\nFalse\n>>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\nTrue",
"instruction": "Write a Python function `has_close_elements(numbers: List[float], threshold: float) -> bool` to solve the following problem:\nCheck if in given list of numbers, are any two numbers closer to each other than\ngiven threshold.\n>>> has_close_elements([1.0, 2.0, 3.0], 0.5)\nFalse\n>>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\nTrue"
}
```
### Data Fields
The data fields are the same among all splits:
- `task_id`: Indicates the language (Python/JavaScript/Java/Go/C++/Rust) and task id (from 0 to 163) of the problem
- `prompt`: the prompt for models relying on code continuation
- `declaration`: the declaration of the function (same as prompt but without the docstring)
- `canonical_solution`: the correct solution passing all unit tests for the problem
- `buggy_solution`: same as `canonical_solution` but with a subtle human-written bug causing the unit tests to fail
- `bug_type`: the type of the bug in `buggy_solution` (one of [`missing logic`, `excess logic`, `value misuse`, `operator misuse`, `variable misuse`, `function misuse`])
- `failure_symptoms`: the problem the bug causes (one of [`incorrect output`, `stackoverflow`, `infinite loop`])
- `entry_point`: the name of the function
- `import`: imports necessary for the solution (only present for Go)
- `test_setup`: imports necessary for the test execution (only present for Go)
- `test`: the unit tests for the problem
- `example_test`: additional unit tests different from `test` that could be e.g. provided to the model (these are not used in the paper)
- `signature`: the signature of the function
- `docstring`: the docstring describing the problem
- `instruction`: an instruction for HumanEvalSynthesize in the form `Write a {language_name} function {signature} to solve the following problem:\n{docstring}`
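To illustrate how these fields fit together, here is a minimal, unsandboxed sketch for the Python config that assembles the buggy variant of the first problem with its unit tests and runs it. Real evaluation should execute untrusted code in a sandbox, and some problems fail with infinite loops rather than assertion errors.
```python
from datasets import load_dataset

ds = load_dataset("bigcode/humanevalpack", "python")["test"]
ex = ds[0]

# `declaration` + `buggy_solution` forms a complete (but buggy) program;
# appending `test` lets the unit tests expose the bug.
program = ex["declaration"] + ex["buggy_solution"] + "\n" + ex["test"]

try:
    exec(program, {})
except AssertionError:
    print(f"{ex['entry_point']}: '{ex['bug_type']}' bug caught by the unit tests")
```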
## Citation Information
```bibtex
@article{muennighoff2023octopack,
title={OctoPack: Instruction Tuning Code Large Language Models},
author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre},
journal={arXiv preprint arXiv:2308.07124},
year={2023}
}
``` |
starbotica/markkistler | ---
license: unknown
---
|
KaioSan/Blizto | ---
license: openrail
---
|
roleplay4fun/20240327_aesir_segmented_experiment_00 | ---
dataset_info:
features:
- name: segments
list:
- name: label
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 7722816
num_examples: 980
download_size: 4264607
dataset_size: 7722816
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Luciya/llama-2-nuv-intent-big | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 850629
num_examples: 1563
download_size: 131113
dataset_size: 850629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-big"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_qa_context_v5_full_recite_ans_sent_no_permute_rerun | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4850217.0
num_examples: 2385
- name: validation
num_bytes: 631113
num_examples: 300
download_size: 1204825
dataset_size: 5481330.0
---
# Dataset Card for "squad_qa_context_v5_full_recite_ans_sent_no_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/citrinne_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of citrinne/シトリニカ (Fire Emblem)
This is the dataset of citrinne/シトリニカ (Fire Emblem), containing 144 images and their tags.
The core tags of this character are `short_hair, blonde_hair, red_eyes, hair_ornament, breasts, bangs, medium_breasts, earrings`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 144 | 234.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/citrinne_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 144 | 127.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/citrinne_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 364 | 280.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/citrinne_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 144 | 207.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/citrinne_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 364 | 415.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/citrinne_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/citrinne_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, cleavage, smile, solo, detached_sleeves, holding_book, looking_at_viewer, blush, brown_dress, necklace, open_mouth |
| 1 | 8 |  |  |  |  |  | 1girl, dress, smile, solo, cleavage, upper_body, bare_shoulders, detached_sleeves, looking_at_viewer, necklace, official_alternate_costume |
| 2 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, navel, solo, long_sleeves, midriff, closed_mouth, crop_top, smile, stomach, arms_up, collarbone, shirt, simple_background, blush, choker, cleavage, on_back, pants, small_breasts, thighhighs, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | smile | solo | detached_sleeves | holding_book | looking_at_viewer | blush | brown_dress | necklace | open_mouth | dress | upper_body | bare_shoulders | official_alternate_costume | navel | long_sleeves | midriff | closed_mouth | crop_top | stomach | arms_up | collarbone | shirt | simple_background | choker | on_back | pants | small_breasts | thighhighs | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:-------|:-------------------|:---------------|:--------------------|:--------|:--------------|:-----------|:-------------|:--------|:-------------|:-----------------|:-----------------------------|:--------|:---------------|:----------|:---------------|:-----------|:----------|:----------|:-------------|:--------|:--------------------|:---------|:----------|:--------|:----------------|:-------------|:-------------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | | X | | | X | | X | X | X | X | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Idontknowwhattocallmyself/yfmpuff | ---
license: openrail
---
|
baptistecolle/sam-controlnet | ---
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: sentences
struct:
- name: tokens
list: string
- name: raw
dtype: string
- name: imgid
dtype: int32
- name: sentid
dtype: int32
- name: cocoid
dtype: int32
- name: masks
sequence:
sequence:
sequence: bool
splits:
- name: train
num_bytes: 24048007.0
num_examples: 10
download_size: 1501863
dataset_size: 24048007.0
---
# Dataset Card for "sam-controlnet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
claws-lab/XLingHealth | ---
dataset_info:
features:
- name: question_English
dtype: string
- name: answer_English
dtype: string
- name: question_Chinese
dtype: string
- name: answer_Chinese
dtype: string
- name: question_Spanish
dtype: string
- name: answer_Spanish
dtype: string
- name: question_Hindi
dtype: string
- name: answer_Hindi
dtype: string
- name: answer_ids
dtype: int64
- name: label
dtype: int64
- name: id
dtype: int64
splits:
- name: liveqa
num_bytes: 7181107
num_examples: 1230
- name: medicationqa
num_bytes: 8507105
num_examples: 3450
- name: healthqa
num_bytes: 82047006
num_examples: 11340
download_size: 25265727
dataset_size: 97735218
license: apache-2.0
task_categories:
- text-classification
- text-generation
- question-answering
language:
- en
- es
- zh
- hi
tags:
- medical
- health
- healthcare
pretty_name: XLingHealth
size_categories:
- 10K<n<100K
---
# Dataset Card for "XLingHealth"
[XLingHealth](https://claws-lab.github.io/XLingEval/) is a **Cross-Ling**ual **Health**care benchmark for clinical health inquiry that features the top four [most spoken languages in the world](https://en.wikipedia.org/wiki/List_of_languages_by_total_number_of_speakers): English, Spanish, Chinese, and Hindi.
## Statistics
| Dataset | \#Examples | \#Words (Q) | \#Words (A) |
|--------------|------------|-------------------|---------------------|
| HealthQA | 1,134 | 7.72 ± 2.41 | 242.85 ± 221.88 |
| LiveQA | 246 | 41.76 ± 37.38 | 115.25 ± 112.75 |
| MedicationQA | 690 | 6.86 ± 2.83 | 61.50 ± 69.44 |
- `#Words (Q)` and `#Words (A)` represent the average number of words in the questions and ground-truth answers of the datasets, respectively.
- In the **HealthQA** dataset, each question is already associated with 1 correct answer (termed "positive example") and 9 incorrect/irrelevant answers (termed "negative examples"). Thus, the total number of examples in HealthQA is 11,340.
- **LiveQA** and **MedicationQA** do not provide negative question-answer pairs. Therefore, for each question in these datasets, we randomly sampled 4 responses from the entire set of answers to serve as negative examples. Thus, the total number of examples is 1230 and 3450 for **LiveQA** and **MedicationQA**, respectively.
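A quick way to sanity-check these counts (a minimal sketch, assuming the three splits listed in the metadata above load under the default configuration):
```python
from datasets import load_dataset

ds = load_dataset("claws-lab/XLingHealth")

print(len(ds["healthqa"]))      # 1,134 questions x (1 positive + 9 negatives) = 11,340
print(len(ds["liveqa"]))        # 246 questions x (1 positive + 4 negatives)   = 1,230
print(len(ds["medicationqa"]))  # 690 questions x (1 positive + 4 negatives)   = 3,450
```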
## Introduction
Large language models (LLMs) are transforming the ways the general public accesses and consumes information. Their influence is particularly pronounced in pivotal sectors like healthcare, where lay individuals are increasingly appropriating LLMs as conversational agents for everyday queries. While LLMs demonstrate impressive language understanding and generation proficiencies, concerns regarding their safety remain paramount in these high-stakes domains. Moreover, the development of LLMs is disproportionately focused on English. It remains unclear how these LLMs perform in the context of non-English languages, a gap that is critical for ensuring equity in the real-world use of these systems.

This paper provides a framework to investigate the effectiveness of LLMs as multi-lingual dialogue systems for healthcare queries. Our empirically derived framework XlingEval focuses on three fundamental criteria for evaluating LLM responses to naturalistic human-authored health-related questions: correctness, consistency, and verifiability. Through extensive experiments on four major global languages, including English, Spanish, Chinese, and Hindi, spanning three expert-annotated large health Q&A datasets, and through an amalgamation of algorithmic and human-evaluation strategies, we found a pronounced disparity in LLM responses across these languages, indicating a need for enhanced cross-lingual capabilities.

We further propose XlingHealth, a cross-lingual benchmark for examining the multilingual capabilities of LLMs in the healthcare context. Our findings underscore the pressing need to bolster the cross-lingual capacities of these models, and to provide an equitable information ecosystem accessible to all.
```bibtex
@inproceedings{jin2023better,
title = {Better to Ask in English: Cross-Lingual Evaluation of Large Language Models for Healthcare Queries},
author = {Jin, Yiqiao and Chandra, Mohit and Verma, Gaurav and Hu, Yibo and De Choudhury, Munmun and Kumar, Srijan},
year = {2024},
booktitle = {The Web Conference},
}
```
|
xyy888/in_context_examples | ---
license: mit
---
|
clarin-knext/kpwr-long | ---
license: cc-by-sa-4.0
---
|
Geonmo/gcc12m_caption_only | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1329443791
num_examples: 12423374
download_size: 943024335
dataset_size: 1329443791
---
# Dataset Card for "gcc12m_caption_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
argilla/uf-zephyr-iteration2 | ---
dataset_info:
features:
- name: input
dtype: string
splits:
- name: train
num_bytes: 6238
num_examples: 10
download_size: 6878
dataset_size: 6238
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uf-zephyr-iteration2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceTB/wiki_natural_sciences_college_high_school_students_1k | ---
dataset_info:
features:
- name: top_category
dtype: string
- name: subcategory_1
dtype: string
- name: subcategory_2
dtype: string
- name: subcategory_3
dtype: string
- name: subcategory_4
dtype: string
- name: subcategory_5
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
- name: token_length
dtype: int64
splits:
- name: train
num_bytes: 5677576
num_examples: 1000
download_size: 2912620
dataset_size: 5677576
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_johnsnowlabs__BioLing-7B-Dare | ---
pretty_name: Evaluation run of johnsnowlabs/BioLing-7B-Dare
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [johnsnowlabs/BioLing-7B-Dare](https://huggingface.co/johnsnowlabs/BioLing-7B-Dare)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_johnsnowlabs__BioLing-7B-Dare\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-09T23:35:31.101237](https://huggingface.co/datasets/open-llm-leaderboard/details_johnsnowlabs__BioLing-7B-Dare/blob/main/results_2024-04-09T23-35-31.101237.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6024796549026502,\n\
\ \"acc_stderr\": 0.03310506649883824,\n \"acc_norm\": 0.6093893210665232,\n\
\ \"acc_norm_stderr\": 0.03379950815279291,\n \"mc1\": 0.37576499388004897,\n\
\ \"mc1_stderr\": 0.016954584060214297,\n \"mc2\": 0.5374700664619642,\n\
\ \"mc2_stderr\": 0.015268834464528118\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5725255972696246,\n \"acc_stderr\": 0.014456862944650649,\n\
\ \"acc_norm\": 0.6177474402730375,\n \"acc_norm_stderr\": 0.014200454049979272\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6357299342760406,\n\
\ \"acc_stderr\": 0.004802413919932668,\n \"acc_norm\": 0.8258315076677952,\n\
\ \"acc_norm_stderr\": 0.0037847921724660665\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5185185185185185,\n\
\ \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.5185185185185185,\n\
\ \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\
\ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n\
\ \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.027834912527544064,\n\
\ \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.027834912527544064\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n\
\ \"acc_stderr\": 0.03800968060554858,\n \"acc_norm\": 0.7083333333333334,\n\
\ \"acc_norm_stderr\": 0.03800968060554858\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n\
\ \"acc_stderr\": 0.037242495958177295,\n \"acc_norm\": 0.6069364161849711,\n\
\ \"acc_norm_stderr\": 0.037242495958177295\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.04755129616062946,\n\
\ \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.04755129616062946\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5063829787234042,\n \"acc_stderr\": 0.032683358999363366,\n\
\ \"acc_norm\": 0.5063829787234042,\n \"acc_norm_stderr\": 0.032683358999363366\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n\
\ \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n\
\ \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n\
\ \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3862433862433862,\n \"acc_stderr\": 0.02507598176760168,\n \"\
acc_norm\": 0.3862433862433862,\n \"acc_norm_stderr\": 0.02507598176760168\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n\
\ \"acc_stderr\": 0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n\
\ \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7290322580645161,\n\
\ \"acc_stderr\": 0.025284416114900156,\n \"acc_norm\": 0.7290322580645161,\n\
\ \"acc_norm_stderr\": 0.025284416114900156\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.03510766597959215,\n\
\ \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.03510766597959215\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7212121212121212,\n \"acc_stderr\": 0.03501438706296781,\n\
\ \"acc_norm\": 0.7212121212121212,\n \"acc_norm_stderr\": 0.03501438706296781\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7777777777777778,\n \"acc_stderr\": 0.02962022787479047,\n \"\
acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.02962022787479047\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\
\ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6051282051282051,\n \"acc_stderr\": 0.0247843169421564,\n \
\ \"acc_norm\": 0.6051282051282051,\n \"acc_norm_stderr\": 0.0247843169421564\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547308,\n \
\ \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547308\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135363,\n\
\ \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135363\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8091743119266055,\n \"acc_stderr\": 0.01684767640009109,\n \"\
acc_norm\": 0.8091743119266055,\n \"acc_norm_stderr\": 0.01684767640009109\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5462962962962963,\n \"acc_stderr\": 0.033953227263757976,\n \"\
acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.033953227263757976\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7696078431372549,\n \"acc_stderr\": 0.029554292605695066,\n \"\
acc_norm\": 0.7696078431372549,\n \"acc_norm_stderr\": 0.029554292605695066\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7552742616033755,\n \"acc_stderr\": 0.02798569938703643,\n \
\ \"acc_norm\": 0.7552742616033755,\n \"acc_norm_stderr\": 0.02798569938703643\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6636771300448431,\n\
\ \"acc_stderr\": 0.031708824268455,\n \"acc_norm\": 0.6636771300448431,\n\
\ \"acc_norm_stderr\": 0.031708824268455\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n\
\ \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.743801652892562,\n \"acc_stderr\": 0.03984979653302871,\n \"acc_norm\"\
: 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302871\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n\
\ \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8290598290598291,\n\
\ \"acc_stderr\": 0.0246624968452098,\n \"acc_norm\": 0.8290598290598291,\n\
\ \"acc_norm_stderr\": 0.0246624968452098\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7752234993614304,\n\
\ \"acc_stderr\": 0.014927447101937155,\n \"acc_norm\": 0.7752234993614304,\n\
\ \"acc_norm_stderr\": 0.014927447101937155\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.661849710982659,\n \"acc_stderr\": 0.02546977014940017,\n\
\ \"acc_norm\": 0.661849710982659,\n \"acc_norm_stderr\": 0.02546977014940017\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3005586592178771,\n\
\ \"acc_stderr\": 0.015334566806251164,\n \"acc_norm\": 0.3005586592178771,\n\
\ \"acc_norm_stderr\": 0.015334566806251164\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6699346405228758,\n \"acc_stderr\": 0.0269256546536157,\n\
\ \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.0269256546536157\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6463022508038585,\n\
\ \"acc_stderr\": 0.02715520810320086,\n \"acc_norm\": 0.6463022508038585,\n\
\ \"acc_norm_stderr\": 0.02715520810320086\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6481481481481481,\n \"acc_stderr\": 0.026571483480719964,\n\
\ \"acc_norm\": 0.6481481481481481,\n \"acc_norm_stderr\": 0.026571483480719964\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.425531914893617,\n \"acc_stderr\": 0.029494827600144373,\n \
\ \"acc_norm\": 0.425531914893617,\n \"acc_norm_stderr\": 0.029494827600144373\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41264667535853977,\n\
\ \"acc_stderr\": 0.012573836633799013,\n \"acc_norm\": 0.41264667535853977,\n\
\ \"acc_norm_stderr\": 0.012573836633799013\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6397058823529411,\n \"acc_stderr\": 0.02916312857067073,\n\
\ \"acc_norm\": 0.6397058823529411,\n \"acc_norm_stderr\": 0.02916312857067073\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6029411764705882,\n \"acc_stderr\": 0.019794488900024113,\n \
\ \"acc_norm\": 0.6029411764705882,\n \"acc_norm_stderr\": 0.019794488900024113\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n\
\ \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n\
\ \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291296,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291296\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7711442786069652,\n\
\ \"acc_stderr\": 0.029705284056772432,\n \"acc_norm\": 0.7711442786069652,\n\
\ \"acc_norm_stderr\": 0.029705284056772432\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536955,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536955\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n\
\ \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n\
\ \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.03158149539338734,\n\
\ \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.03158149539338734\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.37576499388004897,\n\
\ \"mc1_stderr\": 0.016954584060214297,\n \"mc2\": 0.5374700664619642,\n\
\ \"mc2_stderr\": 0.015268834464528118\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7695343330702447,\n \"acc_stderr\": 0.011835872164836676\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2630780894617134,\n \
\ \"acc_stderr\": 0.01212817260737593\n }\n}\n```"
repo_url: https://huggingface.co/johnsnowlabs/BioLing-7B-Dare
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|arc:challenge|25_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|gsm8k|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hellaswag|10_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T23-35-31.101237.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T23-35-31.101237.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- '**/details_harness|winogrande|5_2024-04-09T23-35-31.101237.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-09T23-35-31.101237.parquet'
- config_name: results
data_files:
- split: 2024_04_09T23_35_31.101237
path:
- results_2024-04-09T23-35-31.101237.parquet
- split: latest
path:
- results_2024-04-09T23-35-31.101237.parquet
---
# Dataset Card for Evaluation run of johnsnowlabs/BioLing-7B-Dare
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [johnsnowlabs/BioLing-7B-Dare](https://huggingface.co/johnsnowlabs/BioLing-7B-Dare) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_johnsnowlabs__BioLing-7B-Dare",
"harness_winogrande_5",
	split="latest")
```
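The aggregated metrics can be loaded in the same way through the `results` configuration; a minimal sketch, assuming the `latest` split defined in the configs above:
```python
from datasets import load_dataset

# Load the aggregated "results" configuration at its most recent snapshot.
results = load_dataset(
    "open-llm-leaderboard/details_johnsnowlabs__BioLing-7B-Dare",
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated metrics for the latest run
```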
## Latest results
These are the [latest results from run 2024-04-09T23:35:31.101237](https://huggingface.co/datasets/open-llm-leaderboard/details_johnsnowlabs__BioLing-7B-Dare/blob/main/results_2024-04-09T23-35-31.101237.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6024796549026502,
"acc_stderr": 0.03310506649883824,
"acc_norm": 0.6093893210665232,
"acc_norm_stderr": 0.03379950815279291,
"mc1": 0.37576499388004897,
"mc1_stderr": 0.016954584060214297,
"mc2": 0.5374700664619642,
"mc2_stderr": 0.015268834464528118
},
"harness|arc:challenge|25": {
"acc": 0.5725255972696246,
"acc_stderr": 0.014456862944650649,
"acc_norm": 0.6177474402730375,
"acc_norm_stderr": 0.014200454049979272
},
"harness|hellaswag|10": {
"acc": 0.6357299342760406,
"acc_stderr": 0.004802413919932668,
"acc_norm": 0.8258315076677952,
"acc_norm_stderr": 0.0037847921724660665
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.043163785995113245,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.043163785995113245
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.027834912527544064,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.027834912527544064
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7083333333333334,
"acc_stderr": 0.03800968060554858,
"acc_norm": 0.7083333333333334,
"acc_norm_stderr": 0.03800968060554858
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6069364161849711,
"acc_stderr": 0.037242495958177295,
"acc_norm": 0.6069364161849711,
"acc_norm_stderr": 0.037242495958177295
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.04755129616062946,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.04755129616062946
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5063829787234042,
"acc_stderr": 0.032683358999363366,
"acc_norm": 0.5063829787234042,
"acc_norm_stderr": 0.032683358999363366
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.046151869625837026,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.046151869625837026
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3862433862433862,
"acc_stderr": 0.02507598176760168,
"acc_norm": 0.3862433862433862,
"acc_norm_stderr": 0.02507598176760168
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.04360314860077459,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.04360314860077459
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7290322580645161,
"acc_stderr": 0.025284416114900156,
"acc_norm": 0.7290322580645161,
"acc_norm_stderr": 0.025284416114900156
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.46798029556650245,
"acc_stderr": 0.03510766597959215,
"acc_norm": 0.46798029556650245,
"acc_norm_stderr": 0.03510766597959215
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7212121212121212,
"acc_stderr": 0.03501438706296781,
"acc_norm": 0.7212121212121212,
"acc_norm_stderr": 0.03501438706296781
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.02962022787479047,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.02962022787479047
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6051282051282051,
"acc_stderr": 0.0247843169421564,
"acc_norm": 0.6051282051282051,
"acc_norm_stderr": 0.0247843169421564
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.02857834836547308,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.02857834836547308
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135363,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135363
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8091743119266055,
"acc_stderr": 0.01684767640009109,
"acc_norm": 0.8091743119266055,
"acc_norm_stderr": 0.01684767640009109
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.033953227263757976,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.033953227263757976
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7696078431372549,
"acc_stderr": 0.029554292605695066,
"acc_norm": 0.7696078431372549,
"acc_norm_stderr": 0.029554292605695066
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7552742616033755,
"acc_stderr": 0.02798569938703643,
"acc_norm": 0.7552742616033755,
"acc_norm_stderr": 0.02798569938703643
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6636771300448431,
"acc_stderr": 0.031708824268455,
"acc_norm": 0.6636771300448431,
"acc_norm_stderr": 0.031708824268455
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.03984979653302871,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.03984979653302871
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7055214723926381,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.7055214723926381,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8290598290598291,
"acc_stderr": 0.0246624968452098,
"acc_norm": 0.8290598290598291,
"acc_norm_stderr": 0.0246624968452098
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7752234993614304,
"acc_stderr": 0.014927447101937155,
"acc_norm": 0.7752234993614304,
"acc_norm_stderr": 0.014927447101937155
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.661849710982659,
"acc_stderr": 0.02546977014940017,
"acc_norm": 0.661849710982659,
"acc_norm_stderr": 0.02546977014940017
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3005586592178771,
"acc_stderr": 0.015334566806251164,
"acc_norm": 0.3005586592178771,
"acc_norm_stderr": 0.015334566806251164
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6699346405228758,
"acc_stderr": 0.0269256546536157,
"acc_norm": 0.6699346405228758,
"acc_norm_stderr": 0.0269256546536157
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6463022508038585,
"acc_stderr": 0.02715520810320086,
"acc_norm": 0.6463022508038585,
"acc_norm_stderr": 0.02715520810320086
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6481481481481481,
"acc_stderr": 0.026571483480719964,
"acc_norm": 0.6481481481481481,
"acc_norm_stderr": 0.026571483480719964
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.425531914893617,
"acc_stderr": 0.029494827600144373,
"acc_norm": 0.425531914893617,
"acc_norm_stderr": 0.029494827600144373
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.41264667535853977,
"acc_stderr": 0.012573836633799013,
"acc_norm": 0.41264667535853977,
"acc_norm_stderr": 0.012573836633799013
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6397058823529411,
"acc_stderr": 0.02916312857067073,
"acc_norm": 0.6397058823529411,
"acc_norm_stderr": 0.02916312857067073
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6029411764705882,
"acc_stderr": 0.019794488900024113,
"acc_norm": 0.6029411764705882,
"acc_norm_stderr": 0.019794488900024113
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.028795185574291296,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.028795185574291296
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7711442786069652,
"acc_stderr": 0.029705284056772432,
"acc_norm": 0.7711442786069652,
"acc_norm_stderr": 0.029705284056772432
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.783625730994152,
"acc_stderr": 0.03158149539338734,
"acc_norm": 0.783625730994152,
"acc_norm_stderr": 0.03158149539338734
},
"harness|truthfulqa:mc|0": {
"mc1": 0.37576499388004897,
"mc1_stderr": 0.016954584060214297,
"mc2": 0.5374700664619642,
"mc2_stderr": 0.015268834464528118
},
"harness|winogrande|5": {
"acc": 0.7695343330702447,
"acc_stderr": 0.011835872164836676
},
"harness|gsm8k|5": {
"acc": 0.2630780894617134,
"acc_stderr": 0.01212817260737593
}
}
```
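The same results file can also be fetched directly with `huggingface_hub`; a minimal sketch, assuming the JSON sits at the repository root as the link above suggests:
```python
import json

from huggingface_hub import hf_hub_download

# Fetch the raw results file linked above from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_johnsnowlabs__BioLing-7B-Dare",
    repo_type="dataset",
    filename="results_2024-04-09T23-35-31.101237.json",
)

# Inspect the top-level structure before drilling into individual metrics.
with open(path) as f:
    results = json.load(f)
print(list(results.keys()))
```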
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
nixjoe/mylover1 | ---
license: unknown
---
|
CyberHarem/kasumi_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kasumi/カスミ (Pokémon)
This is the dataset of kasumi/カスミ (Pokémon), containing 500 images and their tags.
The core tags of this character are `orange_hair, short_hair, side_ponytail, breasts, bangs, green_eyes, blue_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 534.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 320.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1223 | 682.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 479.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1223 | 932.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
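If you only need one of the packaged archives listed above rather than the raw data, it can be downloaded and unpacked directly; a minimal sketch with `huggingface_hub`, using the `dataset-800.zip` filename from the table's download link:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Download the 800px IMG+TXT package listed in the table above.
zip_file = hf_hub_download(
    repo_id='CyberHarem/kasumi_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# Extract the image/tag pairs into a local directory.
extract_dir = 'dataset_800'
os.makedirs(extract_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(extract_dir)
```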
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kasumi_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | 1girl, crop_top, midriff, navel, solo, poke_ball_(basic), suspenders, holding_poke_ball, short_shorts, denim_shorts, sleeveless, looking_at_viewer, white_background, simple_background, open_mouth, blush, tank_top, hand_on_hip, yellow_shirt, :d, blue_shorts, cowboy_shot |
| 1 | 6 |  |  |  |  |  | 1girl, crop_top, looking_at_viewer, midriff, navel, sleeveless_shirt, smile, suspenders, yellow_shirt, blush, closed_mouth, arm_behind_head, arm_up, armpits, collarbone, cowboy_shot, eyelashes, green_shorts, pokemon_(creature), solo, tank_top |
| 2 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, nipples, solo, blush, collarbone, completely_nude, day, navel, outdoors, pussy, smile, medium_breasts, closed_mouth, grass, tree, uncensored |
| 3 | 7 |  |  |  |  |  | 1girl, nipples, blush, completely_nude, eyelashes, hair_tie, navel, open_mouth, outdoors, shiny_skin, pussy, spread_legs, tongue, collarbone, grass, mosaic_censoring, 1boy, anus, day, hetero, on_back, penis, raised_eyebrows, sex, smile, solo |
| 4 | 6 |  |  |  |  |  | 1boy, 1girl, hetero, nipples, solo_focus, blush, looking_at_viewer, navel, open_mouth, smile, collarbone, completely_nude, hair_between_eyes, water, heart-shaped_pupils, large_breasts, penis, pool, pov |
| 5 | 6 |  |  |  |  |  | looking_at_viewer, navel, nipples, 1girl, blush, collarbone, hair_tie, medium_breasts, pussy, completely_nude, eyelashes, shiny_skin, simple_background, white_background, closed_mouth, solo |
| 6 | 6 |  |  |  |  |  | 1girl, blush, christmas, looking_at_viewer, naked_ribbon, red_headwear, red_ribbon, santa_hat, solo, closed_mouth, collarbone, medium_breasts, navel, nipples, indoors, large_breasts, stomach |
| 7 | 12 |  |  |  |  |  | 1girl, pokemon_(creature), pokephilia, hetero, solo_focus, 1boy, furry_with_non-furry, furry_male, large_breasts, navel, bar_censor, penis, slingshot_swimsuit, black_one-piece_swimsuit, sweat, areola_slip, cleavage, hypnosis, looking_at_viewer, simple_background, coin, fingerless_gloves, hand_on_hip, heart, huge_breasts, shiny_skin, smile, standing, white_background |
| 8 | 5 |  |  |  |  |  | 1girl, competition_swimsuit, open_jacket, white_one-piece_swimsuit, covered_navel, looking_at_viewer, :d, blush, long_sleeves, open_mouth, solo, white_jacket, blue_background, closed_mouth, hair_between_eyes, hand_on_hip, medium_breasts, off_shoulder, small_breasts, standing, toes, white_background |
| 9 | 7 |  |  |  |  |  | 1girl, covered_nipples, impossible_clothes, skin_tight, smile, solo, white_gloves, elbow_gloves, hand_on_hip, looking_at_viewer, team_rocket_uniform, thighhighs, huge_breasts, large_breasts, standing, alternate_breast_size, black_bodysuit, covered_navel, grey_background, shiny_clothes, thigh_boots, thighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | crop_top | midriff | navel | solo | poke_ball_(basic) | suspenders | holding_poke_ball | short_shorts | denim_shorts | sleeveless | looking_at_viewer | white_background | simple_background | open_mouth | blush | tank_top | hand_on_hip | yellow_shirt | :d | blue_shorts | cowboy_shot | sleeveless_shirt | smile | closed_mouth | arm_behind_head | arm_up | armpits | collarbone | eyelashes | green_shorts | pokemon_(creature) | nipples | completely_nude | day | outdoors | pussy | medium_breasts | grass | tree | uncensored | hair_tie | shiny_skin | spread_legs | tongue | mosaic_censoring | 1boy | anus | hetero | on_back | penis | raised_eyebrows | sex | solo_focus | hair_between_eyes | water | heart-shaped_pupils | large_breasts | pool | pov | christmas | naked_ribbon | red_headwear | red_ribbon | santa_hat | indoors | stomach | pokephilia | furry_with_non-furry | furry_male | bar_censor | slingshot_swimsuit | black_one-piece_swimsuit | sweat | areola_slip | cleavage | hypnosis | coin | fingerless_gloves | heart | huge_breasts | standing | competition_swimsuit | open_jacket | white_one-piece_swimsuit | covered_navel | long_sleeves | white_jacket | blue_background | off_shoulder | small_breasts | toes | covered_nipples | impossible_clothes | skin_tight | white_gloves | elbow_gloves | team_rocket_uniform | thighhighs | alternate_breast_size | black_bodysuit | grey_background | shiny_clothes | thigh_boots | thighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:----------|:--------|:-------|:--------------------|:-------------|:--------------------|:---------------|:---------------|:-------------|:--------------------|:-------------------|:--------------------|:-------------|:--------|:-----------|:--------------|:---------------|:-----|:--------------|:--------------|:-------------------|:--------|:---------------|:------------------|:---------|:----------|:-------------|:------------|:---------------|:---------------------|:----------|:------------------|:------|:-----------|:--------|:-----------------|:--------|:-------|:-------------|:-----------|:-------------|:--------------|:---------|:-------------------|:-------|:-------|:---------|:----------|:--------|:------------------|:------|:-------------|:--------------------|:--------|:----------------------|:----------------|:-------|:------|:------------|:---------------|:---------------|:-------------|:------------|:----------|:----------|:-------------|:-----------------------|:-------------|:-------------|:---------------------|:---------------------------|:--------|:--------------|:-----------|:-----------|:-------|:--------------------|:--------|:---------------|:-----------|:-----------------------|:--------------|:---------------------------|:----------------|:---------------|:---------------|:------------------|:---------------|:----------------|:-------|:------------------|:---------------------|:-------------|:---------------|:---------------|:----------------------|:-------------|:------------------------|:-----------------|:------------------|:----------------|:--------------|:---------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | | X | | | | | X | | | | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | | X | X | | | | | | | X | | | | X | | | | | | | | X | X | | | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | | X | X | | | | | | | | | | X | X | | | | | | | | X | | | | | X | X | | | X | X | X | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | | X | | | | | | | | X | | | X | X | | | | | | | | X | | | | | X | | | | X | X | | | | | | | | | | | | | X | | X | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | X | X | | | | | | | X | X | X | | X | | | | | | | | | X | | | | X | X | | | X | X | | | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | X | X | | | | | | | X | | | | X | | | | | | | | | X | | | | X | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 12 |  |  |  |  |  | X | | | X | | | | | | | | X | X | X | | | | X | | | | | | X | | | | | | | | X | | | | | | | | | | | X | | | | X | | X | | X | | | X | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | | X | | | | | | | X | X | | X | X | | X | | X | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 9 | 7 |  |  |  |  |  | X | | | | X | | | | | | | X | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
LNTANOooo/open_hermes2.5_v3 | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: Econ_domain_expert
num_bytes: 2586192.0
num_examples: 658
- name: platypus
num_bytes: 24313722.202696867
num_examples: 20838
- name: EvolInstruct_70k
num_bytes: 88240968.07420495
num_examples: 44140
- name: LMSys_Chatbot_Arena
num_bytes: 3209728.696711328
num_examples: 2460
- name: lmsys1m
num_bytes: 1642856.6771819138
num_examples: 950
- name: CogStackMed
num_bytes: 4314956.0
num_examples: 4400
- name: UnnaturalInstructions
num_bytes: 3338715.840329602
num_examples: 6431
- name: GPT_4_Comparison_Data
num_bytes: 750546.0
num_examples: 1514
- name: None
num_bytes: 194715445.44100824
num_examples: 107494
- name: cot_alpaca_gpt4
num_bytes: 32743635.389400143
num_examples: 41507
- name: metamath
num_bytes: 21272191.0
num_examples: 27593
- name: CamelAI
num_bytes: 143091373.0
num_examples: 76053
- name: airoboros2.2
num_bytes: 43282002.08555703
num_examples: 28702
- name: glaive_code_assist
num_bytes: 311951781.1964973
num_examples: 179406
- name: caseus_custom
num_bytes: 1852195.0
num_examples: 2402
download_size: 416791659
dataset_size: 877306308.6035874
configs:
- config_name: default
data_files:
- split: Econ_domain_expert
path: data/Econ_domain_expert-*
- split: platypus
path: data/platypus-*
- split: EvolInstruct_70k
path: data/EvolInstruct_70k-*
- split: LMSys_Chatbot_Arena
path: data/LMSys_Chatbot_Arena-*
- split: lmsys1m
path: data/lmsys1m-*
- split: CogStackMed
path: data/CogStackMed-*
- split: UnnaturalInstructions
path: data/UnnaturalInstructions-*
- split: GPT_4_Comparison_Data
path: data/GPT_4_Comparison_Data-*
- split: None
path: data/None-*
- split: cot_alpaca_gpt4
path: data/cot_alpaca_gpt4-*
- split: metamath
path: data/metamath-*
- split: CamelAI
path: data/CamelAI-*
- split: airoboros2.2
path: data/airoboros2.2-*
- split: glaive_code_assist
path: data/glaive_code_assist-*
- split: caseus_custom
path: data/caseus_custom-*
---
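Each source collection in this card is exposed as its own named split rather than a single `train` split. A minimal sketch (assuming the `datasets` library is installed) of loading one split and reading its conversation turns:
```python
from datasets import load_dataset

# Load a single source collection; every split name listed in the card works the same way.
platypus = load_dataset("LNTANOooo/open_hermes2.5_v3", split="platypus")

# Each row carries a "conversation": a list of {"content", "role"} turns.
for turn in platypus[0]["conversation"]:
    print(f"{turn['role']}: {turn['content'][:80]}")
```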
|
Atipico1/mrqa_preprocessed | ---
dataset_info:
features:
- name: subset
dtype: string
- name: qid
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: masked_query
dtype: string
- name: context
dtype: string
- name: answer_sent
dtype: string
- name: answer_in_context
sequence: string
- name: query_embedding
sequence: float32
splits:
- name: train
num_bytes: 1316665714
num_examples: 323487
download_size: 1360868118
dataset_size: 1316665714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fiveflow/for_align | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 35614538
num_examples: 17281
- name: test
num_bytes: 3992474
num_examples: 1915
download_size: 22211168
dataset_size: 39607012
---
# Dataset Card for "for_align"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ujan/github_classification_no_empty_readme | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: names
dtype: string
- name: readmes
dtype: string
- name: topics
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 51299344.74701966
num_examples: 10334
- name: validation
num_bytes: 6413659.126490169
num_examples: 1292
- name: test
num_bytes: 6413659.126490169
num_examples: 1292
download_size: 29121376
dataset_size: 64126663.0
---
# Dataset Card for "github_classification_no_empty_readme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CatFather/CatCat | ---
license: apache-2.0
---
|
liuyanchen1015/MULTI_VALUE_mrpc_analytic_whose_relativizer | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 1824
num_examples: 6
- name: train
num_bytes: 5390
num_examples: 17
- name: validation
num_bytes: 339
num_examples: 1
download_size: 16449
dataset_size: 7553
---
# Dataset Card for "MULTI_VALUE_mrpc_analytic_whose_relativizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-a741994f-efcd-40c8-8652-be4f42ba26cd-31 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: ['matthews_correlation']
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
mask-distilled-one-sec-cv12/chunk_133 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1276788448
num_examples: 250744
download_size: 1302483404
dataset_size: 1276788448
---
# Dataset Card for "chunk_133"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distilled-from-one-sec-cv12/chunk_150 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 854000724
num_examples: 166407
download_size: 871542028
dataset_size: 854000724
---
# Dataset Card for "chunk_150"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Srinivas7/Data_bot | ---
license: other
---
|
qfrodicio/intention-prediction-es | ---
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: tokens
sequence: string
- name: intentions
sequence: string
splits:
- name: train
num_bytes: 525868
num_examples: 1622
- name: validation
num_bytes: 150615
num_examples: 440
- name: test
num_bytes: 179949
num_examples: 536
download_size: 217096
dataset_size: 856432
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
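The features pair a `tokens` sequence with a parallel `intentions` sequence, so each split is ready for token-classification pipelines. A minimal sketch (assuming the `datasets` library is installed) of inspecting the aligned sequences:
```python
from datasets import load_dataset

# train/validation/test splits are defined in the card above.
ds = load_dataset("qfrodicio/intention-prediction-es", split="train")

example = ds[0]
# "tokens" and "intentions" are aligned, equal-length lists.
for token, intention in zip(example["tokens"], example["intentions"]):
    print(f"{token}\t{intention}")
```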
|
rdiehlmartinez/pythia-training-metrics | ---
license: apache-2.0
---
|
srishtikumbhare/trial | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype: string
splits:
- name: train
num_bytes: 264370838.0
num_examples: 50
download_size: 263685192
dataset_size: 264370838.0
---
# Dataset Card for "trial"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Atipico1/nq-test | ---
dataset_info:
- config_name: adversary
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: gpt_answer_sentence
dtype: string
- name: gpt_adv_sentence
sequence: string
- name: is_valid_adv_sentence
dtype: bool
- name: gpt_adv_passage
sequence: string
- name: is_valid_adv_passage
dtype: bool
splits:
- name: train
num_bytes: 28520709
num_examples: 3610
download_size: 16013125
dataset_size: 28520709
- config_name: adversary-sent
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: gpt_answer_sentence
dtype: string
- name: gpt_adv_sentence
sequence: string
- name: is_valid_adv_sentence
dtype: bool
- name: gpt_adv_passage
sequence: string
- name: is_valid_adv_passage
dtype: bool
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float32
- name: text
dtype: string
splits:
- name: train
num_bytes: 7841766
num_examples: 3610
download_size: 4333156
dataset_size: 7841766
- config_name: adversary_v2
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: gpt_answer_sentence
dtype: string
- name: gpt_adv_sentence
sequence: string
- name: is_valid_adv_sentence
dtype: bool
- name: gpt_adv_passage
sequence: string
- name: is_valid_adv_passage
dtype: bool
splits:
- name: train
num_bytes: 28520588
num_examples: 3610
download_size: 16014456
dataset_size: 28520588
- config_name: adversary_v2-sent
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: gpt_answer_sentence
dtype: string
- name: gpt_adv_sentence
sequence: string
- name: is_valid_adv_sentence
dtype: bool
- name: gpt_adv_passage
sequence: string
- name: is_valid_adv_passage
dtype: bool
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float32
- name: text
dtype: string
splits:
- name: train
num_bytes: 10475330
num_examples: 3610
download_size: 5805847
dataset_size: 10475330
- config_name: adversary_v3
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: gpt_answer_sentence
dtype: string
- name: gpt_adv_sentence
sequence: string
- name: is_valid_adv_sentence
dtype: bool
- name: gpt_adv_passage
sequence: string
- name: is_valid_adv_passage
dtype: bool
splits:
- name: train
num_bytes: 28520588
num_examples: 3610
download_size: 16014456
dataset_size: 28520588
- config_name: conflict
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: gpt_answer_sentence
dtype: string
- name: entity_type
dtype: string
- name: similar_entity
dtype: string
- name: similar_entity_score
dtype: float32
- name: random_entity
dtype: string
- name: random_entity_score
dtype: float64
splits:
- name: train
num_bytes: 24192330
num_examples: 3610
download_size: 13890009
dataset_size: 24192330
- config_name: conflict_v1
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: gpt_answer_sentence
dtype: string
- name: entity_type
dtype: string
- name: similar_entity
dtype: string
- name: similar_entity_score
dtype: float32
- name: random_entity
dtype: string
- name: random_entity_score
dtype: float64
- name: gpt_conflict_sentence
sequence: string
- name: is_valid_conflict_sentence
dtype: bool
- name: gpt_conflict_passage
sequence: string
- name: is_valid_conflict_passage
dtype: bool
splits:
- name: train
num_bytes: 25835423
num_examples: 3610
download_size: 14872958
dataset_size: 25835423
- config_name: conflict_v1-sent
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: gpt_answer_sentence
dtype: string
- name: entity_type
dtype: string
- name: similar_entity
dtype: string
- name: similar_entity_score
dtype: float32
- name: random_entity
dtype: string
- name: random_entity_score
dtype: float64
- name: gpt_conflict_sentence
sequence: string
- name: is_valid_conflict_sentence
dtype: bool
- name: gpt_conflict_passage
sequence: string
- name: is_valid_conflict_passage
dtype: bool
- name: hasanswer
dtype: bool
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float32
- name: text
dtype: string
splits:
- name: train
num_bytes: 5157182
num_examples: 3610
download_size: 3192949
dataset_size: 5157182
- config_name: default
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 23673906
num_examples: 3610
download_size: 13529716
dataset_size: 23673906
configs:
- config_name: adversary
data_files:
- split: train
path: adversary/train-*
- config_name: adversary-sent
data_files:
- split: train
path: adversary-sent/train-*
- config_name: adversary_v2
data_files:
- split: train
path: adversary_v2/train-*
- config_name: adversary_v2-sent
data_files:
- split: train
path: adversary_v2-sent/train-*
- config_name: adversary_v3
data_files:
- split: train
path: adversary_v3/train-*
- config_name: conflict
data_files:
- split: train
path: conflict/train-*
- config_name: conflict_v1
data_files:
- split: train
path: conflict_v1/train-*
- config_name: conflict_v1-sent
data_files:
- split: train
path: conflict_v1-sent/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
---
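The card defines several configurations (`default`, `adversary`, `conflict_v1-sent`, ...), each with its own feature schema. A minimal sketch (assuming the `datasets` library is installed) of selecting a configuration by name:
```python
from datasets import load_dataset

# The second argument picks the configuration; "default" holds the plain retrieval contexts.
default = load_dataset("Atipico1/nq-test", "default", split="train")
adversary = load_dataset("Atipico1/nq-test", "adversary", split="train")

row = adversary[0]
print(row["question"], row["answers"])
# "ctxs" is a list of retrieved passages with "hasanswer" flags and retrieval scores.
print(len(row["ctxs"]), row["ctxs"][0]["hasanswer"])
```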
|
ghsxntos/Veigh | ---
license: openrail
---
|
CyberHarem/nailah_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nailah (Fire Emblem)
This is the dataset of nailah (Fire Emblem), containing 60 images and their tags.
The core tags of this character are `animal_ears, breasts, long_hair, wolf_ears, tail, eyepatch, green_eyes, wolf_tail, purple_hair, dark_skin, dark-skinned_female, large_breasts, wolf_girl`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 60 | 79.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nailah_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 60 | 46.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nailah_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 136 | 90.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nailah_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 60 | 71.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nailah_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 136 | 121.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nailah_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nailah_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 22 |  |  |  |  |  | 1girl, solo, tattoo, medium_breasts, simple_background, sitting, looking_at_viewer |
| 1 | 5 |  |  |  |  |  | 1girl, blush, nipples, nude, solo, spread_legs, navel, spread_pussy, sweat, tattoo, clitoris, female_pubic_hair, huge_breasts, looking_at_viewer, on_back, open_mouth, pussy_juice, smile, tongue_out, uncensored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | tattoo | medium_breasts | simple_background | sitting | looking_at_viewer | blush | nipples | nude | spread_legs | navel | spread_pussy | sweat | clitoris | female_pubic_hair | huge_breasts | on_back | open_mouth | pussy_juice | smile | tongue_out | uncensored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------|:-----------------|:--------------------|:----------|:--------------------|:--------|:----------|:-------|:--------------|:--------|:---------------|:--------|:-----------|:--------------------|:---------------|:----------|:-------------|:--------------|:--------|:-------------|:-------------|
| 0 | 22 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
GreeneryScenery/SheepsDiffusionNet | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: image
dtype: image
- name: conditioning_image
dtype: image
splits:
- name: train
num_bytes: 8712517143.812
num_examples: 32719
download_size: 8690399921
dataset_size: 8712517143.812
---
# Dataset Card for "SheepsDiffusionNet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
emozilla/pg19-test | ---
dataset_info:
features:
- name: short_book_title
dtype: string
- name: publication_date
dtype: int32
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 40482852
num_examples: 100
download_size: 24874679
dataset_size: 40482852
---
# Dataset Card for "pg19-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ricahrd/polex | ---
license: openrail
---
|
open-llm-leaderboard/details_AA051615__A0204 | ---
pretty_name: Evaluation run of AA051615/A0204
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [AA051615/A0204](https://huggingface.co/AA051615/A0204) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051615__A0204\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-04T22:24:08.088490](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051615__A0204/blob/main/results_2024-02-04T22-24-08.088490.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.8557918259619507,\n\
\ \"acc_stderr\": 0.022903287117849617,\n \"acc_norm\": 0.8653459094330703,\n\
\ \"acc_norm_stderr\": 0.02323340239827946,\n \"mc1\": 0.4039167686658507,\n\
\ \"mc1_stderr\": 0.01717727682258428,\n \"mc2\": 0.57938549088033,\n\
\ \"mc2_stderr\": 0.0155079218690995\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6544368600682594,\n \"acc_stderr\": 0.01389693846114568,\n\
\ \"acc_norm\": 0.7030716723549488,\n \"acc_norm_stderr\": 0.01335202597672522\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6471818362875921,\n\
\ \"acc_stderr\": 0.004768701562988872,\n \"acc_norm\": 0.8441545508862777,\n\
\ \"acc_norm_stderr\": 0.0036196748640350157\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.8592592592592593,\n\
\ \"acc_stderr\": 0.030041362609516897,\n \"acc_norm\": 0.8592592592592593,\n\
\ \"acc_norm_stderr\": 0.030041362609516897\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.9144736842105263,\n \"acc_stderr\": 0.022758677130888604,\n\
\ \"acc_norm\": 0.9144736842105263,\n \"acc_norm_stderr\": 0.022758677130888604\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.86,\n\
\ \"acc_stderr\": 0.03487350880197768,\n \"acc_norm\": 0.86,\n \
\ \"acc_norm_stderr\": 0.03487350880197768\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8981132075471698,\n \"acc_stderr\": 0.01861754975827668,\n\
\ \"acc_norm\": 0.8981132075471698,\n \"acc_norm_stderr\": 0.01861754975827668\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9861111111111112,\n\
\ \"acc_stderr\": 0.00978652836519694,\n \"acc_norm\": 0.9861111111111112,\n\
\ \"acc_norm_stderr\": 0.00978652836519694\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.8,\n\
\ \"acc_stderr\": 0.04020151261036844,\n \"acc_norm\": 0.8,\n \
\ \"acc_norm_stderr\": 0.04020151261036844\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \
\ \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.861271676300578,\n\
\ \"acc_stderr\": 0.02635654191584046,\n \"acc_norm\": 0.861271676300578,\n\
\ \"acc_norm_stderr\": 0.02635654191584046\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.0465501041131961,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.0465501041131961\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n\
\ \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.8978723404255319,\n \"acc_stderr\": 0.019795708842206803,\n\
\ \"acc_norm\": 0.8978723404255319,\n \"acc_norm_stderr\": 0.019795708842206803\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.7719298245614035,\n\
\ \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.7719298245614035,\n\
\ \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.903448275862069,\n \"acc_stderr\": 0.024612198971682625,\n\
\ \"acc_norm\": 0.903448275862069,\n \"acc_norm_stderr\": 0.024612198971682625\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.8465608465608465,\n \"acc_stderr\": 0.018562074482688474,\n \"\
acc_norm\": 0.8465608465608465,\n \"acc_norm_stderr\": 0.018562074482688474\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.6507936507936508,\n\
\ \"acc_stderr\": 0.04263906892795131,\n \"acc_norm\": 0.6507936507936508,\n\
\ \"acc_norm_stderr\": 0.04263906892795131\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.044084400227680794\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.9451612903225807,\n \"acc_stderr\": 0.012951418509899199,\n \"\
acc_norm\": 0.9451612903225807,\n \"acc_norm_stderr\": 0.012951418509899199\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.8472906403940886,\n \"acc_stderr\": 0.025308904539380637,\n \"\
acc_norm\": 0.8472906403940886,\n \"acc_norm_stderr\": 0.025308904539380637\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\"\
: 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.9333333333333333,\n \"acc_stderr\": 0.019478290326359282,\n\
\ \"acc_norm\": 0.9333333333333333,\n \"acc_norm_stderr\": 0.019478290326359282\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9696969696969697,\n \"acc_stderr\": 0.012213156893572809,\n \"\
acc_norm\": 0.9696969696969697,\n \"acc_norm_stderr\": 0.012213156893572809\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9896373056994818,\n \"acc_stderr\": 0.007308424386792219,\n\
\ \"acc_norm\": 0.9896373056994818,\n \"acc_norm_stderr\": 0.007308424386792219\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8897435897435897,\n \"acc_stderr\": 0.01588033126105611,\n \
\ \"acc_norm\": 0.8897435897435897,\n \"acc_norm_stderr\": 0.01588033126105611\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.7333333333333333,\n \"acc_stderr\": 0.026962424325073838,\n \
\ \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.026962424325073838\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.9243697478991597,\n \"acc_stderr\": 0.017174988814938515,\n\
\ \"acc_norm\": 0.9243697478991597,\n \"acc_norm_stderr\": 0.017174988814938515\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.7086092715231788,\n \"acc_stderr\": 0.037101857261199966,\n \"\
acc_norm\": 0.7086092715231788,\n \"acc_norm_stderr\": 0.037101857261199966\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9596330275229358,\n \"acc_stderr\": 0.008438519002748255,\n \"\
acc_norm\": 0.9596330275229358,\n \"acc_norm_stderr\": 0.008438519002748255\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.7962962962962963,\n \"acc_stderr\": 0.027467401804057993,\n \"\
acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.027467401804057993\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9901960784313726,\n \"acc_stderr\": 0.006915323418523288,\n \"\
acc_norm\": 0.9901960784313726,\n \"acc_norm_stderr\": 0.006915323418523288\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9493670886075949,\n \"acc_stderr\": 0.014271760025370185,\n \
\ \"acc_norm\": 0.9493670886075949,\n \"acc_norm_stderr\": 0.014271760025370185\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.9013452914798207,\n\
\ \"acc_stderr\": 0.020013729184919227,\n \"acc_norm\": 0.9013452914798207,\n\
\ \"acc_norm_stderr\": 0.020013729184919227\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.9083969465648855,\n \"acc_stderr\": 0.025300035578642962,\n\
\ \"acc_norm\": 0.9083969465648855,\n \"acc_norm_stderr\": 0.025300035578642962\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.9669421487603306,\n \"acc_stderr\": 0.016321006329034302,\n \"\
acc_norm\": 0.9669421487603306,\n \"acc_norm_stderr\": 0.016321006329034302\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.9351851851851852,\n\
\ \"acc_stderr\": 0.023800937426629216,\n \"acc_norm\": 0.9351851851851852,\n\
\ \"acc_norm_stderr\": 0.023800937426629216\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.9631901840490797,\n \"acc_stderr\": 0.014793820323252032,\n\
\ \"acc_norm\": 0.9631901840490797,\n \"acc_norm_stderr\": 0.014793820323252032\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.7321428571428571,\n\
\ \"acc_stderr\": 0.04203277291467763,\n \"acc_norm\": 0.7321428571428571,\n\
\ \"acc_norm_stderr\": 0.04203277291467763\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.9029126213592233,\n \"acc_stderr\": 0.02931596291881348,\n\
\ \"acc_norm\": 0.9029126213592233,\n \"acc_norm_stderr\": 0.02931596291881348\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9743589743589743,\n\
\ \"acc_stderr\": 0.010354979197709014,\n \"acc_norm\": 0.9743589743589743,\n\
\ \"acc_norm_stderr\": 0.010354979197709014\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776348,\n \
\ \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776348\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9425287356321839,\n\
\ \"acc_stderr\": 0.008322796947412078,\n \"acc_norm\": 0.9425287356321839,\n\
\ \"acc_norm_stderr\": 0.008322796947412078\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8728323699421965,\n \"acc_stderr\": 0.017936766865149886,\n\
\ \"acc_norm\": 0.8728323699421965,\n \"acc_norm_stderr\": 0.017936766865149886\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.841340782122905,\n\
\ \"acc_stderr\": 0.01221939954934151,\n \"acc_norm\": 0.841340782122905,\n\
\ \"acc_norm_stderr\": 0.01221939954934151\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.9052287581699346,\n \"acc_stderr\": 0.016771331271836467,\n\
\ \"acc_norm\": 0.9052287581699346,\n \"acc_norm_stderr\": 0.016771331271836467\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.887459807073955,\n\
\ \"acc_stderr\": 0.017949292186800664,\n \"acc_norm\": 0.887459807073955,\n\
\ \"acc_norm_stderr\": 0.017949292186800664\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.904320987654321,\n \"acc_stderr\": 0.016366973744175263,\n\
\ \"acc_norm\": 0.904320987654321,\n \"acc_norm_stderr\": 0.016366973744175263\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.7553191489361702,\n \"acc_stderr\": 0.02564555362226673,\n \
\ \"acc_norm\": 0.7553191489361702,\n \"acc_norm_stderr\": 0.02564555362226673\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.8252933507170795,\n\
\ \"acc_stderr\": 0.009698125789145208,\n \"acc_norm\": 0.8252933507170795,\n\
\ \"acc_norm_stderr\": 0.009698125789145208\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.9301470588235294,\n \"acc_stderr\": 0.01548401244105634,\n\
\ \"acc_norm\": 0.9301470588235294,\n \"acc_norm_stderr\": 0.01548401244105634\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.9003267973856209,\n \"acc_stderr\": 0.012119053136608476,\n \
\ \"acc_norm\": 0.9003267973856209,\n \"acc_norm_stderr\": 0.012119053136608476\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.8181818181818182,\n\
\ \"acc_stderr\": 0.036942843353378024,\n \"acc_norm\": 0.8181818181818182,\n\
\ \"acc_norm_stderr\": 0.036942843353378024\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8979591836734694,\n \"acc_stderr\": 0.01937850847450596,\n\
\ \"acc_norm\": 0.8979591836734694,\n \"acc_norm_stderr\": 0.01937850847450596\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.945273631840796,\n\
\ \"acc_stderr\": 0.016082815796263243,\n \"acc_norm\": 0.945273631840796,\n\
\ \"acc_norm_stderr\": 0.016082815796263243\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.97,\n \"acc_stderr\": 0.01714466079977652,\n \
\ \"acc_norm\": 0.97,\n \"acc_norm_stderr\": 0.01714466079977652\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.6746987951807228,\n\
\ \"acc_stderr\": 0.03647168523683226,\n \"acc_norm\": 0.6746987951807228,\n\
\ \"acc_norm_stderr\": 0.03647168523683226\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.9298245614035088,\n \"acc_stderr\": 0.019591541754525123,\n\
\ \"acc_norm\": 0.9298245614035088,\n \"acc_norm_stderr\": 0.019591541754525123\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4039167686658507,\n\
\ \"mc1_stderr\": 0.01717727682258428,\n \"mc2\": 0.57938549088033,\n\
\ \"mc2_stderr\": 0.0155079218690995\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8082083662194159,\n \"acc_stderr\": 0.011065209664659527\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5852918877937832,\n \
\ \"acc_stderr\": 0.013570623842304511\n }\n}\n```"
repo_url: https://huggingface.co/AA051615/A0204
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|arc:challenge|25_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|gsm8k|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hellaswag|10_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T22-24-08.088490.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-04T22-24-08.088490.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- '**/details_harness|winogrande|5_2024-02-04T22-24-08.088490.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-04T22-24-08.088490.parquet'
- config_name: results
data_files:
- split: 2024_02_04T22_24_08.088490
path:
- results_2024-02-04T22-24-08.088490.parquet
- split: latest
path:
- results_2024-02-04T22-24-08.088490.parquet
---
# Dataset Card for Evaluation run of AA051615/A0204
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [AA051615/A0204](https://huggingface.co/AA051615/A0204) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_AA051615__A0204",
"harness_winogrande_5",
split="train")
```
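Similarly, the aggregated metrics mentioned above can be loaded from the `results` configuration (a minimal sketch; the exact columns depend on the harness version used for the run):
```python
from datasets import load_dataset

# Load the aggregated results of the latest run
results = load_dataset("open-llm-leaderboard/details_AA051615__A0204",
                       "results",
                       split="latest")
print(results[0])
```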
## Latest results
These are the [latest results from run 2024-02-04T22:24:08.088490](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051615__A0204/blob/main/results_2024-02-04T22-24-08.088490.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.8557918259619507,
"acc_stderr": 0.022903287117849617,
"acc_norm": 0.8653459094330703,
"acc_norm_stderr": 0.02323340239827946,
"mc1": 0.4039167686658507,
"mc1_stderr": 0.01717727682258428,
"mc2": 0.57938549088033,
"mc2_stderr": 0.0155079218690995
},
"harness|arc:challenge|25": {
"acc": 0.6544368600682594,
"acc_stderr": 0.01389693846114568,
"acc_norm": 0.7030716723549488,
"acc_norm_stderr": 0.01335202597672522
},
"harness|hellaswag|10": {
"acc": 0.6471818362875921,
"acc_stderr": 0.004768701562988872,
"acc_norm": 0.8441545508862777,
"acc_norm_stderr": 0.0036196748640350157
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.8592592592592593,
"acc_stderr": 0.030041362609516897,
"acc_norm": 0.8592592592592593,
"acc_norm_stderr": 0.030041362609516897
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.9144736842105263,
"acc_stderr": 0.022758677130888604,
"acc_norm": 0.9144736842105263,
"acc_norm_stderr": 0.022758677130888604
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197768,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197768
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8981132075471698,
"acc_stderr": 0.01861754975827668,
"acc_norm": 0.8981132075471698,
"acc_norm_stderr": 0.01861754975827668
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9861111111111112,
"acc_stderr": 0.00978652836519694,
"acc_norm": 0.9861111111111112,
"acc_norm_stderr": 0.00978652836519694
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036844,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036844
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932261,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932261
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.861271676300578,
"acc_stderr": 0.02635654191584046,
"acc_norm": 0.861271676300578,
"acc_norm_stderr": 0.02635654191584046
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.0465501041131961,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.0465501041131961
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.8978723404255319,
"acc_stderr": 0.019795708842206803,
"acc_norm": 0.8978723404255319,
"acc_norm_stderr": 0.019795708842206803
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.03947152782669415,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.03947152782669415
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.903448275862069,
"acc_stderr": 0.024612198971682625,
"acc_norm": 0.903448275862069,
"acc_norm_stderr": 0.024612198971682625
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.8465608465608465,
"acc_stderr": 0.018562074482688474,
"acc_norm": 0.8465608465608465,
"acc_norm_stderr": 0.018562074482688474
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.6507936507936508,
"acc_stderr": 0.04263906892795131,
"acc_norm": 0.6507936507936508,
"acc_norm_stderr": 0.04263906892795131
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.74,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.74,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9451612903225807,
"acc_stderr": 0.012951418509899199,
"acc_norm": 0.9451612903225807,
"acc_norm_stderr": 0.012951418509899199
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.8472906403940886,
"acc_stderr": 0.025308904539380637,
"acc_norm": 0.8472906403940886,
"acc_norm_stderr": 0.025308904539380637
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.9333333333333333,
"acc_stderr": 0.019478290326359282,
"acc_norm": 0.9333333333333333,
"acc_norm_stderr": 0.019478290326359282
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9696969696969697,
"acc_stderr": 0.012213156893572809,
"acc_norm": 0.9696969696969697,
"acc_norm_stderr": 0.012213156893572809
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9896373056994818,
"acc_stderr": 0.007308424386792219,
"acc_norm": 0.9896373056994818,
"acc_norm_stderr": 0.007308424386792219
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8897435897435897,
"acc_stderr": 0.01588033126105611,
"acc_norm": 0.8897435897435897,
"acc_norm_stderr": 0.01588033126105611
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.026962424325073838,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.026962424325073838
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.9243697478991597,
"acc_stderr": 0.017174988814938515,
"acc_norm": 0.9243697478991597,
"acc_norm_stderr": 0.017174988814938515
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.7086092715231788,
"acc_stderr": 0.037101857261199966,
"acc_norm": 0.7086092715231788,
"acc_norm_stderr": 0.037101857261199966
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9596330275229358,
"acc_stderr": 0.008438519002748255,
"acc_norm": 0.9596330275229358,
"acc_norm_stderr": 0.008438519002748255
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.027467401804057993,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.027467401804057993
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9901960784313726,
"acc_stderr": 0.006915323418523288,
"acc_norm": 0.9901960784313726,
"acc_norm_stderr": 0.006915323418523288
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9493670886075949,
"acc_stderr": 0.014271760025370185,
"acc_norm": 0.9493670886075949,
"acc_norm_stderr": 0.014271760025370185
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.9013452914798207,
"acc_stderr": 0.020013729184919227,
"acc_norm": 0.9013452914798207,
"acc_norm_stderr": 0.020013729184919227
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.9083969465648855,
"acc_stderr": 0.025300035578642962,
"acc_norm": 0.9083969465648855,
"acc_norm_stderr": 0.025300035578642962
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.9669421487603306,
"acc_stderr": 0.016321006329034302,
"acc_norm": 0.9669421487603306,
"acc_norm_stderr": 0.016321006329034302
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.9351851851851852,
"acc_stderr": 0.023800937426629216,
"acc_norm": 0.9351851851851852,
"acc_norm_stderr": 0.023800937426629216
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.9631901840490797,
"acc_stderr": 0.014793820323252032,
"acc_norm": 0.9631901840490797,
"acc_norm_stderr": 0.014793820323252032
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.7321428571428571,
"acc_stderr": 0.04203277291467763,
"acc_norm": 0.7321428571428571,
"acc_norm_stderr": 0.04203277291467763
},
"harness|hendrycksTest-management|5": {
"acc": 0.9029126213592233,
"acc_stderr": 0.02931596291881348,
"acc_norm": 0.9029126213592233,
"acc_norm_stderr": 0.02931596291881348
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9743589743589743,
"acc_stderr": 0.010354979197709014,
"acc_norm": 0.9743589743589743,
"acc_norm_stderr": 0.010354979197709014
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776348,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776348
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9425287356321839,
"acc_stderr": 0.008322796947412078,
"acc_norm": 0.9425287356321839,
"acc_norm_stderr": 0.008322796947412078
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8728323699421965,
"acc_stderr": 0.017936766865149886,
"acc_norm": 0.8728323699421965,
"acc_norm_stderr": 0.017936766865149886
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.841340782122905,
"acc_stderr": 0.01221939954934151,
"acc_norm": 0.841340782122905,
"acc_norm_stderr": 0.01221939954934151
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.9052287581699346,
"acc_stderr": 0.016771331271836467,
"acc_norm": 0.9052287581699346,
"acc_norm_stderr": 0.016771331271836467
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.887459807073955,
"acc_stderr": 0.017949292186800664,
"acc_norm": 0.887459807073955,
"acc_norm_stderr": 0.017949292186800664
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.904320987654321,
"acc_stderr": 0.016366973744175263,
"acc_norm": 0.904320987654321,
"acc_norm_stderr": 0.016366973744175263
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.7553191489361702,
"acc_stderr": 0.02564555362226673,
"acc_norm": 0.7553191489361702,
"acc_norm_stderr": 0.02564555362226673
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.8252933507170795,
"acc_stderr": 0.009698125789145208,
"acc_norm": 0.8252933507170795,
"acc_norm_stderr": 0.009698125789145208
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.9301470588235294,
"acc_stderr": 0.01548401244105634,
"acc_norm": 0.9301470588235294,
"acc_norm_stderr": 0.01548401244105634
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.9003267973856209,
"acc_stderr": 0.012119053136608476,
"acc_norm": 0.9003267973856209,
"acc_norm_stderr": 0.012119053136608476
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.036942843353378024,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.036942843353378024
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8979591836734694,
"acc_stderr": 0.01937850847450596,
"acc_norm": 0.8979591836734694,
"acc_norm_stderr": 0.01937850847450596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.945273631840796,
"acc_stderr": 0.016082815796263243,
"acc_norm": 0.945273631840796,
"acc_norm_stderr": 0.016082815796263243
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.97,
"acc_stderr": 0.01714466079977652,
"acc_norm": 0.97,
"acc_norm_stderr": 0.01714466079977652
},
"harness|hendrycksTest-virology|5": {
"acc": 0.6746987951807228,
"acc_stderr": 0.03647168523683226,
"acc_norm": 0.6746987951807228,
"acc_norm_stderr": 0.03647168523683226
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.9298245614035088,
"acc_stderr": 0.019591541754525123,
"acc_norm": 0.9298245614035088,
"acc_norm_stderr": 0.019591541754525123
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4039167686658507,
"mc1_stderr": 0.01717727682258428,
"mc2": 0.57938549088033,
"mc2_stderr": 0.0155079218690995
},
"harness|winogrande|5": {
"acc": 0.8082083662194159,
"acc_stderr": 0.011065209664659527
},
"harness|gsm8k|5": {
"acc": 0.5852918877937832,
"acc_stderr": 0.013570623842304511
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
hk-kaden-kim/pix2struct-chartcaptioning-v0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: chartType
dtype: string
- name: chartElement
dtype: string
- name: dataType
dtype: string
splits:
- name: train
num_bytes: 72690622.389
num_examples: 1373
- name: validation
num_bytes: 10174935.0
num_examples: 199
- name: test
num_bytes: 20658483.0
num_examples: 399
download_size: 102432750
dataset_size: 103524040.389
---
# Dataset Card for "pix2struct-chartcaptioning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Flyfer/CBDataSetBeta1 | ---
license: apache-2.0
---
|
HUBioDataLab/SELFormer-selfies | ---
license: gpl
---
|
FINNUMBER/FINCH_TRAIN_MCQA | ---
dataset_info:
features:
- name: task
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 19919781
num_examples: 4710
download_size: 10307988
dataset_size: 19919781
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shi3z/alpaca_cleaned_ja_json | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- ja
configs:
- config_name: default
data_files:
- split: train
path: "alpaca_cleaned_ja.json"
- split: test
path: "alpaca_cleaned_ja.json"
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
heliosprime/twitter_dataset_1713002042 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 7325
num_examples: 16
download_size: 8640
dataset_size: 7325
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713002042"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mllab/alfa_bki | ---
license: unknown
language:
- ru
tags:
- bank
- loan
- time-series
size_categories:
- 1M<n<10M
pretty_name: Alfa BKI
---
### Dataset Summary
Alfa BKI is a unique high-quality dataset collected from a real data source: credit history bureaus (in Russian, "бюро кредитных историй"/БКИ). It contains the history of each applicant's credit products together with a flag indicating whether the applicant defaulted on the loan.
### Supported Tasks and Leaderboards
The dataset is intended for training models on the classical banking task of predicting whether an applicant will default.
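A minimal loading sketch (an illustration only; it assumes the repository loads with the standard `datasets` API and uses the field names of the sample shown below):
```python
from datasets import load_dataset

# Assumed split name "train"; adjust if the repository exposes different splits
ds = load_dataset("mllab/alfa_bki", split="train")
sample = ds[0]

print(sample["app_id"], sample["flag"])   # application id and the default target
print(len(sample["history"]))             # number of credit products in this applicant's history
```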
## Dataset Structure
### Data Instances
An example of a single sample is provided below:
```
{
'app_id': 0,
'history':
[
[ 0, 1, 18, 9, 2, 3, 16, 10, 11, 3, 3, 0, 2, 11, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 16, 2, 17, 1, 1, 1, 0, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 4, 1, 3, 4, 1, 0, 0 ],
[ 0, 2, 18, 9, 14, 14, 12, 12, 0, 3, 3, 0, 2, 11, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 16, 2, 17, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 4, 1, 3, 4, 1, 0, 0 ],
[ 0, 3, 18, 9, 4, 8, 1, 11, 11, 0, 5, 0, 2, 8, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 15, 2, 17, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 4, 1, 2, 3, 1, 1, 1 ],
[ 0, 4, 4, 1, 9, 12, 16, 7, 12, 2, 3, 0, 2, 4, 6, 16, 5, 4, 8, 0, 1, 1, 1, 1, 16, 2, 17, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 4, 1, 3, 1, 1, 0, 0 ],
[ 0, 5, 5, 12, 15, 2, 11, 12, 10, 2, 3, 0, 2, 4, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 16, 2, 17, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 4, 1, 3, 4, 1, 0, 0 ],
[ 0, 6, 5, 0, 11, 8, 12, 11, 4, 2, 3, 0, 2, 4, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 9, 5, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 3, 4, 3, 3, 3, 4, 1, 2, 3, 1, 0, 1 ],
[ 0, 7, 3, 9, 1, 2, 12, 14, 15, 5, 3, 0, 2, 3, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 16, 2, 17, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 4, 1, 3, 4, 1, 0, 0 ],
[ 0, 8, 2, 9, 2, 3, 12, 14, 15, 5, 3, 0, 2, 13, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 16, 2, 17, 1, 1, 1, 0, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 4, 1, 3, 4, 1, 0, 0 ],
[ 0, 9, 1, 9, 11, 13, 14, 8, 2, 5, 1, 0, 2, 11, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 1, 2, 17, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 4, 1, 2, 4, 1, 0, 0 ],
[ 0, 10, 7, 9, 2, 10, 8, 8, 16, 4, 2, 0, 2, 11, 6, 16, 5, 4, 8, 1, 1, 1, 1, 1, 15, 2, 17, 0, 1, 1, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 4, 1, 2, 4, 1, 0, 0 ]
],
'flag': 0
}
```
### Data Fields
- `id`: application ID.
- `history`: an array of transactions where each credit product is represented as a 37-dimensional array, each element of the array represents a corresponding feature from the following list.
- `id`: application ID.
- `rn`: serial number of the credit product in the credit history.
- `pre_since_opened`: days from the date of opening the loan to the date of data collection.
- `pre_since_confirmed`: days from the date of confirmation of the loan information to the date of data collection.
- `pre_pterm`: planned number of days from the opening date of the loan to the closing date.
- `pre_fterm`: actual number of days from the opening date of the loan to the closing date.
- `pre_till_pclose`: planned number of days from the date of data collection to the closing date of the loan.
- `pre_till_fclose`: actual number of days from the date of data collection to the closing date of the loan.
- `pre_loans_credit_limit`: credit limit.
- `pre_loans_next_pay_summ`: amount of the next loan payment.
- `pre_loans_outstanding`: remaining unpaid loan amount.
- `pre_loans_total_overdue`: current overdue debt.
- `pre_loans_max_overdue_sum`: maximum overdue debt.
- `pre_loans_credit_cost_rate`: full cost of the loan.
- `pre_loans5`: number of delays up to 5 days.
- `pre_loans530`: number of delays from 5 to 30 days.
- `pre_loans3060`: number of delays from 30 to 60 days.
- `pre_loans6090`: number of delays from 60 to 90 days.
- `pre_loans90`: the number of delays of more than 90 days.
- `is_zero_loans_5`: flag: no delays up to 5 days.
- `is_zero_loans_530`: flag: no delays from 5 to 30 days.
- `is_zero_loans_3060`: flag: no delays from 30 to 60 days.
- `is_zero_loans_6090`: flag: no delays from 60 to 90 days.
- `is_zero_loans90`: flag: no delays for more than 90 days.
- `pre_util`: ratio of the remaining unpaid loan amount to the credit limit.
- `pre_over2limit`: ratio of current overdue debt to the credit limit.
- `pre_maxover2limit`: ratio of the maximum overdue debt to the credit limit.
- `is_zero_util`: flag: the ratio of the remaining unpaid loan amount to the credit limit is 0.
- `is_zero_over2limit`: flag: the ratio of the current overdue debt to the credit limit is 0.
- `is_zero_maxover2limit`: flag: the ratio of the maximum overdue debt to the credit limit is 0.
- `enc_paym_{0..n}`: monthly payment statuses for the last n months.
- `enc_loans_account_holder_type`: type of attitude to credit.
- `enc_loans_credit_status`: loan status.
- `enc_loans_account_cur`: loan currency.
- `enc_loans_credit_type`: type of loan.
- `pclose_flag`: flag: the planned number of days from the opening date of the loan to the closing date is not defined.
- `fclose_flag`: flag: the actual number of days from the opening date of the loan to the closing date is not determined.
- `flag`: target; 1 indicates that the client has defaulted on the loan. |
v2ray/Tony-Chase-Transcripts | ---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- not-for-all-audiences
size_categories:
- 1K<n<10K
---
# Tony Chase Transcripts
Around 3500 transcripts of videos from [Tony Chase](https://www.youtube.com/@tonychaseiamnotgaynotnever2043) captioned with GPT-3.5-Turbo. |
honggen/shp_all_BERT | ---
license: apache-2.0
task_categories:
- text2text-generation
language:
- en
---
The data originally comes from the Stanford SHP dataset. I built two datasets, hard and soft, based on the BERT-Large [CLS] embedding.
Hard contains the most similar pairs and soft the most dissimilar pairs. To keep the two datasets distinct, I also filtered out the overlapping pairs (since some prompts only have one pair).
The prompt distribution is not uniform, so I selected only one pair per prompt for both the hard and soft datasets.
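A minimal sketch of how such a similarity score could be computed (an illustration only, not the original preprocessing code; the exact BERT-Large checkpoint and the choice to compare the two responses of each pair are assumptions):
```python
# Illustrative sketch: score a response pair by the cosine similarity of
# their BERT-Large [CLS] embeddings (checkpoint name is an assumption).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased").eval()

def cls_embedding(text: str) -> torch.Tensor:
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0]  # embedding of the [CLS] token

def pair_similarity(response_a: str, response_b: str) -> float:
    a, b = cls_embedding(response_a), cls_embedding(response_b)
    return torch.nn.functional.cosine_similarity(a, b).item()
```
Under this reading, pairs with the highest similarity scores would populate the hard split and those with the lowest the soft split. |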
cambridgeltl/posqa | ---
license: apache-2.0
task_categories:
- text-classification
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---
This dataset is based on our publication *POSQA: Probe the World Models of LLMs with Size Comparisons* ([PDF](https://arxiv.org/abs/2310.13394)).
More details: [GitHub Repo](https://github.com/cambridgeltl/POSQA)
|
bdsaglam/musique-jerx-rltf-openai | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: chat
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 19894680
num_examples: 14376
- name: validation
num_bytes: 1763088
num_examples: 1252
download_size: 10934779
dataset_size: 21657768
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
datasets-examples/doc-image-1 | ---
size_categories:
- n<1K
---
# [doc] image dataset 1
This dataset contains 4 jpeg files at the root. |
katarinayuan/ProtDescribe | ---
configs:
- config_name: ProtDescribe
data_files: "uniprot_sprot_filtered.tsv"
#data_url: https://miladeepgraphlearningproteindata.s3.us-east-2.amazonaws.com/uniprotdata/uniprot_sprot_filtered.tsv
sep: "\t"
default: true
license: apache-2.0
--- |
ShoukanLabs/OpenNiji-65001_100000 | ---
dataset_info:
features:
- name: image
dtype: image
- name: url
dtype: string
- name: prompt
dtype: string
- name: style
dtype: string
splits:
- name: train
num_bytes: 58289413339.615
num_examples: 34997
download_size: 12143883603
dataset_size: 58289413339.615
---
# Dataset Card for "OpenNiji-65001_100000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_qa_num_v5_full_recite_ans_sent_first_permute_rerun | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7835731.7738175
num_examples: 4778
- name: validation
num_bytes: 403389
num_examples: 300
download_size: 1567845
dataset_size: 8239120.7738175
---
# Dataset Card for "squad_qa_num_v5_full_recite_ans_sent_first_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_BFauber__opt125m_10e5_30ep | ---
pretty_name: Evaluation run of BFauber/opt125m_10e5_30ep
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [BFauber/opt125m_10e5_30ep](https://huggingface.co/BFauber/opt125m_10e5_30ep)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BFauber__opt125m_10e5_30ep\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-02T19:37:52.073116](https://huggingface.co/datasets/open-llm-leaderboard/details_BFauber__opt125m_10e5_30ep/blob/main/results_2024-02-02T19-37-52.073116.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.240060989381837,\n\
\ \"acc_stderr\": 0.03017661784008925,\n \"acc_norm\": 0.24034190934400632,\n\
\ \"acc_norm_stderr\": 0.030973459310700287,\n \"mc1\": 0.24724602203182375,\n\
\ \"mc1_stderr\": 0.01510240479735965,\n \"mc2\": 0.4721732134354149,\n\
\ \"mc2_stderr\": 0.01570223334109098\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.2226962457337884,\n \"acc_stderr\": 0.012158314774829931,\n\
\ \"acc_norm\": 0.25597269624573377,\n \"acc_norm_stderr\": 0.012753013241244513\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.27972515435172274,\n\
\ \"acc_stderr\": 0.004479467619464779,\n \"acc_norm\": 0.30302728540131446,\n\
\ \"acc_norm_stderr\": 0.004586276903267079\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.32592592592592595,\n\
\ \"acc_stderr\": 0.040491220417025055,\n \"acc_norm\": 0.32592592592592595,\n\
\ \"acc_norm_stderr\": 0.040491220417025055\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.18421052631578946,\n \"acc_stderr\": 0.0315469804508223,\n\
\ \"acc_norm\": 0.18421052631578946,\n \"acc_norm_stderr\": 0.0315469804508223\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.28,\n\
\ \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.28,\n \
\ \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.22264150943396227,\n \"acc_stderr\": 0.0256042334708991,\n\
\ \"acc_norm\": 0.22264150943396227,\n \"acc_norm_stderr\": 0.0256042334708991\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536955,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536955\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\"\
: 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036622,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036622\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n\
\ \"acc_stderr\": 0.03156809362703174,\n \"acc_norm\": 0.21965317919075145,\n\
\ \"acc_norm_stderr\": 0.03156809362703174\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.19607843137254902,\n \"acc_stderr\": 0.03950581861179962,\n\
\ \"acc_norm\": 0.19607843137254902,\n \"acc_norm_stderr\": 0.03950581861179962\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n\
\ \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2425531914893617,\n \"acc_stderr\": 0.028020226271200217,\n\
\ \"acc_norm\": 0.2425531914893617,\n \"acc_norm_stderr\": 0.028020226271200217\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21929824561403508,\n\
\ \"acc_stderr\": 0.03892431106518752,\n \"acc_norm\": 0.21929824561403508,\n\
\ \"acc_norm_stderr\": 0.03892431106518752\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.3448275862068966,\n \"acc_stderr\": 0.03960933549451207,\n\
\ \"acc_norm\": 0.3448275862068966,\n \"acc_norm_stderr\": 0.03960933549451207\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2328042328042328,\n \"acc_stderr\": 0.02176596167215452,\n \"\
acc_norm\": 0.2328042328042328,\n \"acc_norm_stderr\": 0.02176596167215452\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15079365079365079,\n\
\ \"acc_stderr\": 0.03200686497287392,\n \"acc_norm\": 0.15079365079365079,\n\
\ \"acc_norm_stderr\": 0.03200686497287392\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.17,\n \"acc_stderr\": 0.0377525168068637,\n \
\ \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2064516129032258,\n\
\ \"acc_stderr\": 0.023025899617188716,\n \"acc_norm\": 0.2064516129032258,\n\
\ \"acc_norm_stderr\": 0.023025899617188716\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.22660098522167488,\n \"acc_stderr\": 0.02945486383529298,\n\
\ \"acc_norm\": 0.22660098522167488,\n \"acc_norm_stderr\": 0.02945486383529298\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\"\
: 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.19696969696969696,\n \"acc_stderr\": 0.028335609732463348,\n \"\
acc_norm\": 0.19696969696969696,\n \"acc_norm_stderr\": 0.028335609732463348\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.21243523316062177,\n \"acc_stderr\": 0.029519282616817244,\n\
\ \"acc_norm\": 0.21243523316062177,\n \"acc_norm_stderr\": 0.029519282616817244\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.24102564102564103,\n \"acc_stderr\": 0.021685546665333188,\n\
\ \"acc_norm\": 0.24102564102564103,\n \"acc_norm_stderr\": 0.021685546665333188\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2518518518518518,\n \"acc_stderr\": 0.02646611753895991,\n \
\ \"acc_norm\": 0.2518518518518518,\n \"acc_norm_stderr\": 0.02646611753895991\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.18487394957983194,\n \"acc_stderr\": 0.025215992877954205,\n\
\ \"acc_norm\": 0.18487394957983194,\n \"acc_norm_stderr\": 0.025215992877954205\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.25165562913907286,\n \"acc_stderr\": 0.035433042343899844,\n \"\
acc_norm\": 0.25165562913907286,\n \"acc_norm_stderr\": 0.035433042343899844\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.22568807339449543,\n \"acc_stderr\": 0.01792308766780306,\n \"\
acc_norm\": 0.22568807339449543,\n \"acc_norm_stderr\": 0.01792308766780306\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.18055555555555555,\n \"acc_stderr\": 0.026232878971491652,\n \"\
acc_norm\": 0.18055555555555555,\n \"acc_norm_stderr\": 0.026232878971491652\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.25980392156862747,\n \"acc_stderr\": 0.030778554678693257,\n \"\
acc_norm\": 0.25980392156862747,\n \"acc_norm_stderr\": 0.030778554678693257\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.26582278481012656,\n \"acc_stderr\": 0.028756799629658342,\n \
\ \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.028756799629658342\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.21973094170403587,\n\
\ \"acc_stderr\": 0.027790177064383605,\n \"acc_norm\": 0.21973094170403587,\n\
\ \"acc_norm_stderr\": 0.027790177064383605\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2366412213740458,\n \"acc_stderr\": 0.03727673575596918,\n\
\ \"acc_norm\": 0.2366412213740458,\n \"acc_norm_stderr\": 0.03727673575596918\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2727272727272727,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.26851851851851855,\n\
\ \"acc_stderr\": 0.04284467968052192,\n \"acc_norm\": 0.26851851851851855,\n\
\ \"acc_norm_stderr\": 0.04284467968052192\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.2147239263803681,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.2147239263803681,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.04109974682633932,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.04109974682633932\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n\
\ \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2264957264957265,\n\
\ \"acc_stderr\": 0.027421007295392912,\n \"acc_norm\": 0.2264957264957265,\n\
\ \"acc_norm_stderr\": 0.027421007295392912\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26181353767560667,\n\
\ \"acc_stderr\": 0.015720838678445266,\n \"acc_norm\": 0.26181353767560667,\n\
\ \"acc_norm_stderr\": 0.015720838678445266\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.2514450867052023,\n \"acc_stderr\": 0.02335736578587404,\n\
\ \"acc_norm\": 0.2514450867052023,\n \"acc_norm_stderr\": 0.02335736578587404\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n\
\ \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n\
\ \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.2581699346405229,\n \"acc_stderr\": 0.025058503316958157,\n\
\ \"acc_norm\": 0.2581699346405229,\n \"acc_norm_stderr\": 0.025058503316958157\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2829581993569132,\n\
\ \"acc_stderr\": 0.02558306248998483,\n \"acc_norm\": 0.2829581993569132,\n\
\ \"acc_norm_stderr\": 0.02558306248998483\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2191358024691358,\n \"acc_stderr\": 0.0230167056402622,\n\
\ \"acc_norm\": 0.2191358024691358,\n \"acc_norm_stderr\": 0.0230167056402622\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.21631205673758866,\n \"acc_stderr\": 0.0245617205605628,\n \
\ \"acc_norm\": 0.21631205673758866,\n \"acc_norm_stderr\": 0.0245617205605628\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.258148631029987,\n\
\ \"acc_stderr\": 0.011176923719313402,\n \"acc_norm\": 0.258148631029987,\n\
\ \"acc_norm_stderr\": 0.011176923719313402\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.1875,\n \"acc_stderr\": 0.023709788253811766,\n \
\ \"acc_norm\": 0.1875,\n \"acc_norm_stderr\": 0.023709788253811766\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.24509803921568626,\n \"acc_stderr\": 0.01740181671142766,\n \
\ \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.01740181671142766\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2,\n\
\ \"acc_stderr\": 0.03831305140884601,\n \"acc_norm\": 0.2,\n \
\ \"acc_norm_stderr\": 0.03831305140884601\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.2163265306122449,\n \"acc_stderr\": 0.026358916334904035,\n\
\ \"acc_norm\": 0.2163265306122449,\n \"acc_norm_stderr\": 0.026358916334904035\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n\
\ \"acc_stderr\": 0.030360490154014652,\n \"acc_norm\": 0.24378109452736318,\n\
\ \"acc_norm_stderr\": 0.030360490154014652\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.28313253012048195,\n\
\ \"acc_stderr\": 0.03507295431370518,\n \"acc_norm\": 0.28313253012048195,\n\
\ \"acc_norm_stderr\": 0.03507295431370518\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.3391812865497076,\n \"acc_stderr\": 0.03631053496488905,\n\
\ \"acc_norm\": 0.3391812865497076,\n \"acc_norm_stderr\": 0.03631053496488905\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24724602203182375,\n\
\ \"mc1_stderr\": 0.01510240479735965,\n \"mc2\": 0.4721732134354149,\n\
\ \"mc2_stderr\": 0.01570223334109098\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5201262825572218,\n \"acc_stderr\": 0.014041096664344329\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/BFauber/opt125m_10e5_30ep
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|arc:challenge|25_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|gsm8k|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hellaswag|10_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T19-37-52.073116.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T19-37-52.073116.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- '**/details_harness|winogrande|5_2024-02-02T19-37-52.073116.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-02T19-37-52.073116.parquet'
- config_name: results
data_files:
- split: 2024_02_02T19_37_52.073116
path:
- results_2024-02-02T19-37-52.073116.parquet
- split: latest
path:
- results_2024-02-02T19-37-52.073116.parquet
---
# Dataset Card for Evaluation run of BFauber/opt125m_10e5_30ep
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [BFauber/opt125m_10e5_30ep](https://huggingface.co/BFauber/opt125m_10e5_30ep) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BFauber__opt125m_10e5_30ep",
"harness_winogrande_5",
split="train")
```
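The aggregated metrics live in the "results" configuration and can be loaded the same way; a minimal sketch (the exact columns of the loaded table depend on the run):
```python
from datasets import load_dataset

# load the aggregated results of the run ("results" configuration, "latest" split)
results = load_dataset("open-llm-leaderboard/details_BFauber__opt125m_10e5_30ep",
	"results",
	split="latest")
print(results.column_names)
```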
## Latest results
These are the [latest results from run 2024-02-02T19:37:52.073116](https://huggingface.co/datasets/open-llm-leaderboard/details_BFauber__opt125m_10e5_30ep/blob/main/results_2024-02-02T19-37-52.073116.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.240060989381837,
"acc_stderr": 0.03017661784008925,
"acc_norm": 0.24034190934400632,
"acc_norm_stderr": 0.030973459310700287,
"mc1": 0.24724602203182375,
"mc1_stderr": 0.01510240479735965,
"mc2": 0.4721732134354149,
"mc2_stderr": 0.01570223334109098
},
"harness|arc:challenge|25": {
"acc": 0.2226962457337884,
"acc_stderr": 0.012158314774829931,
"acc_norm": 0.25597269624573377,
"acc_norm_stderr": 0.012753013241244513
},
"harness|hellaswag|10": {
"acc": 0.27972515435172274,
"acc_stderr": 0.004479467619464779,
"acc_norm": 0.30302728540131446,
"acc_norm_stderr": 0.004586276903267079
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.040491220417025055,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.040491220417025055
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.18421052631578946,
"acc_stderr": 0.0315469804508223,
"acc_norm": 0.18421052631578946,
"acc_norm_stderr": 0.0315469804508223
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22264150943396227,
"acc_stderr": 0.0256042334708991,
"acc_norm": 0.22264150943396227,
"acc_norm_stderr": 0.0256042334708991
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036622,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036622
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.21965317919075145,
"acc_stderr": 0.03156809362703174,
"acc_norm": 0.21965317919075145,
"acc_norm_stderr": 0.03156809362703174
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.19607843137254902,
"acc_stderr": 0.03950581861179962,
"acc_norm": 0.19607843137254902,
"acc_norm_stderr": 0.03950581861179962
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2425531914893617,
"acc_stderr": 0.028020226271200217,
"acc_norm": 0.2425531914893617,
"acc_norm_stderr": 0.028020226271200217
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.21929824561403508,
"acc_stderr": 0.03892431106518752,
"acc_norm": 0.21929824561403508,
"acc_norm_stderr": 0.03892431106518752
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.3448275862068966,
"acc_stderr": 0.03960933549451207,
"acc_norm": 0.3448275862068966,
"acc_norm_stderr": 0.03960933549451207
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2328042328042328,
"acc_stderr": 0.02176596167215452,
"acc_norm": 0.2328042328042328,
"acc_norm_stderr": 0.02176596167215452
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15079365079365079,
"acc_stderr": 0.03200686497287392,
"acc_norm": 0.15079365079365079,
"acc_norm_stderr": 0.03200686497287392
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.17,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.17,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2064516129032258,
"acc_stderr": 0.023025899617188716,
"acc_norm": 0.2064516129032258,
"acc_norm_stderr": 0.023025899617188716
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.22660098522167488,
"acc_stderr": 0.02945486383529298,
"acc_norm": 0.22660098522167488,
"acc_norm_stderr": 0.02945486383529298
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.19696969696969696,
"acc_stderr": 0.028335609732463348,
"acc_norm": 0.19696969696969696,
"acc_norm_stderr": 0.028335609732463348
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.21243523316062177,
"acc_stderr": 0.029519282616817244,
"acc_norm": 0.21243523316062177,
"acc_norm_stderr": 0.029519282616817244
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.24102564102564103,
"acc_stderr": 0.021685546665333188,
"acc_norm": 0.24102564102564103,
"acc_norm_stderr": 0.021685546665333188
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2518518518518518,
"acc_stderr": 0.02646611753895991,
"acc_norm": 0.2518518518518518,
"acc_norm_stderr": 0.02646611753895991
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.18487394957983194,
"acc_stderr": 0.025215992877954205,
"acc_norm": 0.18487394957983194,
"acc_norm_stderr": 0.025215992877954205
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.25165562913907286,
"acc_stderr": 0.035433042343899844,
"acc_norm": 0.25165562913907286,
"acc_norm_stderr": 0.035433042343899844
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.22568807339449543,
"acc_stderr": 0.01792308766780306,
"acc_norm": 0.22568807339449543,
"acc_norm_stderr": 0.01792308766780306
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.18055555555555555,
"acc_stderr": 0.026232878971491652,
"acc_norm": 0.18055555555555555,
"acc_norm_stderr": 0.026232878971491652
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25980392156862747,
"acc_stderr": 0.030778554678693257,
"acc_norm": 0.25980392156862747,
"acc_norm_stderr": 0.030778554678693257
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.26582278481012656,
"acc_stderr": 0.028756799629658342,
"acc_norm": 0.26582278481012656,
"acc_norm_stderr": 0.028756799629658342
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.21973094170403587,
"acc_stderr": 0.027790177064383605,
"acc_norm": 0.21973094170403587,
"acc_norm_stderr": 0.027790177064383605
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2366412213740458,
"acc_stderr": 0.03727673575596918,
"acc_norm": 0.2366412213740458,
"acc_norm_stderr": 0.03727673575596918
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2727272727272727,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.2727272727272727,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.26851851851851855,
"acc_stderr": 0.04284467968052192,
"acc_norm": 0.26851851851851855,
"acc_norm_stderr": 0.04284467968052192
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2147239263803681,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.2147239263803681,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.25,
"acc_stderr": 0.04109974682633932,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04109974682633932
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2264957264957265,
"acc_stderr": 0.027421007295392912,
"acc_norm": 0.2264957264957265,
"acc_norm_stderr": 0.027421007295392912
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.26181353767560667,
"acc_stderr": 0.015720838678445266,
"acc_norm": 0.26181353767560667,
"acc_norm_stderr": 0.015720838678445266
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2514450867052023,
"acc_stderr": 0.02335736578587404,
"acc_norm": 0.2514450867052023,
"acc_norm_stderr": 0.02335736578587404
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2581699346405229,
"acc_stderr": 0.025058503316958157,
"acc_norm": 0.2581699346405229,
"acc_norm_stderr": 0.025058503316958157
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2829581993569132,
"acc_stderr": 0.02558306248998483,
"acc_norm": 0.2829581993569132,
"acc_norm_stderr": 0.02558306248998483
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2191358024691358,
"acc_stderr": 0.0230167056402622,
"acc_norm": 0.2191358024691358,
"acc_norm_stderr": 0.0230167056402622
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.21631205673758866,
"acc_stderr": 0.0245617205605628,
"acc_norm": 0.21631205673758866,
"acc_norm_stderr": 0.0245617205605628
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.258148631029987,
"acc_stderr": 0.011176923719313402,
"acc_norm": 0.258148631029987,
"acc_norm_stderr": 0.011176923719313402
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.1875,
"acc_stderr": 0.023709788253811766,
"acc_norm": 0.1875,
"acc_norm_stderr": 0.023709788253811766
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.01740181671142766,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.01740181671142766
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.2,
"acc_stderr": 0.03831305140884601,
"acc_norm": 0.2,
"acc_norm_stderr": 0.03831305140884601
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.2163265306122449,
"acc_stderr": 0.026358916334904035,
"acc_norm": 0.2163265306122449,
"acc_norm_stderr": 0.026358916334904035
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.030360490154014652,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.030360490154014652
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.28313253012048195,
"acc_stderr": 0.03507295431370518,
"acc_norm": 0.28313253012048195,
"acc_norm_stderr": 0.03507295431370518
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3391812865497076,
"acc_stderr": 0.03631053496488905,
"acc_norm": 0.3391812865497076,
"acc_norm_stderr": 0.03631053496488905
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24724602203182375,
"mc1_stderr": 0.01510240479735965,
"mc2": 0.4721732134354149,
"mc2_stderr": 0.01570223334109098
},
"harness|winogrande|5": {
"acc": 0.5201262825572218,
"acc_stderr": 0.014041096664344329
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
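To work with these numbers programmatically, the raw results file for this run can also be fetched directly from the repository; a minimal sketch using `hf_hub_download` (the internal layout of the JSON is not assumed here, only inspected):
```python
import json
from huggingface_hub import hf_hub_download

# fetch the raw results file for this run and inspect its top-level structure
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_BFauber__opt125m_10e5_30ep",
    repo_type="dataset",
    filename="results_2024-02-02T19-37-52.073116.json",
)
with open(path) as f:
    run_results = json.load(f)
print(list(run_results.keys()))
```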
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
worldboss/bank-of-ghana-treasury-bills | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- conversational
- text-generation
- summarization
- question-answering
- text-classification
- text-retrieval
- translation
pretty_name: No Robots
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Issue_Date
dtype: string
- name: Tender
dtype: int64
- name: Security_Type
dtype: string
- name: Discount_Rate
dtype: float64
- name: Interest_Rate
dtype: float64
splits:
- name: train
num_bytes: 50338
num_examples: 958
download_size: 23906
dataset_size: 50338
tags:
- ghana
- news
- ghana-news
- bank-of-ghana
- exchange-rates
- ghana data
- bank of ghana
---
### Description 🙅♂️🤖
Historical and real-time treasury bill rates from the [Bank of Ghana](https://www.bog.gov.gh/treasury-and-the-markets/treasury-bill-rates/).
Click Here: [](https://colab.research.google.com/drive/1zZUIyp9zBhwL5CqHS3Ggf5vJCr_yTYw0?usp=sharing)
### Data Format
```json
{
  "Issue_Date": "...",
  "Tender": "...",
  "Security_Type": "...",
  "Discount_Rate": "...",
  "Interest_Rate": "..."
}
```
### Load Dataset
```shell
pip install datasets
```
```python
from datasets import load_dataset
import pandas as pd

# load the train split and preview the first few rows
treasury = load_dataset("worldboss/bank-of-ghana-treasury-bills", split="train")
pd.DataFrame(treasury).head()
```
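From there the usual pandas workflow applies; a minimal sketch (assuming the `Issue_Date` strings are in a format pandas can parse) that compares average interest rates per security type:
```python
import pandas as pd
from datasets import load_dataset

# build a DataFrame, parse issue dates, and average interest rates per security type
df = load_dataset("worldboss/bank-of-ghana-treasury-bills", split="train").to_pandas()
df["Issue_Date"] = pd.to_datetime(df["Issue_Date"])
print(df.groupby("Security_Type")["Interest_Rate"].mean().sort_values(ascending=False))
```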
### Author
The data was constructed by Theophilus Siameh (theodondre@gmail.com). |
CyberHarem/tsuchimiya_kagura_gareizero | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Tsuchimiya Kagura (Ga-Rei: Zero)
This is the dataset of Tsuchimiya Kagura (Ga-Rei: Zero), containing 337 images and their tags.
The core tags of this character are `short_hair, black_hair, blue_eyes, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 337 | 268.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsuchimiya_kagura_gareizero/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 337 | 201.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsuchimiya_kagura_gareizero/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 619 | 348.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsuchimiya_kagura_gareizero/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 337 | 268.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsuchimiya_kagura_gareizero/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 619 | 448.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsuchimiya_kagura_gareizero/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tsuchimiya_kagura_gareizero',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, serafuku, skirt, solo, katana |
| 1 | 5 |  |  |  |  |  | 1girl, serafuku, solo, parody, katana, anime_coloring, open_mouth |
| 2 | 6 |  |  |  |  |  | 1girl, anime_coloring, profile, serafuku, solo |
| 3 | 13 |  |  |  |  |  | 1girl, anime_coloring, parody, solo, open_mouth, serafuku, tree |
| 4 | 8 |  |  |  |  |  | 1girl, parody, solo, anime_coloring, open_mouth |
| 5 | 7 |  |  |  |  |  | 1girl, solo, randoseru, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | serafuku | skirt | solo | katana | parody | anime_coloring | open_mouth | profile | tree | randoseru | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:-------|:---------|:---------|:-----------------|:-------------|:----------|:-------|:------------|:-------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | X | X | X | X | X | | | | |
| 2 | 6 |  |  |  |  |  | X | X | | X | | | X | | X | | | |
| 3 | 13 |  |  |  |  |  | X | X | | X | | X | X | X | | X | | |
| 4 | 8 |  |  |  |  |  | X | | | X | | X | X | X | | | | |
| 5 | 7 |  |  |  |  |  | X | | | X | | | | | | | X | X |
|
106306046derrick/name_of_your_dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 11058568885.924
num_examples: 1390436
- name: validation
num_bytes: 834808939.688
num_examples: 158152
download_size: 11785641447
dataset_size: 11893377825.612
---
# Dataset Card for "name_of_your_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/saejima_kiyomi_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of saejima_kiyomi/冴島清美 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of saejima_kiyomi/冴島清美 (THE iDOLM@STER: Cinderella Girls), containing 59 images and their tags.
The core tags of this character are `black_hair, short_hair, black_eyes, bangs, glasses`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 59 | 65.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saejima_kiyomi_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 59 | 40.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saejima_kiyomi_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 131 | 77.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saejima_kiyomi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 59 | 60.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saejima_kiyomi_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 131 | 107.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saejima_kiyomi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/saejima_kiyomi_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, open_mouth, smile, solo, looking_at_viewer, armband, bow, microphone, skirt, blush, dress, idol, thighhighs, wrist_cuffs, frills, hair_ornament, hairband, one_eye_closed |
| 1 | 11 |  |  |  |  |  | 1girl, solo, armband, blush, looking_at_viewer, white_shirt, open_mouth, school_uniform, short_sleeves, short_twintails, simple_background, smile, white_background, brown_dress, brown_hair, collared_shirt, low_twintails, parted_bangs, red_bowtie |
| 2 | 6 |  |  |  |  |  | 1girl, smile, solo, bracelet, dress, earrings, elbow_gloves, hairband, necklace, one_eye_closed, black_gloves |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | smile | solo | looking_at_viewer | armband | bow | microphone | skirt | blush | dress | idol | thighhighs | wrist_cuffs | frills | hair_ornament | hairband | one_eye_closed | white_shirt | school_uniform | short_sleeves | short_twintails | simple_background | white_background | brown_dress | brown_hair | collared_shirt | low_twintails | parted_bangs | red_bowtie | bracelet | earrings | elbow_gloves | necklace | black_gloves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:--------|:-------|:--------------------|:----------|:------|:-------------|:--------|:--------|:--------|:-------|:-------------|:--------------|:---------|:----------------|:-----------|:-----------------|:--------------|:-----------------|:----------------|:------------------|:--------------------|:-------------------|:--------------|:-------------|:-----------------|:----------------|:---------------|:-------------|:-----------|:-----------|:---------------|:-----------|:---------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | X | X | X | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | X | | | | | | | X | | | | | | X | X | | | | | | | | | | | | | X | X | X | X | X |
|
qkrwnstj/asdaasdzxcsd | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1557071013.093
num_examples: 11647
download_size: 1546169486
dataset_size: 1557071013.093
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/nisshin_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nisshin (Kantai Collection)
This is the dataset of nisshin (Kantai Collection), containing 206 images and their tags.
The core tags of this character are `black_hair, long_hair, brown_eyes, very_long_hair, high_ponytail, short_eyebrows, thick_eyebrows, multi-tied_hair, red_ribbon, ribbon, headgear, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 206 | 190.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nisshin_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 206 | 137.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nisshin_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 498 | 283.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nisshin_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 206 | 180.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nisshin_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 498 | 351.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nisshin_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nisshin_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
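### Load an IMG+TXT Package (sketch)
If you prefer the plain `IMG+TXT` packages over waifuc, a minimal sketch is shown below. It assumes the archive unpacks to a flat folder in which every image is paired with a same-stem `.txt` tag file (the usual IMG+TXT layout); adjust the paths if the layout differs.
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from PIL import Image
# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/nisshin_kantaicollection',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# iterate over image/tag pairs (assumes one same-stem .txt tag file per image)
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in {'.png', '.jpg', '.jpeg', '.webp'}:
        continue
    image = Image.open(os.path.join(dataset_dir, name))
    txt_path = os.path.join(dataset_dir, stem + '.txt')
    tags = ''
    if os.path.exists(txt_path):
        with open(txt_path, 'r', encoding='utf-8') as f:
            tags = f.read().strip()
    print(name, image.size, tags)
```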
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, alternate_costume, long_sleeves, solo, hair_tie, looking_at_viewer, white_shirt, ascot, white_background, collared_shirt, food, dated, holding, simple_background, bag, black_pantyhose, blush, open_mouth, red_footwear, red_skirt, shoes, smile, suspender_skirt |
| 1 | 23 |  |  |  |  |  | 1girl, long_sleeves, miko, red_hakama, solo, wide_sleeves, kariginu, ribbon-trimmed_sleeves, hair_tie, hakama_skirt, looking_at_viewer, open_mouth, folding_fan, hair_ribbon, smile, simple_background, holding, white_background |
| 2 | 11 |  |  |  |  |  | 1girl, solo, blush, looking_at_viewer, cowboy_shot, navel, open_mouth, small_breasts, outdoors, red_bikini, day, smile, sky, holding, side-tie_bikini_bottom, beach, collarbone, fang, palm_tree, wet |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | alternate_costume | long_sleeves | solo | hair_tie | looking_at_viewer | white_shirt | ascot | white_background | collared_shirt | food | dated | holding | simple_background | bag | black_pantyhose | blush | open_mouth | red_footwear | red_skirt | shoes | smile | suspender_skirt | miko | red_hakama | wide_sleeves | kariginu | ribbon-trimmed_sleeves | hakama_skirt | folding_fan | hair_ribbon | cowboy_shot | navel | small_breasts | outdoors | red_bikini | day | sky | side-tie_bikini_bottom | beach | collarbone | fang | palm_tree | wet |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:---------------|:-------|:-----------|:--------------------|:--------------|:--------|:-------------------|:-----------------|:-------|:--------|:----------|:--------------------|:------|:------------------|:--------|:-------------|:---------------|:------------|:--------|:--------|:------------------|:-------|:-------------|:---------------|:-----------|:-------------------------|:---------------|:--------------|:--------------|:--------------|:--------|:----------------|:-----------|:-------------|:------|:------|:-------------------------|:--------|:-------------|:-------|:------------|:------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 23 |  |  |  |  |  | X | | X | X | X | X | | | X | | | | X | X | | | | X | | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 2 | 11 |  |  |  |  |  | X | | | X | | X | | | | | | | X | | | | X | X | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
P1ayer-1/askscience | ---
dataset_info:
features:
- name: title
sequence: string
- name: over_18
sequence: bool
- name: post_content
dtype: string
- name: C1
sequence: string
- name: C2
sequence: string
- name: C3
sequence: string
splits:
- name: train
num_bytes: 235856987.50364494
num_examples: 117709
- name: test
num_bytes: 12413103.82031179
num_examples: 6195
- name: validation
num_bytes: 12415107.549742026
num_examples: 6196
download_size: 172214121
dataset_size: 260685198.87369874
---
# Dataset Card for "askscience"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
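The feature schema above (post title and body plus comment fields `C1`–`C3`) can be inspected directly with 🤗 `datasets`; the following is a minimal sketch, assuming the repo id of this card and that the data files load without a custom loading script.
```python
from datasets import load_dataset
# load the train / test / validation splits declared in dataset_info above
ds = load_dataset('P1ayer-1/askscience')
print(ds)
sample = ds['train'][0]
print(sample['title'])          # sequence of strings
print(sample['post_content'])   # the post body
print(sample['C1'][:3])         # comment fields; C1-C3 presumably hold comment threads (an assumption)
```
|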
CyberHarem/ning_hai_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ning_hai/寧海/宁海 (Azur Lane)
This is the dataset of ning_hai/寧海/宁海 (Azur Lane), containing 208 images and their tags.
The core tags of this character are `breasts, ahoge, bangs, purple_eyes, hair_rings, hairband, long_hair, twintails, medium_breasts, black_hair, hair_ornament, purple_hair, white_hairband, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 208 | 238.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ning_hai_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 208 | 155.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ning_hai_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 474 | 311.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ning_hai_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 208 | 217.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ning_hai_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 474 | 411.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ning_hai_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ning_hai_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
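As a follow-up usage example, the snippet below counts tag frequencies across the loaded items. It continues from the `source` defined above and assumes `item.meta['tags']` is either a list of tag names or a mapping from tag name to score (the exact type is not guaranteed here, so both cases are handled).
```python
from collections import Counter
# count how often each tag occurs across the crawled images
tag_counter = Counter()
for item in source:
    tags = item.meta.get('tags', {})
    # tags may be a mapping (tag -> score) or a plain list of tag names
    tag_counter.update(tags.keys() if isinstance(tags, dict) else tags)
# print the 20 most common tags
for tag, count in tag_counter.most_common(20):
    print(f'{count:4d}  {tag}')
```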
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | purple_bikini, choker, cleavage, fur_trim, looking_at_viewer, blush, jacket, navel, 1girl, collarbone, off_shoulder, open_clothes, solo, eyewear_on_head, purple_gloves, bare_shoulders, simple_background, cowboy_shot, open_mouth, long_sleeves, sunglasses, white_background |
| 1 | 39 |  |  |  |  |  | china_dress, 1girl, looking_at_viewer, fur_trim, solo, purple_dress, jacket, long_sleeves, black_thighhighs, blush, cleavage_cutout, pelvic_curtain, simple_background, white_panties, open_mouth, white_background, side-tie_panties, hairpods, panda, holding, open_coat, v-shaped_eyebrows |
| 2 | 5 |  |  |  |  |  | 1girl, cleavage, fur_trim, short_shorts, solo, closed_mouth, full_body, looking_at_viewer, official_alternate_costume, smile, standing, ass, black_thighhighs, blush, denim_shorts, hood_down, looking_back, white_footwear, white_jacket, blue_shorts, boots, depth_of_field, food, holding, oil-paper_umbrella, puffy_long_sleeves, white_background |
| 3 | 21 |  |  |  |  |  | blush, rabbit_ears, looking_at_viewer, 1girl, double_bun, bare_shoulders, playboy_bunny, purple_leotard, solo, fake_animal_ears, fur_trim, rabbit_tail, open_mouth, cleavage, high_heels, hair_ribbon, holding, purple_footwear, ass, brown_pantyhose, covered_navel, jacket, long_sleeves, sidelocks, simple_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | purple_bikini | choker | cleavage | fur_trim | looking_at_viewer | blush | jacket | navel | 1girl | collarbone | off_shoulder | open_clothes | solo | eyewear_on_head | purple_gloves | bare_shoulders | simple_background | cowboy_shot | open_mouth | long_sleeves | sunglasses | white_background | china_dress | purple_dress | black_thighhighs | cleavage_cutout | pelvic_curtain | white_panties | side-tie_panties | hairpods | panda | holding | open_coat | v-shaped_eyebrows | short_shorts | closed_mouth | full_body | official_alternate_costume | smile | standing | ass | denim_shorts | hood_down | looking_back | white_footwear | white_jacket | blue_shorts | boots | depth_of_field | food | oil-paper_umbrella | puffy_long_sleeves | rabbit_ears | double_bun | playboy_bunny | purple_leotard | fake_animal_ears | rabbit_tail | high_heels | hair_ribbon | purple_footwear | brown_pantyhose | covered_navel | sidelocks |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------|:---------|:-----------|:-----------|:--------------------|:--------|:---------|:--------|:--------|:-------------|:---------------|:---------------|:-------|:------------------|:----------------|:-----------------|:--------------------|:--------------|:-------------|:---------------|:-------------|:-------------------|:--------------|:---------------|:-------------------|:------------------|:-----------------|:----------------|:-------------------|:-----------|:--------|:----------|:------------|:--------------------|:---------------|:---------------|:------------|:-----------------------------|:--------|:-----------|:------|:---------------|:------------|:---------------|:-----------------|:---------------|:--------------|:--------|:-----------------|:-------|:---------------------|:---------------------|:--------------|:-------------|:----------------|:-----------------|:-------------------|:--------------|:-------------|:--------------|:------------------|:------------------|:----------------|:------------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 39 |  |  |  |  |  | | | | X | X | X | X | | X | | | | X | | | | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | | | X | X | X | X | | | X | | | | X | | | | | | | | | X | | | X | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 3 | 21 |  |  |  |  |  | | | X | X | X | X | X | | X | | | | X | | | X | X | | X | X | | X | | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
Chillax2641/embedded_faqs_medicare | ---
license: mit
---
|
PotatoGod/testing | ---
license: afl-3.0
---
|
CyberHarem/kisaragi_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kisaragi/如月/如月 (Kantai Collection)
This is the dataset of kisaragi/如月/如月 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `long_hair, hair_ornament, brown_hair, purple_eyes, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 514.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 316.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1209 | 693.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 464.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1209 | 934.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kisaragi_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
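### Convert an IMG+TXT Package to an `imagefolder` (sketch)
If you would rather handle the images with 🤗 `datasets`, one option is to turn an extracted IMG+TXT package into an `imagefolder` with a `metadata.jsonl`, so the tag text becomes a regular column. This is only a sketch: it assumes the archive unpacks to a flat folder of images with same-stem `.txt` tag files.
```python
import json
import os
import zipfile
from datasets import load_dataset
from huggingface_hub import hf_hub_download
# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/kisaragi_kantaicollection',
    repo_type='dataset',
    filename='dataset-800.zip',
)
data_dir = 'kisaragi_800'
os.makedirs(data_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(data_dir)
# write a metadata.jsonl so that the tag text is loaded alongside each image
with open(os.path.join(data_dir, 'metadata.jsonl'), 'w', encoding='utf-8') as meta:
    for name in sorted(os.listdir(data_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in {'.png', '.jpg', '.jpeg', '.webp'}:
            continue
        txt_path = os.path.join(data_dir, stem + '.txt')
        tags = ''
        if os.path.exists(txt_path):
            with open(txt_path, 'r', encoding='utf-8') as f:
                tags = f.read().strip()
        meta.write(json.dumps({'file_name': name, 'tags': tags}) + '\n')
# load as a standard image dataset with an extra "tags" column
ds = load_dataset('imagefolder', data_dir=data_dir, split='train')
print(ds)
```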
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 22 |  |  |  |  |  | 1girl, green_sailor_collar, green_skirt, pleated_skirt, red_neckerchief, serafuku, solo, crescent_pin, looking_at_viewer, smile, white_background, cowboy_shot, black_jacket, black_thighhighs, simple_background |
| 1 | 11 |  |  |  |  |  | 1girl, blush, long_sleeves, red_neckerchief, serafuku, simple_background, solo, crescent_pin, green_skirt, jacket, pleated_skirt, smile, white_background, closed_mouth, green_sailor_collar, looking_at_viewer, twitter_username, purple_hair, one-hour_drawing_challenge, black_thighhighs, cowboy_shot |
| 2 | 12 |  |  |  |  |  | 1girl, green_sailor_collar, red_neckerchief, serafuku, smile, solo, upper_body, looking_at_viewer, blue_jacket, crescent_pin, white_background, simple_background |
| 3 | 8 |  |  |  |  |  | 1girl, serafuku, smile, solo, looking_at_viewer, pleated_skirt, black_thighhighs, brown_eyes |
| 4 | 16 |  |  |  |  |  | 1girl, solo, white_shirt, blue_skirt, looking_at_viewer, smile, blush, short_sleeves, alternate_costume, collarbone, earrings, off-shoulder_shirt, white_background, open_mouth, simple_background, cowboy_shot, striped_skirt |
| 5 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, solo, twitter_username, blush, navel, smile, collarbone, cowboy_shot, simple_background, small_breasts, brown_eyes, cleavage, medium_breasts, one-hour_drawing_challenge, white_background, white_bikini, purple_hair |
| 6 | 7 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, medium_breasts, solo, blush, collarbone, navel, outdoors, smile, blue_sky, cloud, cowboy_shot, day, bangs, ocean, white_bikini, closed_mouth, crescent_pin, frilled_bikini, front-tie_bikini_top, jacket, open_mouth, purple_hair, side-tie_bikini_bottom |
| 7 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, enmaided, maid_apron, white_apron, frilled_apron, maid_headdress, open_mouth, simple_background, white_background, black_dress, blush, breasts, brown_eyes, cowboy_shot, heart, puffy_sleeves, ribbon, short_sleeves, waist_apron, wrist_cuffs |
| 8 | 12 |  |  |  |  |  | kimono, 1girl, obi, alternate_costume, smile, solo, floral_print, looking_at_viewer, blush, purple_hair, wide_sleeves, long_sleeves, open_mouth |
| 9 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, red_dress, smile, solo, alternate_costume, christmas, black_thighhighs, blush, red_capelet, santa_costume, cowboy_shot, open_mouth, frilled_dress, fur-trimmed_dress, holding, bangs, brown_gloves, gift, pom_pom_(clothes), purple_hair, sack, standing, yellow_bow |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | green_sailor_collar | green_skirt | pleated_skirt | red_neckerchief | serafuku | solo | crescent_pin | looking_at_viewer | smile | white_background | cowboy_shot | black_jacket | black_thighhighs | simple_background | blush | long_sleeves | jacket | closed_mouth | twitter_username | purple_hair | one-hour_drawing_challenge | upper_body | blue_jacket | brown_eyes | white_shirt | blue_skirt | short_sleeves | alternate_costume | collarbone | earrings | off-shoulder_shirt | open_mouth | striped_skirt | navel | small_breasts | cleavage | medium_breasts | white_bikini | outdoors | blue_sky | cloud | day | bangs | ocean | frilled_bikini | front-tie_bikini_top | side-tie_bikini_bottom | enmaided | maid_apron | white_apron | frilled_apron | maid_headdress | black_dress | breasts | heart | puffy_sleeves | ribbon | waist_apron | wrist_cuffs | kimono | obi | floral_print | wide_sleeves | red_dress | christmas | red_capelet | santa_costume | frilled_dress | fur-trimmed_dress | holding | brown_gloves | gift | pom_pom_(clothes) | sack | standing | yellow_bow |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------|:--------------|:----------------|:------------------|:-----------|:-------|:---------------|:--------------------|:--------|:-------------------|:--------------|:---------------|:-------------------|:--------------------|:--------|:---------------|:---------|:---------------|:-------------------|:--------------|:-----------------------------|:-------------|:--------------|:-------------|:--------------|:-------------|:----------------|:--------------------|:-------------|:-----------|:---------------------|:-------------|:----------------|:--------|:----------------|:-----------|:-----------------|:---------------|:-----------|:-----------|:--------|:------|:--------|:--------|:-----------------|:-----------------------|:-------------------------|:-----------|:-------------|:--------------|:----------------|:-----------------|:--------------|:----------|:--------|:----------------|:---------|:--------------|:--------------|:---------|:------|:---------------|:---------------|:------------|:------------|:--------------|:----------------|:----------------|:--------------------|:----------|:---------------|:-------|:--------------------|:-------|:-----------|:-------------|
| 0 | 22 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 12 |  |  |  |  |  | X | X | | | X | X | X | X | X | X | X | | | | X | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | | X | | X | X | | X | X | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 16 |  |  |  |  |  | X | | | | | | X | | X | X | X | X | | | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | | | | | X | | X | X | X | X | | | X | X | | | | X | X | X | | | X | | | | | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | | | | | X | X | X | X | | X | | | | X | | X | X | | X | | | | | | | | | X | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | | | | X | | X | X | X | X | | | X | X | | | | | | | | | X | | | X | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 8 | 12 |  |  |  |  |  | X | | | | | | X | | X | X | | | | | | X | X | | | | X | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | |
| 9 | 8 |  |  |  |  |  | X | | | | | | X | | X | X | | X | | X | | X | | | | | X | | | | | | | | X | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
yzhuang/metatree_delta_ailerons | ---
dataset_info:
features:
- name: id
dtype: int64
- name: X
sequence: float64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 297780
num_examples: 4963
- name: validation
num_bytes: 129960
num_examples: 2166
download_size: 103663
dataset_size: 427740
---
# Dataset Card for "metatree_delta_ailerons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OmarAboBakr/AIC_dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 269824602
num_examples: 60878
- name: validation
num_bytes: 33236129
num_examples: 7610
- name: test
num_bytes: 34086738
num_examples: 7610
download_size: 168569802
dataset_size: 337147469
---
# Dataset Card for "AIC_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sirabhop/nl2sql_food_fieldname | ---
task_categories:
- text2text-generation
language:
- en
tags:
- code
size_categories:
- n<1K
pretty_name: nl2sql-sample
--- |
Fmfawaz32/mitre-attack | ---
license: apache-2.0
---
|
Rams901/sql-create-context-10k | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4688784
num_examples: 10000
download_size: 2097435
dataset_size: 4688784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sql-create-context-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distilled-one-sec-cv12-each-chunk-uniq/chunk_143 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1145775452.0
num_examples: 223261
download_size: 1173532232
dataset_size: 1145775452.0
---
# Dataset Card for "chunk_143"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_SLAM-group__NewHope | ---
pretty_name: Evaluation run of SLAM-group/NewHope
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [SLAM-group/NewHope](https://huggingface.co/SLAM-group/NewHope) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SLAM-group__NewHope\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-08-02T16:20:26.294433](https://huggingface.co/datasets/open-llm-leaderboard/details_SLAM-group__NewHope/blob/main/results_2023-08-02T16%3A20%3A26.294433.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5588691829632426,\n\
\ \"acc_stderr\": 0.03433115773924322,\n \"acc_norm\": 0.5628652703397449,\n\
\ \"acc_norm_stderr\": 0.03430877590228174,\n \"mc1\": 0.3243574051407589,\n\
\ \"mc1_stderr\": 0.016387976779647935,\n \"mc2\": 0.44868368066946906,\n\
\ \"mc2_stderr\": 0.015140951474620613\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5767918088737202,\n \"acc_stderr\": 0.014438036220848022,\n\
\ \"acc_norm\": 0.6092150170648464,\n \"acc_norm_stderr\": 0.014258563880513782\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6366261700856403,\n\
\ \"acc_stderr\": 0.004799882248494812,\n \"acc_norm\": 0.8399721171081458,\n\
\ \"acc_norm_stderr\": 0.003658826208101608\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.43703703703703706,\n\
\ \"acc_stderr\": 0.04284958639753399,\n \"acc_norm\": 0.43703703703703706,\n\
\ \"acc_norm_stderr\": 0.04284958639753399\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5328947368421053,\n \"acc_stderr\": 0.040601270352363966,\n\
\ \"acc_norm\": 0.5328947368421053,\n \"acc_norm_stderr\": 0.040601270352363966\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n\
\ \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5811320754716981,\n \"acc_stderr\": 0.03036505082911521,\n\
\ \"acc_norm\": 0.5811320754716981,\n \"acc_norm_stderr\": 0.03036505082911521\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5763888888888888,\n\
\ \"acc_stderr\": 0.04132125019723369,\n \"acc_norm\": 0.5763888888888888,\n\
\ \"acc_norm_stderr\": 0.04132125019723369\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n\
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952344,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952344\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5260115606936416,\n\
\ \"acc_stderr\": 0.038073017265045125,\n \"acc_norm\": 0.5260115606936416,\n\
\ \"acc_norm_stderr\": 0.038073017265045125\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.0433643270799318,\n\
\ \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.0433643270799318\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.67,\n \"acc_stderr\": 0.047258156262526094,\n \"acc_norm\": 0.67,\n\
\ \"acc_norm_stderr\": 0.047258156262526094\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4595744680851064,\n \"acc_stderr\": 0.03257901482099835,\n\
\ \"acc_norm\": 0.4595744680851064,\n \"acc_norm_stderr\": 0.03257901482099835\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.04142439719489361,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.04142439719489361\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n\
\ \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3148148148148148,\n \"acc_stderr\": 0.023919984164047732,\n \"\
acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.023919984164047732\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.042857142857142816,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.042857142857142816\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6870967741935484,\n\
\ \"acc_stderr\": 0.02637756702864586,\n \"acc_norm\": 0.6870967741935484,\n\
\ \"acc_norm_stderr\": 0.02637756702864586\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.39901477832512317,\n \"acc_stderr\": 0.03445487686264715,\n\
\ \"acc_norm\": 0.39901477832512317,\n \"acc_norm_stderr\": 0.03445487686264715\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\"\
: 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6606060606060606,\n \"acc_stderr\": 0.03697442205031595,\n\
\ \"acc_norm\": 0.6606060606060606,\n \"acc_norm_stderr\": 0.03697442205031595\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7070707070707071,\n \"acc_stderr\": 0.032424979581788166,\n \"\
acc_norm\": 0.7070707070707071,\n \"acc_norm_stderr\": 0.032424979581788166\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\
\ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5256410256410257,\n \"acc_stderr\": 0.025317649726448656,\n\
\ \"acc_norm\": 0.5256410256410257,\n \"acc_norm_stderr\": 0.025317649726448656\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547307,\n \
\ \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547307\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6134453781512605,\n \"acc_stderr\": 0.03163145807552379,\n \
\ \"acc_norm\": 0.6134453781512605,\n \"acc_norm_stderr\": 0.03163145807552379\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7211009174311926,\n \"acc_stderr\": 0.01922746887646351,\n \"\
acc_norm\": 0.7211009174311926,\n \"acc_norm_stderr\": 0.01922746887646351\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7745098039215687,\n \"acc_stderr\": 0.029331162294251735,\n \"\
acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.029331162294251735\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7426160337552743,\n \"acc_stderr\": 0.028458820991460295,\n \
\ \"acc_norm\": 0.7426160337552743,\n \"acc_norm_stderr\": 0.028458820991460295\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6547085201793722,\n\
\ \"acc_stderr\": 0.03191100192835794,\n \"acc_norm\": 0.6547085201793722,\n\
\ \"acc_norm_stderr\": 0.03191100192835794\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6259541984732825,\n \"acc_stderr\": 0.042438692422305246,\n\
\ \"acc_norm\": 0.6259541984732825,\n \"acc_norm_stderr\": 0.042438692422305246\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7107438016528925,\n \"acc_stderr\": 0.04139112727635463,\n \"\
acc_norm\": 0.7107438016528925,\n \"acc_norm_stderr\": 0.04139112727635463\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6944444444444444,\n\
\ \"acc_stderr\": 0.04453197507374983,\n \"acc_norm\": 0.6944444444444444,\n\
\ \"acc_norm_stderr\": 0.04453197507374983\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6380368098159509,\n \"acc_stderr\": 0.037757007291414416,\n\
\ \"acc_norm\": 0.6380368098159509,\n \"acc_norm_stderr\": 0.037757007291414416\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.2767857142857143,\n\
\ \"acc_stderr\": 0.04246624336697625,\n \"acc_norm\": 0.2767857142857143,\n\
\ \"acc_norm_stderr\": 0.04246624336697625\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7991452991452992,\n\
\ \"acc_stderr\": 0.026246772946890467,\n \"acc_norm\": 0.7991452991452992,\n\
\ \"acc_norm_stderr\": 0.026246772946890467\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7496807151979565,\n\
\ \"acc_stderr\": 0.015491088951494569,\n \"acc_norm\": 0.7496807151979565,\n\
\ \"acc_norm_stderr\": 0.015491088951494569\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.025906632631016127,\n\
\ \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.025906632631016127\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38100558659217876,\n\
\ \"acc_stderr\": 0.016242028834053613,\n \"acc_norm\": 0.38100558659217876,\n\
\ \"acc_norm_stderr\": 0.016242028834053613\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6405228758169934,\n \"acc_stderr\": 0.027475969910660952,\n\
\ \"acc_norm\": 0.6405228758169934,\n \"acc_norm_stderr\": 0.027475969910660952\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6366559485530546,\n\
\ \"acc_stderr\": 0.027316847674192714,\n \"acc_norm\": 0.6366559485530546,\n\
\ \"acc_norm_stderr\": 0.027316847674192714\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.026725868809100793,\n\
\ \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.026725868809100793\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.425531914893617,\n \"acc_stderr\": 0.02949482760014438,\n \
\ \"acc_norm\": 0.425531914893617,\n \"acc_norm_stderr\": 0.02949482760014438\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44589308996088656,\n\
\ \"acc_stderr\": 0.012695244711379776,\n \"acc_norm\": 0.44589308996088656,\n\
\ \"acc_norm_stderr\": 0.012695244711379776\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.03016191193076711,\n\
\ \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.03016191193076711\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5473856209150327,\n \"acc_stderr\": 0.020136790918492523,\n \
\ \"acc_norm\": 0.5473856209150327,\n \"acc_norm_stderr\": 0.020136790918492523\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\
\ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\
\ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6285714285714286,\n \"acc_stderr\": 0.03093285879278985,\n\
\ \"acc_norm\": 0.6285714285714286,\n \"acc_norm_stderr\": 0.03093285879278985\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.736318407960199,\n\
\ \"acc_stderr\": 0.031157150869355558,\n \"acc_norm\": 0.736318407960199,\n\
\ \"acc_norm_stderr\": 0.031157150869355558\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.035887028128263686,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.035887028128263686\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n\
\ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n\
\ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.03158149539338734,\n\
\ \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.03158149539338734\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3243574051407589,\n\
\ \"mc1_stderr\": 0.016387976779647935,\n \"mc2\": 0.44868368066946906,\n\
\ \"mc2_stderr\": 0.015140951474620613\n }\n}\n```"
repo_url: https://huggingface.co/SLAM-group/NewHope
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|arc:challenge|25_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hellaswag|10_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T16:20:26.294433.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T16:20:26.294433.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-02T16:20:26.294433.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-02T16:20:26.294433.parquet'
- config_name: results
data_files:
- split: 2023_08_02T16_20_26.294433
path:
- results_2023-08-02T16:20:26.294433.parquet
- split: latest
path:
- results_2023-08-02T16:20:26.294433.parquet
---
# Dataset Card for Evaluation run of SLAM-group/NewHope
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/SLAM-group/NewHope
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [SLAM-group/NewHope](https://huggingface.co/SLAM-group/NewHope) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_SLAM-group__NewHope",
"harness_truthfulqa_mc_0",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-08-02T16:20:26.294433](https://huggingface.co/datasets/open-llm-leaderboard/details_SLAM-group__NewHope/blob/main/results_2023-08-02T16%3A20%3A26.294433.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5588691829632426,
"acc_stderr": 0.03433115773924322,
"acc_norm": 0.5628652703397449,
"acc_norm_stderr": 0.03430877590228174,
"mc1": 0.3243574051407589,
"mc1_stderr": 0.016387976779647935,
"mc2": 0.44868368066946906,
"mc2_stderr": 0.015140951474620613
},
"harness|arc:challenge|25": {
"acc": 0.5767918088737202,
"acc_stderr": 0.014438036220848022,
"acc_norm": 0.6092150170648464,
"acc_norm_stderr": 0.014258563880513782
},
"harness|hellaswag|10": {
"acc": 0.6366261700856403,
"acc_stderr": 0.004799882248494812,
"acc_norm": 0.8399721171081458,
"acc_norm_stderr": 0.003658826208101608
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.43703703703703706,
"acc_stderr": 0.04284958639753399,
"acc_norm": 0.43703703703703706,
"acc_norm_stderr": 0.04284958639753399
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5328947368421053,
"acc_stderr": 0.040601270352363966,
"acc_norm": 0.5328947368421053,
"acc_norm_stderr": 0.040601270352363966
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5811320754716981,
"acc_stderr": 0.03036505082911521,
"acc_norm": 0.5811320754716981,
"acc_norm_stderr": 0.03036505082911521
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5763888888888888,
"acc_stderr": 0.04132125019723369,
"acc_norm": 0.5763888888888888,
"acc_norm_stderr": 0.04132125019723369
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952344,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952344
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5260115606936416,
"acc_stderr": 0.038073017265045125,
"acc_norm": 0.5260115606936416,
"acc_norm_stderr": 0.038073017265045125
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.0433643270799318,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.0433643270799318
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526094,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526094
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4595744680851064,
"acc_stderr": 0.03257901482099835,
"acc_norm": 0.4595744680851064,
"acc_norm_stderr": 0.03257901482099835
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.04142439719489361,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.04142439719489361
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4689655172413793,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.4689655172413793,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.023919984164047732,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.023919984164047732
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.042857142857142816,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.042857142857142816
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6870967741935484,
"acc_stderr": 0.02637756702864586,
"acc_norm": 0.6870967741935484,
"acc_norm_stderr": 0.02637756702864586
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.39901477832512317,
"acc_stderr": 0.03445487686264715,
"acc_norm": 0.39901477832512317,
"acc_norm_stderr": 0.03445487686264715
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6606060606060606,
"acc_stderr": 0.03697442205031595,
"acc_norm": 0.6606060606060606,
"acc_norm_stderr": 0.03697442205031595
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7070707070707071,
"acc_stderr": 0.032424979581788166,
"acc_norm": 0.7070707070707071,
"acc_norm_stderr": 0.032424979581788166
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5256410256410257,
"acc_stderr": 0.025317649726448656,
"acc_norm": 0.5256410256410257,
"acc_norm_stderr": 0.025317649726448656
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.02857834836547307,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.02857834836547307
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6134453781512605,
"acc_stderr": 0.03163145807552379,
"acc_norm": 0.6134453781512605,
"acc_norm_stderr": 0.03163145807552379
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.03822746937658753,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.03822746937658753
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7211009174311926,
"acc_stderr": 0.01922746887646351,
"acc_norm": 0.7211009174311926,
"acc_norm_stderr": 0.01922746887646351
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7745098039215687,
"acc_stderr": 0.029331162294251735,
"acc_norm": 0.7745098039215687,
"acc_norm_stderr": 0.029331162294251735
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7426160337552743,
"acc_stderr": 0.028458820991460295,
"acc_norm": 0.7426160337552743,
"acc_norm_stderr": 0.028458820991460295
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6547085201793722,
"acc_stderr": 0.03191100192835794,
"acc_norm": 0.6547085201793722,
"acc_norm_stderr": 0.03191100192835794
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6259541984732825,
"acc_stderr": 0.042438692422305246,
"acc_norm": 0.6259541984732825,
"acc_norm_stderr": 0.042438692422305246
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7107438016528925,
"acc_stderr": 0.04139112727635463,
"acc_norm": 0.7107438016528925,
"acc_norm_stderr": 0.04139112727635463
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.04453197507374983,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.04453197507374983
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6380368098159509,
"acc_stderr": 0.037757007291414416,
"acc_norm": 0.6380368098159509,
"acc_norm_stderr": 0.037757007291414416
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.2767857142857143,
"acc_stderr": 0.04246624336697625,
"acc_norm": 0.2767857142857143,
"acc_norm_stderr": 0.04246624336697625
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7991452991452992,
"acc_stderr": 0.026246772946890467,
"acc_norm": 0.7991452991452992,
"acc_norm_stderr": 0.026246772946890467
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7496807151979565,
"acc_stderr": 0.015491088951494569,
"acc_norm": 0.7496807151979565,
"acc_norm_stderr": 0.015491088951494569
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.025906632631016127,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.025906632631016127
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38100558659217876,
"acc_stderr": 0.016242028834053613,
"acc_norm": 0.38100558659217876,
"acc_norm_stderr": 0.016242028834053613
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6405228758169934,
"acc_stderr": 0.027475969910660952,
"acc_norm": 0.6405228758169934,
"acc_norm_stderr": 0.027475969910660952
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6366559485530546,
"acc_stderr": 0.027316847674192714,
"acc_norm": 0.6366559485530546,
"acc_norm_stderr": 0.027316847674192714
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.026725868809100793,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.026725868809100793
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.425531914893617,
"acc_stderr": 0.02949482760014438,
"acc_norm": 0.425531914893617,
"acc_norm_stderr": 0.02949482760014438
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.44589308996088656,
"acc_stderr": 0.012695244711379776,
"acc_norm": 0.44589308996088656,
"acc_norm_stderr": 0.012695244711379776
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.03016191193076711,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.03016191193076711
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5473856209150327,
"acc_stderr": 0.020136790918492523,
"acc_norm": 0.5473856209150327,
"acc_norm_stderr": 0.020136790918492523
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6285714285714286,
"acc_stderr": 0.03093285879278985,
"acc_norm": 0.6285714285714286,
"acc_norm_stderr": 0.03093285879278985
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.736318407960199,
"acc_stderr": 0.031157150869355558,
"acc_norm": 0.736318407960199,
"acc_norm_stderr": 0.031157150869355558
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.035887028128263686,
"acc_norm": 0.85,
"acc_norm_stderr": 0.035887028128263686
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.783625730994152,
"acc_stderr": 0.03158149539338734,
"acc_norm": 0.783625730994152,
"acc_norm_stderr": 0.03158149539338734
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3243574051407589,
"mc1_stderr": 0.016387976779647935,
"mc2": 0.44868368066946906,
"mc2_stderr": 0.015140951474620613
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Weni/Zeroshot_Test-Gold-1K_other | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: texto
dtype: string
- name: true_class
dtype: string
- name: BERT
dtype: string
splits:
- name: test
num_bytes: 186270
num_examples: 1000
download_size: 55029
dataset_size: 186270
---
# Dataset Card for "Zeroshot_Gold_Test-1K_other"
This dataset is a test dataset for the Zeroshot models.
It contains 1,000 examples in a prompt format, exclusively for testing the 'other' class, in Brazilian Portuguese.
Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'other' \\n\\nTweet: frase \\n\\nLabel:
```
## How to load and use this dataset:
```
from datasets import load_dataset
dataset = load_dataset("Weni/Zeroshot_Test-Gold-1K_other")
dataset
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
temasarkisov/SolidLogosID_converted_processed_V2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1031221.0
num_examples: 48
download_size: 1031152
dataset_size: 1031221.0
---
# Dataset Card for "SolidLogosID_converted_processed_V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
christopherthompson81/quant_exploration | ---
license: gpl-2.0
task_categories:
- text-generation
language:
- en
pretty_name: Examining LLM Quantization Impact
size_categories:
- n<1K
---
# Examining LLM Quantization Impact
This document is a comparative analysis of qualitative performance degradation across Llama.cpp quantizations of a single 2x7B model. My hope is that it will help people unfamiliar with quant impacts get a sense of how quantization will affect output.
## Headings
1. [Quants](#quants)
2. [Test Set-Up](#test-set-up)
3. [Interpretation](#interpretation)
---
## Quants
The two metrics associated with LLM quantization that a model user will be concerned with are "perplexity" and "compression". A large language model with a low perplexity is more likely to predict the next token (word) in a stream of text correctly. Quantizing a model results in an increase in perplexity and a reduction in size. Any increase in perplexity compounds, degrading writing quality with each token generated, so it is highly preferable to minimize the change in perplexity (Δppl). That being said, smaller models run faster and with fewer resources.
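For reference, perplexity over a held-out token stream $x_1,\dots,x_N$ is the exponentiated average negative log-likelihood of the model's next-token predictions, which is why even a small Δppl reflects the model systematically assigning less probability to the correct token:

$$
\mathrm{ppl} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid x_{<i})\right)
$$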
I have ordered the quants by ascending Δppl. Δppl is copied from llama.cpp's quantize tool and shows the change in perplexity measured on LLaMA-v1-7B. Compression shows the percentage reduction in size compared to f16. Calculating perplexity directly on the quants I produced is too costly for me because I am GPU poor (it would take about 5 days per quant on my computer).
| Quant | Δppl | Compression |
| --- | --- | --- |
| [F16/F32](#f16) | N/A | N/A |
| [Q8_0](#q8_0) | +0.0004 | 46.87% |
| [Q6_K](#q6_k) | +0.0008 | 58.98% |
| [Q5_K_M](#q5_k_m) | +0.0122 | 64.55% |
| [Q5_1](#q5_1) | +0.0349 | 62.46% |
| [Q5_K_S](#q5_k_s) | +0.0400 | 65.55% |
| [Q4_K_M](#q4_k_m) | +0.0532 | 69.79% |
| [Q5_0](#q5_0) | +0.0683 | 65.44% |
| [Q4_K_S](#q4_k_s) | +0.0992 | 71.50% |
| [Q4_1](#q4_1) | +0.1585 | 68.64% |
| [Q3_K_L](#q3_k_l) | +0.1764 | 73.88% |
| [Q4_0](#q4_0) | +0.2166 | 71.62% |
| [Q3_K_M](#q3_k_m) | +0.2496 | 75.91% |
| [Q3_K_S](#q3_k_s) | +0.5551 | 78.31% |
| [Q3_K_XS](#q3_k_xs) | 3-bit extra small quantization | 79.69% |
| [IQ3_XXS](#iq3_xxs) | 3.06 bpw quantization | 80.35% |
| [Q2_K](#q2_k) | +0.6717 | 81.52% |
| [Q2_K_S](#q2_k_s) | +9.0634 | 82.79% |
| [IQ2_XS](#iq2_xs) | 2.31 bpw quantization | 85.10% |
| [IQ2_XXS](#iq2_xxs) | 2.06 bpw quantization | 86.56% |
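The compression column can be reproduced from the f16 size and the per-quant byte counts listed in the sections below. A minimal Python sketch (the byte counts are the ones quoted in this document):

```python
# Byte counts quoted elsewhere in this document.
F16_BYTES = 25_759_553_440  # f16 reference size

QUANT_BYTES = {
    "Q8_0": 13_685_856_224,
    "Q6_K": 10_566_817_760,
    "Q2_K": 4_760_303_584,
}

def compression(quant_bytes: int, reference_bytes: int = F16_BYTES) -> float:
    """Percentage reduction in file size relative to the f16 model."""
    return (1.0 - quant_bytes / reference_bytes) * 100.0

for name, size in QUANT_BYTES.items():
    # Q8_0 -> 46.87%, Q6_K -> 58.98%, Q2_K -> 81.52%
    print(f"{name}: {compression(size):.2f}%")
```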
## Test Set-Up
### Model
https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B
### Quantization
1. Used llama.cpp:convert.py to convert the model to an f16 GGUF
2. Calculated an importance matrix using wikitext-2-raw/wiki.train.raw
3. Used llama.cpp:quantize to quantize the f16 model to the various quants with the calculated importance matrix.
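A rough sketch of that pipeline driven from Python is shown below. The binary names, paths, and flags are assumptions that vary between llama.cpp versions, so treat it as an outline of the three steps rather than the exact commands used here.

```python
import os
import subprocess

MODEL_DIR = os.path.expanduser("~/models/FusionNet_7Bx2_MoE_14B")  # hypothetical local checkout
F16_GGUF = "fusionnet_f16.gguf"
IMATRIX = "imatrix.dat"

# 1. Convert the HF checkpoint to an f16 GGUF.
subprocess.run(["python", "convert.py", MODEL_DIR,
                "--outtype", "f16", "--outfile", F16_GGUF], check=True)

# 2. Calculate an importance matrix over wikitext-2-raw.
subprocess.run(["./imatrix", "-m", F16_GGUF,
                "-f", "wikitext-2-raw/wiki.train.raw", "-o", IMATRIX], check=True)

# 3. Quantize the f16 model to each target quant using the importance matrix.
for quant in ["Q8_0", "Q6_K", "Q4_K_M", "IQ2_XS"]:
    subprocess.run(["./quantize", "--imatrix", IMATRIX,
                    F16_GGUF, f"fusionnet_{quant}.gguf", quant], check=True)
```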
### Input
I created a file: prompts/logic_precidence_inference.txt
```
# Instructions
The following is an interaction between an AI model and a user where the user is posing a question to test the ability of the AI to make assumumptions by making inferences from the context. These are potentially riddles and to answer correctly, the AI should infer the majority case when faced with ambiguity.
# Prompt
Assistant: How can I help you today?
User: Alex is Charlie's father. Which one of them was born later?
Assistant:
```
This input was run many times to attempt to see how the answers vary. Different classifications of answers were noted while functionally equivalent answers were not.
A secondary case with 10 runs includes an additional instruction: "Furthermore, the AI is to answer tersely, not explaining it's rationale. Single word answers are optimal." *Note: 10 runs may not be enough*
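The bare-word tallies below were classified by hand; the sketch that follows is one way the same bucketing could be automated, and its rules are an approximation of that manual classification rather than actual scoring code used here.

```python
from collections import Counter

def classify(answer: str) -> str:
    """Bucket one bare-word completion into the tally categories used below."""
    words = answer.strip().split()
    if not words:
        return "BAD_ANSWER"
    first = words[0].strip(".,'\"").lower()
    suffix = "+RATIONALE" if len(words) > 1 else ""  # the terse instruction was ignored
    if first == "charlie":
        return "CORRECT" + suffix
    if first == "alex":
        return "INCORRECT" + suffix
    return "BAD_ANSWER" + suffix

# Hypothetical completions; the real runs were classified manually.
runs = ["Charlie", "Charlie", "Alex", "Chartie"]
print(Counter(classify(r) for r in runs))
```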
### Command
```
./build/bin/main \
--n-gpu-layers -6 \
-m ~/models/FusionNet_7Bx2_MoE_14B_GGUF/TomGrc_FusionNet_7Bx2_MoE_14B_Q5_K_S.gguf \
-f prompts/logic_precidence_inference.txt \
-c 1024 \
-n 1024 \
--repeat_penalty 1.0
```
---
## f16
Non-Quanted Model
Size: 24G (25759553440)
### INCORRECT
> Considering the typical generational order between a father and his child, we can infer that Charlie was most likely born before Alex became a father. Therefore, Alex was probably born after Charlie. So, Alex was born later in comparison to the given context.
### CORRECT
> Considering the usual family structure and age dynamics between parents and their children, it is more likely that Charlie was born later than Alex. This assumption holds true unless there is any specific context given to suggest otherwise.
### Bare-word Results
```
[CORRECT] 9 - *,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 1 - * (Alex)
```
---
## Q8_0
Performs roughly as well as the non-quantized model
Size: 13G (13685856224)
### CORRECT
> Considering the traditional family structure where parents are older than their children, it is likely that Charlie was born later than Alex.
### RIGHT ANSWER - BAD LOGIC (partially)
> Considering the given information, it's logical to assume that Charlie was born after Alex since Alex would have been Charlie's age when Charlie was born. So in most cases, Alex would have been older when Charlie was born, implying Charlie was born later relatively in comparison. But it's important to note that this assumption does not hold for all circumstances (e.g., if Alex himself was born later in life), but it's the majority case based on typical generational gaps.
### Bare-word Results
```
[CORRECT] 9 - *,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 1 - * (Alex)
```
---
## Q6_K
Best performing, even ahead of the non-quanted model, oddly. No wrong answers or refusals were generated in long-form.
Size: 9.9G (10566817760)
### CORRECT
> Considering the usual family dynamics and age progression, it is most likely that Charlie was born later than his father Alex. This assumption is made due to limited context and the general understanding that parents are older than their children. However, there may be exceptions to this rule in certain circumstances.
### Bare-word Results
```
[CORRECT] 10 - *,*,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 0 - Nil (Alex)
```
---
## Q5_K_M
Gave refusal and incorrect long-form answers. Bare-word performance was perfect.
Size: 8.6G (9132259296)
### REFUSAL
> Considering the given information only states a parent-child relationship between Alex and Charlie, it cannot be definitively concluded who was born later. Typically, parents are older than their children but without a specific age mentioned, we cannot make that assumption. So, in this context, it is ambiguous.
### INCORRECT
> Considering the normal course of events where parents give birth before having children themselves, it is likely that Charlie was born earlier than Alex. Therefore, Alex was born later compared to Charlie.
### CORRECT
> Considering the common order of events in life, it's usually assumed that parents are older than their children. So in this case, it can be inferred that Charlie was born later than when Alex became a father, making Charlie the one born after a later point in time compared to Alex. Hence, Charlie was likely born later.
### Bare-word Results
```
[CORRECT] 10 - *,*,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 0 - Nil (Alex)
```
---
## Q5_1
Long form writing is slightly weakened. Bare-word results are normal.
Size: 9.1G (9670506464)
### CORRECT
> Considering the given information, it's logical to assume that when Alex is Charlie's father, Alex was born before Charlie. Fatherhood typically occurs after the child's birth. Therefore, in most cases, Charlie was born later compared to Alex.
> However, it's important to note that this assumption relies on typical life events and might not hold in every scenario.
### INCORRECT
> Considering the typical generational order, it's generally assumed that parents are older than their children. Therefore, in this case, Charlie was likely born before Alex became a father, implying Charlie was born earlier but Alex started his parental role later. So, Alex was born later relatively when compared to being Charlie's age at conception.
### Bare-word Results
```
[CORRECT] 9 - *,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 1 - * (Alex)
```
---
## Q5_K_S
The long-form writing has become unreliable and gives predominantly non-correct answers, but is fine when correct. Bare-word results are good.
Size: 8.3G (8873785312)
### CORRECT
> Considering the typical generational relationship between a father and child, it is likely that Alex was born earlier than Charlie. So, Charlie would have been born later relatively when compared to his father Alex.
### INCORRECT
> Considering the typical parent-child relationship where parents are older than their children, it is most likely that Charlie was born earlier than Alex became a father. Hence, Alex was born later relatively compared to Charlie.
### ANSWERING A PROXIMAL QUESTION
> Considering the given information where Alex is Charlie's father, it's generally assumed that a parent is older than their child. Thus, we can infer that Alex was probably born earlier than Charlie.
### Bare-word Results
```
[CORRECT] 9 - *,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 1 - * (Alex)
```
---
## Q4_K_M
Predominantly correct long-form answers. Perfect bare-word results. No Refusals.
Size: 7.3G (7782086624)
### CORRECT
> Considering the given information where Alex is Charlie's father, it is generally assumed that a father is older than his child. Therefore, it can be inferred that Alex was born earlier than Charlie. Hence, Charlie would have been born later compared to Alex.
### INCORRECT
> Considering the given information, it is generally assumed that parents are older than their children when they have them. Therefore, in most cases, Charlie would have been born before Alex if Alex is Charlie's father. Hence, Alex (as the father) was born later relatively compared to Charlie."
### ANSWERING_A_PROXIMAL_QUESTION
> Considering the typical order of events in a family structure, when Alex is Charlie's father, it is generally assumed that Alex was born earlier than Charlie."
### Bare-word Results
```
[CORRECT] 10 - *,*,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 0 - Nil (Alex)
```
---
## Q5_0
Both long-form and bare-word have lost a little reliability.
Size: 8.3G (8903145440)
### CORRECT
> Considering the typical generational order where parents are older than their children, it is more likely that Charlie was born later than Alex.
### INCORRECT
> Considering the typical order of events where a child is born after their parents, it is likely that Charlie was born before Alex became his father, implying Charlie was born later relatively compared to Alex. However, without specific birth dates, this answer is subject to change if unusual circumstances exist. But generally speaking, Charlie would have been born later than when Alex became his father.
### CONFUSED
> Considering the typical family structure where a parent is older than their child, it is most likely that Charlie was born earlier than Alex became a father, implying Charlie was born later relatively but Alex was born before. So in this scenario, Alex was born earlier but Charlie was born comparatively later in their lives.
### Bare-word Results
```
[CORRECT] 8 - *,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 2 - *,* (Alex)
```
---
## Q4_K_S
Good long-form behaviour. Perfect bare-word performance. Low variability observed.
Size: 6.9G (7341160416)
### CORRECT
> Considering the given context where Alex is Charlie's father, it is generally assumed that an older person becomes a parent to a younger one. Thus, we can infer that Charlie was born later than Alex.
### INCORRECT
> Considering the typical family dynamic where parents are older than their children, it is more likely that Charlie was born earlier than Alex. Hence, Alex was born later compared to Charlie in most scenarios.
### Bare-word Results
```
[CORRECT] 10 - *,*,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 0 - Nil (Alex)
```
---
## Q4_1
In long-form, the model frequently answered a proximal question (who was born earlier) rather than the exact question. No incorrect answers observed in long-form.
Size: 7.6G (8077064160)
### CORRECT
> Considering the typical generation gap between a parent and their child, it's more likely that Alex (Charlie's father) was born earlier than Charlie. Therefore, Charlie was born later in this scenario.
### REFUSAL
> Considering the given information only states the relationship between Alex and Charlie as father and son, it does not provide enough context to determine when they were born or who was born later. Generally speaking, parents are born before their children but without a specific timeline, we cannot assume the exact order of birth here.
### RIGHT_ANSWER_BAD_LOGIC
> Considering the given information, it's most likely that Charlie was born after Alex since Alex would have been Charlie's age when Charlie was born. However, direct birth order cannot be definitively concluded without more context.
### ANSWERING_A_PROXIMAL_QUESTION
> Considering the given information where Alex is Charlie's father, it would typically imply a generational relationship with Alex being older than Charlie. Therefore, Alex was most likely born before Charlie.
### Bare-word Results
```
[CORRECT] 9 - *,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 1 - * (Alex)
```
---
## Q3_K_L
No incorrect or refusal long-form answers. Bare-word performance was degraded somewhat.
Size: 6.3G (6729209824)
### CORRECT
> In general, when discussing parent-child relationships, it's assumed that the parent was born earlier than the child. Therefore, Charlie was likely born later than his father Alex.\n\nHowever, to provide a definitive answer, we would need more context or information about their birth dates. But given the typical parent-child relationship dynamic, it's safe to infer this scenario.
### ANSWERING_A_PROXIMAL_QUESTION
> In the majority case when considering age differences between a father and child, it's assumed that the parent (Alex in this scenario) is older than their offspring (Charlie). Therefore, Alex was most likely born earlier than Charlie.
### Bare-word Results
```
[CORRECT] 8 - *,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 2 - *,* (Alex)
```
---
## Q4_0
Long-form answers were evenly distributed among classifications, meaning it did not reliably give correct answers. Bare-word performance was perfect.
Size: 6.9G (7309703136)
### CORRECT
> Considering the typical order of events, when a person becomes a parent (as Alex is to Charlie), it usually implies that Alex was born earlier. Therefore, it's more likely that Charlie was born later in this context.
### INCORRECT
> Considering the normal life progression where parents are older than their children, it can be inferred that Charlie was born earlier, thus making Alex, his father, born later. So, Alex was born later compared to Charlie.
### ANSWERING_A_PROXIMAL_QUESTION
> Considering the general progression of life, parents are typically older than their children when they give birth. Hence, it is more likely that Alex was born earlier than Charlie in this scenario.
### Bare-word Results
```
[CORRECT] 10 - *,*,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 0 - Nil (Alex)
```
---
## Q3_K_M
Good bare-word performance, even if it wasn't all correct answers.
Size: 5.8G (6205970400)
### CORRECT
> In most cases, when a parent gives birth to a child, it's assumed that the parent was born earlier than their child. So, in this scenario where Alex is Charlie's father, it's likely that Alex was born before Charlie. Hence, Charlie was born later compared to Alex.
### ANSWERING A PROXIMAL QUESTION
> In most cases, when a person becomes a parent, it means they have already lived through their own childhood and adolescence before having a child. Therefore, it can be inferred that Alex (Charlie's father) was born earlier than Charlie.
### Bare-word Results
```
[CORRECT] 7 - *,*,*,*,*,*,* (Charlie)
[INCORRECT] 3 - *,*,* (Alex)
```
---
## Q3_K_S
Mostly correct output in long form. Mostly incorrect output in bare-word.
Size: 5.3G (5587310560)
### CORRECT
> In most cases, when a person is someone's father, it implies that the father was born before the child. Therefore, Charlie would have been born later than Alex in the majority of situations.
> However, it's important to note that exceptions can exist (for example, in cases of assisted reproduction, where the age difference is compressed, or if they are related but not biologically father and son). In general discourse and without further context, we would assume the majority case.
> So, in the assumed majority case, Charlie was born later than Alex.
### ANSWERING A PROXIMAL QUESTION
> In this scenario, it is generally assumed that when someone is referred to as a father, they are the older parent. Therefore, it is likely that Alex, being Charlie's father, was born earlier than Charlie.
> However, without further context or information, it is impossible to know for certain if there are any exceptional circumstances such as age reversal treatments or other factors that may have changed their relative ages. So, in the majority case, Alex was born earlier than Charlie.
### Bare-word Results
```
[CORRECT] 2 - *,* (Charlie)
[INCORRECT] 7 - *,*,*,*,*,*,* (Alex)
[INCORRECT+RATIONALE] 1 - *
```
---
## Q3_K_XS
No wrong answers or refusals were generated in long-form. One case where the AI attempted to answer both ways for a single individual. Bad bare-word performance.
Size: 4.9G (5230532576)
### CORRECT
> Considering the typical generational order between a parent and child, it is more likely that Alex, being Charlie's father, was born earlier than Charlie. Thus, Charlie was born later in comparison to Alex.
### SLIGHTLY HEDGED
> It is more likely that Alex was born earlier since he is Charlie's father. Generally speaking, a parent is older than their child. Therefore, we can assume that Alex was born later as a child but earlier as a parent when compared to Charlie.
### Bare-word Results
```
[CORRECT] 2 - *,* (Charlie)
[INCORRECT] 5 - *,*,*,*,* (Alex)
[INCORRECT+RATIONALE] 3 - *,*,*
```
---
## IQ3_XXS
No incorrect or refusal long-form answers. Good Bare-word performance.
Size: 4.8G (5062604768)
### CORRECT
> In this scenario, since Alex is Charlie's father, it's more likely that Alex was born earlier. Therefore, Charlie would have been born later than his father Alex.
### ANSWERING A PROXIMAL QUESTION
> Considering the given information only states the relationship between Alex and Charlie as father and son respectively, it is assumed that Alex is older since he is the parent of Charlie. Typically, parents are older than their children. However, this assumption depends on other factors, like if Charlie had Alex late in life or if Alex had a late start to fatherhood due to infertility treatments, that might change the scenario. But in a general case, Alex would have been born earlier than Charlie.
### Bare-word Results
```
[CORRECT] 9 - *,*,*,*,*,*,*,*,* (Charlie)
[INCORRECT] 1 - * (Alex)
```
---
## Q2_K
No wrong answers or refusals were generated in long-form. Bad bare-word performance.
Size: 4.5G (4760303584)
### CORRECT
> Considering that parenthood generally happens before a child's birth, it's more likely that Charlie was born later than Alex.
### Bare-word Results
```
[CORRECT] 3 - *,*,* (Charlie)
[INCORRECT] 2 - *,* (Alex)
[INCORRECT+RATIONALE] 1 - *
[BAD_ANSWER] 4 - *,*,*,* (CHANTLEE,Chartie,Chainler)
```
---
## Q2_K_S
Answers were typically mixed-language (French, code, no spaces) or refusals. It never gave a correct response.
Size: 4.2G (4434196448)
### REFUSAL (AND FRENCH)
> même si Alex est le père de Charlie, cela ne nous donne pas d'information sur quand ils sont nés pour déterminer qui est né plus tard. On devrait savoir leur âge ou la période de leur naissance pour résoudre ce riddles.
> # Translation
> Even though Alex is Charlie's father, this does not provide us with information about when they were born to determine who was born later. In order to solve this riddle, we need to know their age or the period of their birth.
### INCORRECT
> opponentially but considering the given information but assuming Charlie is an adult with age comparable to or older than Alex's age range as parents, we can infer that Charlie was born earlier than Alex since Alex is Charlie's father.
### Bare-word Results
```
[CORRECT] 0 - Nil (Charlie)
[INCORRECT] 0 - Nil (Alex)
[INCORRECT+RATIONALE] 2 - *,*
[BAD_ANSWER+RATIONALE] 8 - *,*,*,*,*,*,*,*
```
---
## IQ2_XS
This is much better than the incoherent results of IQ2_XXS.
Size: 3.6G (3837450208)
### INCORRECT
> In this case, it is more likely assumed that Charlie was born earlier than Alex since Alex is his father, implying an age relationship where a father is usually older than his child.
### CORRECT
> In this scenario, it is generally assumed that since Alex is Charlie's father, Charlie would be Alex's younger counterpart as father-child relationships inherently involve an older parent and a younger offspring.
### Bare-word Results
```
[CORRECT] 3 - *,*,* (Charlie)
[INCORRECT] 0 - Nil (Alex)
[CORRECT+RATIONALE] 5 - *,*,*,*,*
[INCORRECT+RATIONALE] 1 - *
[BAD_ANSWER+RATIONALE] 1 - *
```
---
## IQ2_XXS
Not Recommended. There were many possible answer classifications, and it was not predictable which the quant would output.
Size: 3.3G (3462060000)
### INCORRECT
> To infer the answer correctly, we need more information about their ages, as having only the relationship between them doesn't provide enough context. However, if it is the majority case that older people tend to have children younger than themselves, we can assume that Charlie was born earlier and Alex was born later. But without more contextual details, this remains uncertain.
> Glastring ambiguity without clear ages.
### CORRECT
> To solve this riddle, we need more specific information about the age gap or birth dates between Alex and Charlie. However, by default assuming equality between their age differences as average across society today, it's more common that Charlie, being younger than Alex who is his father, was born later.
### INCOHERENT
> To assume age equality between Alex and Charlie initially remploys ambiguity but considering age limitations and generational shifts Alex (born older) would presumably be older still whereas Charlie being his son would have been born later relatively recently compared to Alex's age range.
### RIGHT ANSWER - BAD LOGIC
> To solve this riddle efficiently, we should consider the age difference and the most common scenario of generations in families. Typically, younger generations tend to have children rather than the opposite. Therefore, it's more likely that Charlie was born later than Alex. However, without specific birth dates or ages available, we cannot definitively confirm this assumption.
> Answer Late: Charlie
### ANSWERING A PROXIMAL QUESTION
> To assume age equality between Alex and Charlie when they first became father and son would make Alex older when they reunite for this query. ⠀
> conde: ❶ However capabilities of AI limiting precise knowledge of generation dates make assumptions necessary when extracting age relations between Alex and Charlie from context. ⠀
> condeconde: ₂ Peroften when a father acknowleges their child as mentioned here with "Alex is Charlie's father", it implies Alex is older as typically younger Charlie would not have gotten pregnantouver nor given birth yet. ⠀
> condecondefinal: ₂ENDORSED_STRONG_**Most likely**, Alex (the father) was born earlier than Charlie.**/code>
> In summary: Most likely Alex (the father) was born earlier than Charlie.
### NONSENSE
> TDMerstHOUTSmixedup=%removedmix%)typen=% havetypes=% havetypes=% havetypes=% havetypes=% havetypes=% notenostmix%)typen=% notenostmix%)typen=% notenestonnievey=%typenoestonniestr=%typen=% bekanfact=% bekanfact=% bekanfact=ThinkingaboutCharliesagebrained-%LMVRevertleader=%thinkwas=% thinkwas=% thinkwas=% thinking=% thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%thinkwas>%finalanswer=% thinkingaboutCharliesagebeingolderwhenhewasbornafterCharliesagehadapositive=% finalanswermostlikelyCharliesagehadapositiveyearswhenhehimself=% finalanswermostlikelyAlexwasbornafterCharliesagehadapositive=% finalanswermostlikelyAlexgotolderwhenhewasbornafterthatyear:%RM
...and so on
### Bare-word Results
```
[CORRECT] 0 - Nil (Charlie)
[INCORRECT] 0 - Nil (Alex)
[NONSENSE] 10 - *,*,*,*,*,*,*,*,*,* (random tokens)
```
## Interpretation
* Q6_K is an excellent quant choice to make.
* IQ3_XXS is the smallest quant that has very low degradation.
* Down to Q3_K_M, the model does not become particularly broken.
* At Q3_K_S and below, the quants make the model unreliable.
* K-means quants are usually superior to non-K-means quants
|
chrisgru/commonsense-dialogues | ---
dataset_info:
features:
- name: system
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 13583083
num_examples: 20176
- name: test
num_bytes: 1415597
num_examples: 2158
- name: validation
num_bytes: 1443693
num_examples: 2157
download_size: 6854218
dataset_size: 16442373
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for "commonsense-dialogues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |