datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
lckr/OASST-DE-sharegpt | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversation
list:
- name: role
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 8016292
num_examples: 3721
download_size: 4326435
dataset_size: 8016292
---
|
jmichaelov/inverse_scaling_prize-memo_trap | ---
license: cc-by-4.0
---
|
grizzlybearbee/T | ---
license: apache-2.0
---
|
andersonbcdefg/synthetic_nli_part2_with_margins | ---
dataset_info:
features:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
- name: source
dtype: string
- name: qp_sim
dtype: float32
- name: qn_sim
dtype: float32
- name: pn_sim
dtype: float32
- name: margin
dtype: float64
splits:
- name: train
num_bytes: 75465048.3371383
num_examples: 82938
download_size: 16959493
dataset_size: 75465048.3371383
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_cola_bare_perfect | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 6919
num_examples: 81
- name: test
num_bytes: 7070
num_examples: 87
- name: train
num_bytes: 58846
num_examples: 759
download_size: 39384
dataset_size: 72835
---
# Dataset Card for "MULTI_VALUE_cola_bare_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tamzid9315/aaa | ---
license: c-uda
---
|
lkh9908/ywcFilteredCombinedHub2 | ---
dataset_info:
features:
- name: id
dtype: string
- name: abstract
dtype: string
- name: highlight
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 35897229
num_examples: 21085
download_size: 19325472
dataset_size: 35897229
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jiandong/crimson-attck-vectors | ---
dataset_info:
features:
- name: id
dtype: string
- name: attck_id
dtype: string
- name: attck_name
dtype: string
- name: description
dtype: string
- name: kill_chain_phases
sequence: string
- name: domains
sequence: string
- name: tactic_type
sequence: string
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 3897164
num_examples: 820
download_size: 4234040
dataset_size: 3897164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
InstaDeepAI/genomics-long-range-benchmark | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- biology
- genomics
pretty_name: Genomics Long Range Benchmark
viewer: false
---
## Summary
The genomics long-range benchmark (LRB) compiles a set of biologically relevant genomic tasks requiring long-range dependencies, serving as a robust evaluation tool for genomic language models.
While serving as a strong basis for evaluation, the benchmark must also be efficient and user-friendly.
To achieve this, we strike a balance between task complexity and computational cost through strategic decisions, such as down-sampling or combining datasets.
## Dataset Tasks
The Genomics LRB is a collection of tasks that can be loaded by passing the corresponding `task_name` into the `load_dataset` function. All of the following datasets
allow the user to specify an arbitrarily long sequence length, giving more context to the task, by passing the `sequence_length` kwarg to `load_dataset`. Additional task-specific kwargs, if applicable,
are mentioned in the sections below.<br>
*Note that as you increase the context length to very large values, you may start to reduce the size of the dataset, since a large context size can
cause indexing outside the boundaries of chromosomes.
| Task | `task_name` | Sample Output | # Train Seqs | # Test Seqs |
| --------- | ---------- | ------ | ------------ | ----------- |
| CAGE Prediction | `cage_prediction`| {sequence, labels, chromosome} | 36086 | 1922 |
| Bulk RNA Expression | `bulk_rna_expression` | {sequence, labels, chromosome} | 22827 | 990 |
| Variant Effect Gene Expression | `variant_effect_gene_expression` | {ref sequence, alt sequence, label, tissue, chromosome, distance to nearest TSS} | 89060 | 8862 |
## Usage Example
```python
from datasets import load_dataset
# Use this parameter to download sequences of arbitrary length (see docs below for edge cases)
sequence_length=2048
# One of ["cage_prediction", "bulk_rna_expression", "variant_effect_gene_expression"]
task_name = "variant_effect_gene_expression"
dataset = load_dataset(
"InstaDeepAI/genomics-long-range-benchmark",
task_name=task_name,
sequence_length=sequence_length,
)
```
### 1. CAGE Prediction
Cap Analysis of Gene Expression (CAGE) is a biological assay that measures the level of mRNA production rather than steady-state values, which reflect both production and
degradation. Being able to accurately predict mRNA levels as measured by CAGE is essential for deciphering tissue-specific expression patterns and transcriptional networks, and for
identifying differentially expressed genes with functional significance.
#### Source
Original CAGE data comes from FANTOM5. We used processed labeled data obtained from the [Basenji paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5932613/), which was also used to train Enformer and is located [here](https://console.cloud.google.com/storage/browser/basenji_barnyard/data/human?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false).
Sequence data originates from the GRCh38 genome assembly.
#### Data Processing
The original dataset from the Basenji paper includes labels for 638 total CAGE tracks over 896 bins (each bin corresponding to 128 base pairs),
totaling ~70 GB. In the interest of dataset size and user-friendliness, only a subset of the labels is selected.
From the 638 CAGE tracks, 50 are selected with the following criteria:
1. Only select one cell line
2. Only keep mock-treated samples and remove other treatments
3. Only select one donor
The [896 bins, 50 tracks] labels total ~7 GB. A description of the 50 included CAGE tracks can be found in `cage_prediction/label_mapping.csv`.
#### Task Structure
Type: Multi-variable regression<br>
Because this task involves predicting expression levels for 128 bp bins and there are 896 total bins in the dataset, there are in essence labels for 896 * 128 = 114,688 base pair sequences. If
you request a sequence length smaller than 114,688 bp, then the labels will be subsetted accordingly.
Task Args:<br>
`sequence_length`: an integer, the desired final sequence length; *must be a multiple of 128 given the binned nature of the labels<br>
Input: a genomic nucleotide sequence centered around the labeled region of the gene transcription start site<br>
Output: a variable length vector depending on the requested sequence length [requested_sequence_length / 128, 50]
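The relationship between the requested sequence length and the label shape can be sketched as follows (a small illustrative helper; the function name and defaults are not part of the loader's API):

```python
def cage_label_shape(sequence_length, num_tracks=50, bin_size=128):
    """Expected CAGE label shape for a requested sequence_length.

    sequence_length must be a multiple of the 128 bp bin size, per the
    task description above; labels are [num_bins, num_tracks].
    """
    if sequence_length % bin_size != 0:
        raise ValueError("sequence_length must be a multiple of 128")
    return (sequence_length // bin_size, num_tracks)
```

For example, requesting `sequence_length=2048` would yield labels of shape `[16, 50]`.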
#### Splits
Train/test splits were maintained from Basenji and Enformer, where random sampling was used to generate the splits. Note that for this dataset a validation set is also returned. In practice, we merged the validation
set with the train set and used cross-validation to select new train and validation sets from this combined set.
#### Metrics
Mean Pearson correlation across tracks - compute Pearson correlation for a track using all positions for all genes in the test set, then mean over all tracks <br>
Mean Pearson correlation across genes - compute Pearson correlation for a gene using all positions and all tracks, then mean over all genes in the test set <br>
R<sup>2</sup>
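The "across tracks" aggregation can be sketched as follows (a minimal example, assuming predictions and targets are pooled into `[num_positions, num_tracks]` arrays; the shapes and function name are illustrative, not the benchmark's own evaluation code):

```python
import numpy as np

def mean_pearson_across_tracks(preds, targets):
    """Mean Pearson correlation across tracks.

    preds, targets: [num_positions, num_tracks] arrays, with positions
    pooled over all genes in the test set. Each track is correlated
    independently, then the correlations are averaged.
    """
    corrs = [np.corrcoef(preds[:, t], targets[:, t])[0, 1]
             for t in range(preds.shape[1])]
    return float(np.mean(corrs))
```

The "across genes" variant swaps the roles of the axes: correlate per gene over all positions and tracks, then average over genes.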
---
### 2. Bulk RNA Expression
In comparison to CAGE, bulk RNA sequencing assays measure the steady-state level of mRNA (reflecting both transcription and degradation) in a population of cells.
#### Source
Original data comes from GTEx. We use processed data files from the [ExPecto paper](https://www.nature.com/articles/s41588-018-0160-6) found
[here](https://github.com/FunctionLab/ExPecto/tree/master/resources). Sequence data originates from the GRCh37/hg19 genome assembly.
#### Data Processing
The continuous labels were log(1+x) transformed and standardized. A list of names of tissues corresponding to the labels can be found here: `bulk_rna_expression/label_mapping.csv`.
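The label preprocessing described above can be sketched as follows (a minimal example assuming a `[num_genes, num_tissues]` expression matrix; names are illustrative):

```python
import numpy as np

def transform_labels(raw_expression):
    """log(1+x) transform followed by per-tissue standardization.

    raw_expression: [num_genes, num_tissues] non-negative matrix.
    Each tissue column is centered to zero mean and scaled to unit
    standard deviation after the log transform.
    """
    logged = np.log1p(raw_expression)
    return (logged - logged.mean(axis=0)) / logged.std(axis=0)
```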
#### Task Structure
Type: Multi-variable regression<br>
Task Args:<br>
`sequence_length`: an integer, the desired final sequence length<br>
Input: a genomic nucleotide sequence centered around the CAGE representative transcription start site<br>
Output: a 218 length vector of continuous values corresponding to the bulk RNA expression levels in 218 different tissue types
#### Splits
Train: chromosomes 1-7,9-22,X,Y<br>
Test: chromosome 8
#### Metrics
Mean Spearman correlation across tissues <br>
Mean Spearman correlation across genes <br>
R<sup>2</sup>
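The tissue-wise Spearman aggregation can be sketched like so (a minimal example, assuming `[num_genes, num_tissues]` arrays; the shapes and function name are illustrative, not the benchmark's own evaluation code):

```python
import numpy as np
from scipy.stats import spearmanr

def mean_spearman_across_tissues(preds, targets):
    """Mean Spearman correlation across tissues.

    preds, targets: [num_genes, num_tissues] arrays. Each tissue column
    is rank-correlated independently, then the correlations are averaged.
    """
    corrs = []
    for t in range(preds.shape[1]):
        rho, _ = spearmanr(preds[:, t], targets[:, t])
        corrs.append(rho)
    return float(np.mean(corrs))
```

Because Spearman correlation is rank-based, it is invariant to any monotone transformation of the predictions.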
---
### 3. Variant Effect Gene Expression
In genomics, a key objective is to predict how genetic variants affect gene expression in specific cell types.
#### Source
Original data comes from GTEx. However, we used processed data files from the [Enformer paper](https://www.nature.com/articles/s41592-021-01252-x) located [here](https://console.cloud.google.com/storage/browser/dm-enformer/data/gtex_fine/vcf?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false).
Sequence data originates from the GRCh38 genome assembly.
#### Data Processing
In Enformer, the datasets were partitioned into 48 different sets based on tissue type. In our framing of the task, we combine all samples across all tissues into one set
and provide the tissue type along with each sample.
As the data files were taken from Enformer, the labels were constructed according to their methodology: variants were labeled as 1 if their posterior inclusion probability was greater than 0.9, as
assigned by the population-based fine-mapping tool SuSiE, while a matched set of negative variants was built from variants with posterior inclusion probabilities of less than 0.01.
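The labeling rule above can be sketched as a simple threshold function (illustrative only; variants with intermediate probabilities are excluded from both sets):

```python
def label_variant(pip):
    """Binary label from a SuSiE posterior inclusion probability (PIP),
    per the thresholds described above.

    Returns 1 (positive) if PIP > 0.9, 0 (negative) if PIP < 0.01,
    and None for variants in neither set (excluded from the dataset).
    """
    if pip > 0.9:
        return 1
    if pip < 0.01:
        return 0
    return None
```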
#### Task Structure
Type: Binary classification<br>
Task Args:<br>
`sequence_length`: an integer, the desired final sequence length<br>
Input: a genomic nucleotide sequence centered on the SNP with the reference allele at the SNP location, a genomic nucleotide sequence centered on the SNP with the alternative allele at the SNP location, and tissue type<br>
Output: a binary value referring to whether the variant has an effect on gene expression
#### Splits
Train: chromosomes 1-8, 11-22, X, Y<br>
Test: chromosomes 9,10
#### Metrics
Accuracy<br>
AUROC<br>
AUPRC |
DBQ/Farfetch.Product.prices.United.Kingdom | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: United Kingdom - Farfetch - Product-level price list
tags:
- webscraping
- ecommerce
- Farfetch
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: int64
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 229625994
num_examples: 613571
download_size: 80532862
dataset_size: 229625994
---
# Farfetch web scraped data
## About the website
Farfetch operates in the dynamic and rapidly evolving **E-commerce industry** in the **EMEA**, particularly in the **United Kingdom**. This sector is marked by intense digital transformation with a growing shift towards online shopping. Notably, the fashion and lifestyle segment of e-commerce is witnessing massive growth. The **UK E-commerce sector** is marked by high internet penetration rates, favourable consumer attitudes, and advances in technology. This has resulted in a significant increase in online transactions, specifically within the fashion industry. The dataset under review contains **Ecommerce product-list page (PLP) data** on **Farfetch** in the United Kingdom, indicating a comprehensive overview of the company’s digital profile in the UK market.
## Link to **dataset**
[United Kingdom - Farfetch - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Farfetch%20Product-prices%20United%20Kingdom/r/rec4fnXBKT4UpoaXk)
|
Nexdata/10_Hours_Chaozhou_Dialect_Speech_Synthesis_Corpus_Female | ---
license: cc-by-nc-nd-4.0
---
## Description
10 Hours - Chaozhou Dialect Speech Synthesis Corpus - Female. It is recorded in Chaozhou-Shantou pronunciation, with balanced phoneme and tone coverage. A professional phonetician participated in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1410?source=Huggingface
## Format
48,000Hz, 24bit, uncompressed wav, mono channel;
## Recording environment
professional recording studio;
## Recording content
general corpus;
## Speaker
professional character voice, 20-30 years old, Shantou dialect of the Chaoshan region;
## Device
microphone;
## Language
chaozhou;
## Annotation
word and phoneme transcription, prosodic boundary annotation;
## Application scenarios
speech synthesis.
# Licensing Information
Commercial License
|
open-llm-leaderboard/details_fangloveskari__Platypus_QLoRA_LLaMA_70b | ---
pretty_name: Evaluation run of fangloveskari/Platypus_QLoRA_LLaMA_70b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [fangloveskari/Platypus_QLoRA_LLaMA_70b](https://huggingface.co/fangloveskari/Platypus_QLoRA_LLaMA_70b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fangloveskari__Platypus_QLoRA_LLaMA_70b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T21:04:30.246280](https://huggingface.co/datasets/open-llm-leaderboard/details_fangloveskari__Platypus_QLoRA_LLaMA_70b/blob/main/results_2023-09-17T21-04-30.246280.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3960780201342282,\n\
\ \"em_stderr\": 0.005008647185447735,\n \"f1\": 0.5245239093959767,\n\
\ \"f1_stderr\": 0.00450887492882971,\n \"acc\": 0.5682691139696489,\n\
\ \"acc_stderr\": 0.011651409152443089\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.3960780201342282,\n \"em_stderr\": 0.005008647185447735,\n\
\ \"f1\": 0.5245239093959767,\n \"f1_stderr\": 0.00450887492882971\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3078089461713419,\n \
\ \"acc_stderr\": 0.012714401009923652\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8287292817679558,\n \"acc_stderr\": 0.010588417294962526\n\
\ }\n}\n```"
repo_url: https://huggingface.co/fangloveskari/Platypus_QLoRA_LLaMA_70b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|arc:challenge|25_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T21_04_30.246280
path:
- '**/details_harness|drop|3_2023-09-17T21-04-30.246280.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T21-04-30.246280.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T21_04_30.246280
path:
- '**/details_harness|gsm8k|5_2023-09-17T21-04-30.246280.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T21-04-30.246280.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hellaswag|10_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T08:45:40.863548.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-29T08:45:40.863548.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-29T08:45:40.863548.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T21_04_30.246280
path:
- '**/details_harness|winogrande|5_2023-09-17T21-04-30.246280.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T21-04-30.246280.parquet'
- config_name: results
data_files:
- split: 2023_08_29T08_45_40.863548
path:
- results_2023-08-29T08:45:40.863548.parquet
- split: 2023_09_17T21_04_30.246280
path:
- results_2023-09-17T21-04-30.246280.parquet
- split: latest
path:
- results_2023-09-17T21-04-30.246280.parquet
---
# Dataset Card for Evaluation run of fangloveskari/Platypus_QLoRA_LLaMA_70b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/fangloveskari/Platypus_QLoRA_LLaMA_70b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [fangloveskari/Platypus_QLoRA_LLaMA_70b](https://huggingface.co/fangloveskari/Platypus_QLoRA_LLaMA_70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fangloveskari__Platypus_QLoRA_LLaMA_70b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T21:04:30.246280](https://huggingface.co/datasets/open-llm-leaderboard/details_fangloveskari__Platypus_QLoRA_LLaMA_70b/blob/main/results_2023-09-17T21-04-30.246280.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3960780201342282,
"em_stderr": 0.005008647185447735,
"f1": 0.5245239093959767,
"f1_stderr": 0.00450887492882971,
"acc": 0.5682691139696489,
"acc_stderr": 0.011651409152443089
},
"harness|drop|3": {
"em": 0.3960780201342282,
"em_stderr": 0.005008647185447735,
"f1": 0.5245239093959767,
"f1_stderr": 0.00450887492882971
},
"harness|gsm8k|5": {
"acc": 0.3078089461713419,
"acc_stderr": 0.012714401009923652
},
"harness|winogrande|5": {
"acc": 0.8287292817679558,
"acc_stderr": 0.010588417294962526
}
}
```
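The block above is plain Python dict syntax, so it can be copied into a script for quick inspection. As a minimal, self-contained sketch (numbers taken verbatim from the results above), the per-task accuracies can be pulled out, and one can check that the "all" accuracy is simply their mean:

```python
# Aggregated metrics copied verbatim from the latest results shown above.
latest_results = {
    "all": {"acc": 0.5682691139696489},
    "harness|gsm8k|5": {"acc": 0.3078089461713419},
    "harness|winogrande|5": {"acc": 0.8287292817679558},
}

# Collect accuracy for every per-task entry (skipping the "all" aggregate).
task_acc = {
    task: metrics["acc"]
    for task, metrics in latest_results.items()
    if task != "all"
}

# The "all" accuracy is the mean of the per-task accuracies.
mean_acc = sum(task_acc.values()) / len(task_acc)
assert abs(mean_acc - latest_results["all"]["acc"]) < 1e-9
```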
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/kikyou_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kikyou/桐生キキョウ/桔梗 (Blue Archive)
This is the dataset of kikyou/桐生キキョウ/桔梗 (Blue Archive), containing 500 images and their tags.
The core tags of this character are `animal_ears, black_hair, cat_ears, short_hair, halo, black_eyes, blue_halo, tail, cat_tail, multiple_tails, two_tails`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 877.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikyou_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 721.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikyou_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1304 | 1.44 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kikyou_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kikyou_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
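If you only need the extracted files and not the waifuc pipeline, the image/tag pairs can also be walked with the standard library alone. A minimal sketch, assuming the flat layout of the raw archive (each image next to a `.txt` file of the same stem holding comma-separated tags; the `.png` suffix and helper name here are illustrative):

```python
from pathlib import Path


def iter_tagged_images(dataset_dir):
    """Yield (image_path, tags) pairs from an extracted raw-archive directory.

    Assumes a flat layout: each image sits next to a .txt file with the same
    stem containing its comma-separated tags. Only .png is globbed here for
    brevity; extend the pattern if the archive also contains .jpg files.
    """
    for image_path in sorted(Path(dataset_dir).glob("*.png")):
        tag_file = image_path.with_suffix(".txt")
        if not tag_file.exists():
            continue  # image without a tag file; skip it
        tags = [t.strip() for t in tag_file.read_text(encoding="utf-8").split(",")]
        yield image_path, tags
```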
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, completely_nude, blush, nipples, 1boy, medium_breasts, collarbone, hetero, solo_focus, looking_at_viewer, sweat, heart, navel, open_mouth, pubic_hair, sex, simple_background |
| 1 | 7 |  |  |  |  |  | 1boy, 1girl, black_choker, blush, from_side, hetero, solo_focus, white_shirt, erection, sailor_collar, saliva, uncensored, animal_ear_fluff, school_uniform, veiny_penis, testicles, blue_neckerchief, closed_eyes, crying, cum_in_mouth, huge_penis, irrumatio, large_penis, nude, tears |
| 2 | 7 |  |  |  |  |  | 1boy, 1girl, blue_neckerchief, blush, hetero, long_sleeves, serafuku, white_shirt, black_nails, black_sailor_collar, censored, erection, nail_polish, solo_focus, fingernails, ribbon_choker, black_choker, animal_ear_fluff, breasts, closed_mouth, cum, handjob, looking_at_penis, simple_background |
| 3 | 25 |  |  |  |  |  | 1girl, black_sailor_collar, black_skirt, blue_neckerchief, fingernails, long_sleeves, pleated_skirt, serafuku, solo, nail_polish, white_background, black_nails, looking_at_viewer, simple_background, haori, closed_mouth, blush |
| 4 | 5 |  |  |  |  |  | 1girl, black_nails, black_sailor_collar, blue_neckerchief, haori, long_sleeves, looking_at_viewer, nail_polish, serafuku, solo, upper_body, choker, closed_mouth, fingernails, makeup |
| 5 | 5 |  |  |  |  |  | 1girl, black_sailor_collar, blue_neckerchief, choker, haori, looking_at_viewer, serafuku, simple_background, solo, upper_body, white_background, closed_mouth, collarbone, cropped_torso, long_sleeves |
| 6 | 11 |  |  |  |  |  | 1girl, barefoot, black_nails, black_sailor_collar, blue_neckerchief, long_sleeves, looking_at_viewer, pleated_skirt, serafuku, solo, toenail_polish, toes, black_skirt, closed_mouth, indoors, sitting, fingernails, white_shirt, holding_book, soles, bare_legs, choker, foot_focus, knees_up |
| 7 | 13 |  |  |  |  |  | 1girl, looking_at_viewer, solo, closed_mouth, collarbone, navel, alternate_costume, medium_breasts, simple_background, blush, cleavage, white_background, black_bikini, black_nails, nail_polish, stomach, cowboy_shot, fingernails, side-tie_bikini_bottom |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | completely_nude | blush | nipples | 1boy | medium_breasts | collarbone | hetero | solo_focus | looking_at_viewer | sweat | heart | navel | open_mouth | pubic_hair | sex | simple_background | black_choker | from_side | white_shirt | erection | sailor_collar | saliva | uncensored | animal_ear_fluff | school_uniform | veiny_penis | testicles | blue_neckerchief | closed_eyes | crying | cum_in_mouth | huge_penis | irrumatio | large_penis | nude | tears | long_sleeves | serafuku | black_nails | black_sailor_collar | censored | nail_polish | fingernails | ribbon_choker | breasts | closed_mouth | cum | handjob | looking_at_penis | black_skirt | pleated_skirt | solo | white_background | haori | upper_body | choker | makeup | cropped_torso | barefoot | toenail_polish | toes | indoors | sitting | holding_book | soles | bare_legs | foot_focus | knees_up | alternate_costume | cleavage | black_bikini | stomach | cowboy_shot | side-tie_bikini_bottom |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------------|:--------|:----------|:-------|:-----------------|:-------------|:---------|:-------------|:--------------------|:--------|:--------|:--------|:-------------|:-------------|:------|:--------------------|:---------------|:------------|:--------------|:-----------|:----------------|:---------|:-------------|:-------------------|:-----------------|:--------------|:------------|:-------------------|:--------------|:---------|:---------------|:-------------|:------------|:--------------|:-------|:--------|:---------------|:-----------|:--------------|:----------------------|:-----------|:--------------|:--------------|:----------------|:----------|:---------------|:------|:----------|:-------------------|:--------------|:----------------|:-------|:-------------------|:--------|:-------------|:---------|:---------|:----------------|:-----------|:-----------------|:-------|:----------|:----------|:---------------|:--------|:------------|:-------------|:-----------|:--------------------|:-----------|:---------------|:----------|:--------------|:-------------------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | X | | X | | | X | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | X | | X | | | X | X | | | | | | | | X | X | | X | X | | | | X | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 25 |  |  |  |  |  | X | | X | | | | | | | X | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | | X | X | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | | X | X | | | X | | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | | | X | | | X | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | X | X | | X | | | | | | X | | | | | | X | X | X | X | X | | X | | | | | | | | | | | | | | | | |
| 6 | 11 |  |  |  |  |  | X | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | X | | | | | | | | | X | X | X | X | | | X | | | X | | | | X | X | X | | | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 7 | 13 |  |  |  |  |  | X | | X | | | X | X | | | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | | | X | | | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X |
|
vihargagan024/categoryfraud | ---
license: unknown
---
|
trickrascunho/Minhavozz | ---
license: apache-2.0
---
|
ccccrrrr/github-issues-augment | ---
dataset_info:
features:
- name: html_url
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: body
dtype: string
- name: comment_length
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11433769
num_examples: 2175
download_size: 2558965
dataset_size: 11433769
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_rte_present_for_exp_perfect | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 267218
num_examples: 623
- name: train
num_bytes: 231287
num_examples: 497
download_size: 327918
dataset_size: 498505
---
# Dataset Card for "MULTI_VALUE_rte_present_for_exp_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Imran1/pashtuclassifcation | ---
dataset_info:
features:
- name: file
dtype: image
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 85162834.0
num_examples: 42000
download_size: 30369641
dataset_size: 85162834.0
---
# Dataset Card for "pashtuclassifcation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iamroot/mnli-mock-contrastive-axes | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: text_a
dtype: string
- name: text_b
dtype: string
- name: prompt
dtype: string
- name: text_a_embedding
sequence: float32
- name: text_b_embedding
sequence: float32
- name: prompt_embedding
sequence: float32
splits:
- name: train
num_bytes: 2892040066
num_examples: 304513
download_size: 0
dataset_size: 2892040066
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mnli-mock-contrastive-axes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-econometrics-dev | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: dev
num_bytes: 3709
num_examples: 5
download_size: 0
dataset_size: 3709
---
# Dataset Card for "mmlu-econometrics-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/makihara_shiho_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of makihara_shiho/槙原志保 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of makihara_shiho/槙原志保 (THE iDOLM@STER: Cinderella Girls), containing 51 images and their tags.
The core tags of this character are `brown_hair, long_hair, green_eyes, breasts, bow, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 51 | 40.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makihara_shiho_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 51 | 31.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makihara_shiho_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 105 | 58.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makihara_shiho_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 51 | 39.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makihara_shiho_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 105 | 69.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makihara_shiho_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/makihara_shiho_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
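The IMG+TXT packages pair each image with a same-named `.txt` sidecar holding its comma-separated tags. A minimal stdlib sketch for walking such an extracted directory (the flat layout and comma-separated tag format are assumptions about the archive contents):

```python
from pathlib import Path

def iter_tagged_images(dataset_dir, exts=('.png', '.jpg', '.webp')):
    """Yield (image_path, tag_list) pairs from an extracted IMG+TXT archive."""
    for image_path in sorted(Path(dataset_dir).iterdir()):
        if image_path.suffix.lower() not in exts:
            continue
        tag_file = image_path.with_suffix('.txt')
        if not tag_file.exists():
            continue  # skip images without a tag sidecar
        raw = tag_file.read_text(encoding='utf-8')
        tags = [t.strip() for t in raw.split(',') if t.strip()]
        yield image_path, tags
```

This avoids pulling in waifuc when you only need the image/tag pairs, e.g. for training a tagger or captioner.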
## List of Clusters
List of tag clustering results; distinct outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------|
| 0 | 26 |  |  |  |  |  | 1girl, smile, solo, blush, looking_at_viewer, open_mouth, food, dress, apron, earrings, tray, frills, parfait, waitress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | blush | looking_at_viewer | open_mouth | food | dress | apron | earrings | tray | frills | parfait | waitress |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------|:--------------------|:-------------|:-------|:--------|:--------|:-----------|:-------|:---------|:----------|:-----------|
| 0 | 26 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
squad_es | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: squad-es
pretty_name: SQuAD-es
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: v1.1.0
splits:
- name: train
num_bytes: 83680438
num_examples: 87595
- name: validation
num_bytes: 10955800
num_examples: 10570
download_size: 39291362
dataset_size: 94636238
---
# Dataset Card for "squad_es"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/ccasimiro88/TranslateAlignRetrieve](https://github.com/ccasimiro88/TranslateAlignRetrieve)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 39.29 MB
- **Size of the generated dataset:** 94.63 MB
- **Total amount of disk used:** 133.92 MB
### Dataset Summary
Automatic translation of the Stanford Question Answering Dataset (SQuAD) v2 into Spanish
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### v1.1.0
- **Size of downloaded dataset files:** 39.29 MB
- **Size of the generated dataset:** 94.63 MB
- **Total amount of disk used:** 133.92 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [404, 356, 356],
"text": ["Santa Clara, California", "Levi 's Stadium", "Levi 's Stadium en la Bahía de San Francisco en Santa Clara, California."]
},
"context": "\"El Super Bowl 50 fue un partido de fútbol americano para determinar al campeón de la NFL para la temporada 2015. El campeón de ...",
"id": "56be4db0acb8001400a502ee",
"question": "¿Dónde tuvo lugar el Super Bowl 50?",
"title": "Super Bowl _ 50"
}
```
### Data Fields
The data fields are the same among all splits.
#### v1.1.0
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
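The character offsets can be sanity-checked directly: for each answer, `context[answer_start:answer_start + len(text)]` should reproduce `text`. A small helper illustrating this (the record below is a toy stand-in mirroring the v1.1.0 schema, not an actual dataset row):

```python
def check_answer_offsets(record):
    """Return True iff every answer text matches its span in the context."""
    answers = record["answers"]
    return all(
        record["context"][start:start + len(text)] == text
        for text, start in zip(answers["text"], answers["answer_start"])
    )

# Toy record mirroring the v1.1.0 schema (not a real dataset row)
record = {
    "context": "El Super Bowl 50 tuvo lugar en Santa Clara, California.",
    "question": "¿Dónde tuvo lugar el Super Bowl 50?",
    "answers": {"text": ["Santa Clara, California"], "answer_start": [31]},
}
```

Such a check is worth running on machine-translated QA data, since translation and alignment can shift answer spans.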
### Data Splits
| name |train|validation|
|------|----:|---------:|
|v1.1.0|87595| 10570|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The SQuAD-es dataset is licensed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@article{2016arXiv160605250R,
author = {Casimiro Pio , Carrino and Marta R. , Costa-jussa and Jose A. R. , Fonollosa},
title = "{Automatic Spanish Translation of the SQuAD Dataset for Multilingual
Question Answering}",
journal = {arXiv e-prints},
year = 2019,
          eid = {arXiv:1912.05200},
        pages = {arXiv:1912.05200},
archivePrefix = {arXiv},
       eprint = {1912.05200},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun) for adding this dataset. |
freshpearYoon/train_free_24 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 9604542920
num_examples: 10000
download_size: 1295017852
dataset_size: 9604542920
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
irodkin/test_dataset_for_SD | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 998479.0
num_examples: 3
download_size: 983584
dataset_size: 998479.0
---
# Dataset Card for "test_dataset_for_SD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EleutherAI/quirky_nli_alice_hard | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: id
dtype: string
- name: choices
sequence: string
- name: bob_label
dtype: int64
- name: difficulty
dtype: float64
- name: statement
dtype: string
- name: character
dtype: string
- name: alice_label
dtype: int64
splits:
- name: train
num_bytes: 331205.67582760775
num_examples: 1401
- name: validation
num_bytes: 114536.40525
num_examples: 477
- name: test
num_bytes: 117796.156
num_examples: 496
download_size: 220319
dataset_size: 563538.2370776078
---
# Dataset Card for "quirky_nli_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jlbaker361/league_faces_captioned | ---
dataset_info:
features:
- name: splash
dtype: image
- name: tile
dtype: image
- name: label
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 33207414.0
num_examples: 378
download_size: 32569001
dataset_size: 33207414.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mohanrajv27/Finetuned-text-to-sql | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 215198580.9182748
num_examples: 235987
- name: test
num_bytes: 23911156.081725195
num_examples: 26221
download_size: 85588612
dataset_size: 239109737.0
---
# Dataset Card for "Finetuned-text-to-sql"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713225188 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 23900
num_examples: 69
download_size: 20315
dataset_size: 23900
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713225188"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Qdrant/google-landmark-geo | ---
language:
- en
pretty_name: Geo Coordinate Augmented Google-Landmarks
task_categories:
- image-classification
source_datasets:
- Google Landmarks V2
size_categories:
- < 50K
license: cc-by-4.0
---
# Dataset Card for Geo Coordinate Augmented Google-Landmarks
Geo coordinates were added to one tar file's worth of images from [Google Landmark V2](https://github.com/cvdfoundation/google-landmark). Not all of the
images could be geo-tagged, as some lack coordinates on their Wikimedia pages.
## Dataset Details
### Dataset Description
Geo coordinates were added to one tar file's worth of images from [Google Landmark V2](https://github.com/cvdfoundation/google-landmark). Many more images could have
been downloaded, but this subset was found to strike a good balance between data size and sample size.
The intended use of the dataset is to demonstrate a geo-filter in Qdrant combined with image similarity search.
Not all of the images could be geo-tagged due to a lack of coordinates on the image's Wikimedia page.
We provide the raw geotagged file as a geojson document, train_attribution_geo.json.
We also provide a json file that includes the data above along with embedding vectors for the images, id_payload_vector.json.
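A GeoJSON FeatureCollection can be filtered with the stdlib alone; the sketch below keeps features whose point geometry falls inside a bounding box (the exact feature structure of `train_attribution_geo.json` is an assumption here):

```python
import json

def features_in_bbox(geojson, min_lon, min_lat, max_lon, max_lat):
    """Return features whose Point geometry lies inside the bounding box."""
    selected = []
    for feature in geojson["features"]:
        geometry = feature.get("geometry") or {}
        if geometry.get("type") != "Point":
            continue
        lon, lat = geometry["coordinates"]  # GeoJSON order: [longitude, latitude]
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
            selected.append(feature)
    return selected

# Example: load the provided file and filter it
# with open("train_attribution_geo.json") as f:
#     collection = json.load(f)
# nearby = features_in_bbox(collection, 2.0, 48.0, 3.0, 49.0)
```

Qdrant's own geo conditions (e.g. `geo_radius`) supersede this once the payloads are uploaded; the sketch is only for inspecting the raw file.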
The thingsvision library was used to create the image embeddings, with the following model configuration:
```python
model_name = 'clip'
model_parameters = {
'variant': 'ViT-B/32'
}
```
The code directory contains the Python code used to geotag the images as well as generate the vectors. It can also be used to upload the embeddings to a Qdrant DB instance. This code is NOT production-quality; the
focus was on quickly and correctly getting the coordinates and embedding the images.
The license for this data and code matches that of the original Google Landmarks V2 dataset: CC BY 4.0.
## Uses
### Direct Use
The primary use case is image similarity search with geographic filtering.
|
naorm/malware-text-db-cyner-512 | ---
dataset_info:
features:
- name: Type
dtype: string
- name: Text
dtype: string
- name: Fixed Text
dtype: string
- name: Score
dtype: float64
- name: Original Sentence ID
dtype: int64
- name: Original Sentence
dtype: string
- name: Decoded Sentence
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 13704893
num_examples: 4899
download_size: 1226412
dataset_size: 13704893
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
larrylawl/alpaca-cleaned-indon | ---
license: apache-2.0
---
This dataset contains the Indonesian translation of [`alpaca-cleaned`](https://huggingface.co/datasets/yahma/alpaca-cleaned). I translated it using [`facebook/nllb-200-distilled-1.3B`](https://huggingface.co/docs/transformers/model_doc/nllb). |
emaeon/train3 | ---
dataset_info:
features:
- name: code1
dtype: string
- name: code2
dtype: string
- name: similar
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 9016079240
num_examples: 5000000
download_size: 4018276134
dataset_size: 9016079240
---
# Dataset Card for "train3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rishitunu/ecc_crackdetector_dataset_exhaustive | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 13168386.682
num_examples: 1289
download_size: 11961853
dataset_size: 13168386.682
---
# Dataset Card for "ecc_crackdetector_dataset_exhaustive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hodginson/Living-pa-rag | ---
license: mit
---
|
open-llm-leaderboard/details_arvindanand__Deepseek-Wizard-33B-slerp | ---
pretty_name: Evaluation run of arvindanand/Deepseek-Wizard-33B-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [arvindanand/Deepseek-Wizard-33B-slerp](https://huggingface.co/arvindanand/Deepseek-Wizard-33B-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_arvindanand__Deepseek-Wizard-33B-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-10T11:55:40.376566](https://huggingface.co/datasets/open-llm-leaderboard/details_arvindanand__Deepseek-Wizard-33B-slerp/blob/main/results_2024-04-10T11-55-40.376566.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3718561973371085,\n\
\ \"acc_stderr\": 0.03386409948633655,\n \"acc_norm\": 0.3767523316002052,\n\
\ \"acc_norm_stderr\": 0.03478643489608982,\n \"mc1\": 0.25091799265605874,\n\
\ \"mc1_stderr\": 0.015176985027707693,\n \"mc2\": 0.4481064242625741,\n\
\ \"mc2_stderr\": 0.016867993246611538\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.2764505119453925,\n \"acc_stderr\": 0.013069662474252428,\n\
\ \"acc_norm\": 0.31399317406143346,\n \"acc_norm_stderr\": 0.013562691224726284\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.31398127862975506,\n\
\ \"acc_stderr\": 0.00463160353975196,\n \"acc_norm\": 0.3693487353116909,\n\
\ \"acc_norm_stderr\": 0.004816421208654089\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.35555555555555557,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.35555555555555557,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.3881578947368421,\n \"acc_stderr\": 0.03965842097512744,\n\
\ \"acc_norm\": 0.3881578947368421,\n \"acc_norm_stderr\": 0.03965842097512744\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.36,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.3622641509433962,\n \"acc_stderr\": 0.029582245128384296,\n\
\ \"acc_norm\": 0.3622641509433962,\n \"acc_norm_stderr\": 0.029582245128384296\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2847222222222222,\n\
\ \"acc_stderr\": 0.03773809990686935,\n \"acc_norm\": 0.2847222222222222,\n\
\ \"acc_norm_stderr\": 0.03773809990686935\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\"\
: 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3352601156069364,\n\
\ \"acc_stderr\": 0.03599586301247078,\n \"acc_norm\": 0.3352601156069364,\n\
\ \"acc_norm_stderr\": 0.03599586301247078\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.04440521906179326,\n\
\ \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.04440521906179326\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n\
\ \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.3659574468085106,\n \"acc_stderr\": 0.031489558297455304,\n\
\ \"acc_norm\": 0.3659574468085106,\n \"acc_norm_stderr\": 0.031489558297455304\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.32456140350877194,\n\
\ \"acc_stderr\": 0.04404556157374767,\n \"acc_norm\": 0.32456140350877194,\n\
\ \"acc_norm_stderr\": 0.04404556157374767\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.38620689655172413,\n \"acc_stderr\": 0.04057324734419034,\n\
\ \"acc_norm\": 0.38620689655172413,\n \"acc_norm_stderr\": 0.04057324734419034\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.36507936507936506,\n \"acc_stderr\": 0.024796060602699965,\n \"\
acc_norm\": 0.36507936507936506,\n \"acc_norm_stderr\": 0.024796060602699965\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3253968253968254,\n\
\ \"acc_stderr\": 0.041905964388711366,\n \"acc_norm\": 0.3253968253968254,\n\
\ \"acc_norm_stderr\": 0.041905964388711366\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4483870967741935,\n\
\ \"acc_stderr\": 0.028292056830112725,\n \"acc_norm\": 0.4483870967741935,\n\
\ \"acc_norm_stderr\": 0.028292056830112725\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.28078817733990147,\n \"acc_stderr\": 0.0316185633535861,\n\
\ \"acc_norm\": 0.28078817733990147,\n \"acc_norm_stderr\": 0.0316185633535861\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.4909090909090909,\n \"acc_stderr\": 0.0390369864774844,\n\
\ \"acc_norm\": 0.4909090909090909,\n \"acc_norm_stderr\": 0.0390369864774844\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.3838383838383838,\n \"acc_stderr\": 0.03464881675016338,\n \"\
acc_norm\": 0.3838383838383838,\n \"acc_norm_stderr\": 0.03464881675016338\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.35233160621761656,\n \"acc_stderr\": 0.03447478286414358,\n\
\ \"acc_norm\": 0.35233160621761656,\n \"acc_norm_stderr\": 0.03447478286414358\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.30256410256410254,\n \"acc_stderr\": 0.023290888053772742,\n\
\ \"acc_norm\": 0.30256410256410254,\n \"acc_norm_stderr\": 0.023290888053772742\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3037037037037037,\n \"acc_stderr\": 0.028037929969114996,\n \
\ \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.028037929969114996\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.35714285714285715,\n \"acc_stderr\": 0.031124619309328177,\n\
\ \"acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.031124619309328177\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"\
acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.45504587155963305,\n \"acc_stderr\": 0.021350503090925163,\n \"\
acc_norm\": 0.45504587155963305,\n \"acc_norm_stderr\": 0.021350503090925163\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.3101851851851852,\n \"acc_stderr\": 0.031546962856566295,\n \"\
acc_norm\": 0.3101851851851852,\n \"acc_norm_stderr\": 0.031546962856566295\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.35294117647058826,\n \"acc_stderr\": 0.033540924375915195,\n \"\
acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.033540924375915195\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.43037974683544306,\n \"acc_stderr\": 0.032230171959375976,\n \
\ \"acc_norm\": 0.43037974683544306,\n \"acc_norm_stderr\": 0.032230171959375976\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.45739910313901344,\n\
\ \"acc_stderr\": 0.03343577705583065,\n \"acc_norm\": 0.45739910313901344,\n\
\ \"acc_norm_stderr\": 0.03343577705583065\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.32061068702290074,\n \"acc_stderr\": 0.04093329229834278,\n\
\ \"acc_norm\": 0.32061068702290074,\n \"acc_norm_stderr\": 0.04093329229834278\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.5371900826446281,\n \"acc_stderr\": 0.04551711196104218,\n \"\
acc_norm\": 0.5371900826446281,\n \"acc_norm_stderr\": 0.04551711196104218\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.39814814814814814,\n\
\ \"acc_stderr\": 0.04732332615978815,\n \"acc_norm\": 0.39814814814814814,\n\
\ \"acc_norm_stderr\": 0.04732332615978815\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.4233128834355828,\n \"acc_stderr\": 0.03881891213334384,\n\
\ \"acc_norm\": 0.4233128834355828,\n \"acc_norm_stderr\": 0.03881891213334384\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n\
\ \"acc_stderr\": 0.04521829902833586,\n \"acc_norm\": 0.3482142857142857,\n\
\ \"acc_norm_stderr\": 0.04521829902833586\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.46601941747572817,\n \"acc_stderr\": 0.0493929144727348,\n\
\ \"acc_norm\": 0.46601941747572817,\n \"acc_norm_stderr\": 0.0493929144727348\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6153846153846154,\n\
\ \"acc_stderr\": 0.03187195347942466,\n \"acc_norm\": 0.6153846153846154,\n\
\ \"acc_norm_stderr\": 0.03187195347942466\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.44699872286079184,\n\
\ \"acc_stderr\": 0.017779225233394216,\n \"acc_norm\": 0.44699872286079184,\n\
\ \"acc_norm_stderr\": 0.017779225233394216\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.4508670520231214,\n \"acc_stderr\": 0.026788811931562757,\n\
\ \"acc_norm\": 0.4508670520231214,\n \"acc_norm_stderr\": 0.026788811931562757\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23687150837988827,\n\
\ \"acc_stderr\": 0.014219570788103982,\n \"acc_norm\": 0.23687150837988827,\n\
\ \"acc_norm_stderr\": 0.014219570788103982\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.3790849673202614,\n \"acc_stderr\": 0.027780141207023344,\n\
\ \"acc_norm\": 0.3790849673202614,\n \"acc_norm_stderr\": 0.027780141207023344\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4340836012861736,\n\
\ \"acc_stderr\": 0.0281502322445356,\n \"acc_norm\": 0.4340836012861736,\n\
\ \"acc_norm_stderr\": 0.0281502322445356\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.3487654320987654,\n \"acc_stderr\": 0.02651759772446501,\n\
\ \"acc_norm\": 0.3487654320987654,\n \"acc_norm_stderr\": 0.02651759772446501\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.33687943262411346,\n \"acc_stderr\": 0.028195534873966734,\n \
\ \"acc_norm\": 0.33687943262411346,\n \"acc_norm_stderr\": 0.028195534873966734\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.32659713168187743,\n\
\ \"acc_stderr\": 0.011977676704715993,\n \"acc_norm\": 0.32659713168187743,\n\
\ \"acc_norm_stderr\": 0.011977676704715993\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.25735294117647056,\n \"acc_stderr\": 0.026556519470041513,\n\
\ \"acc_norm\": 0.25735294117647056,\n \"acc_norm_stderr\": 0.026556519470041513\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.3415032679738562,\n \"acc_stderr\": 0.01918463932809249,\n \
\ \"acc_norm\": 0.3415032679738562,\n \"acc_norm_stderr\": 0.01918463932809249\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04789131426105757,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04789131426105757\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.45714285714285713,\n \"acc_stderr\": 0.03189141832421397,\n\
\ \"acc_norm\": 0.45714285714285713,\n \"acc_norm_stderr\": 0.03189141832421397\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.43283582089552236,\n\
\ \"acc_stderr\": 0.03503490923673282,\n \"acc_norm\": 0.43283582089552236,\n\
\ \"acc_norm_stderr\": 0.03503490923673282\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2891566265060241,\n\
\ \"acc_stderr\": 0.035294868015111155,\n \"acc_norm\": 0.2891566265060241,\n\
\ \"acc_norm_stderr\": 0.035294868015111155\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.3567251461988304,\n \"acc_stderr\": 0.03674013002860954,\n\
\ \"acc_norm\": 0.3567251461988304,\n \"acc_norm_stderr\": 0.03674013002860954\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.25091799265605874,\n\
\ \"mc1_stderr\": 0.015176985027707693,\n \"mc2\": 0.4481064242625741,\n\
\ \"mc2_stderr\": 0.016867993246611538\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5477505919494869,\n \"acc_stderr\": 0.01398825621660601\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/arvindanand/Deepseek-Wizard-33B-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|arc:challenge|25_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|gsm8k|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hellaswag|10_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T11-55-40.376566.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-10T11-55-40.376566.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- '**/details_harness|winogrande|5_2024-04-10T11-55-40.376566.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-10T11-55-40.376566.parquet'
- config_name: results
data_files:
- split: 2024_04_10T11_55_40.376566
path:
- results_2024-04-10T11-55-40.376566.parquet
- split: latest
path:
- results_2024-04-10T11-55-40.376566.parquet
---
# Dataset Card for Evaluation run of arvindanand/Deepseek-Wizard-33B-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [arvindanand/Deepseek-Wizard-33B-slerp](https://huggingface.co/arvindanand/Deepseek-Wizard-33B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_arvindanand__Deepseek-Wizard-33B-slerp",
	"harness_winogrande_5",
	split="latest")
```
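As a convenience (not part of the original card), the per-task config names listed above can be derived mechanically from the harness task ids: `|`, `:`, and `-` are each replaced by underscores, so `"harness|arc:challenge|25"` becomes `"harness_arc_challenge_25"`. A small sketch:

```python
def task_to_config(task: str) -> str:
    """Map a harness task id (e.g. "harness|arc:challenge|25")
    to this dataset's config name (e.g. "harness_arc_challenge_25")."""
    for ch in "|:-":
        task = task.replace(ch, "_")
    return task


# Example: build the config name for a subject, then load it.
config_name = task_to_config("harness|hendrycksTest-world_religions|5")
# config_name == "harness_hendrycksTest_world_religions_5"
```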
## Latest results
These are the [latest results from run 2024-04-10T11:55:40.376566](https://huggingface.co/datasets/open-llm-leaderboard/details_arvindanand__Deepseek-Wizard-33B-slerp/blob/main/results_2024-04-10T11-55-40.376566.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task has its own config, with a "latest" split pointing to its most recent evaluation):
```python
{
"all": {
"acc": 0.3718561973371085,
"acc_stderr": 0.03386409948633655,
"acc_norm": 0.3767523316002052,
"acc_norm_stderr": 0.03478643489608982,
"mc1": 0.25091799265605874,
"mc1_stderr": 0.015176985027707693,
"mc2": 0.4481064242625741,
"mc2_stderr": 0.016867993246611538
},
"harness|arc:challenge|25": {
"acc": 0.2764505119453925,
"acc_stderr": 0.013069662474252428,
"acc_norm": 0.31399317406143346,
"acc_norm_stderr": 0.013562691224726284
},
"harness|hellaswag|10": {
"acc": 0.31398127862975506,
"acc_stderr": 0.00463160353975196,
"acc_norm": 0.3693487353116909,
"acc_norm_stderr": 0.004816421208654089
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.24,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.35555555555555557,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3881578947368421,
"acc_stderr": 0.03965842097512744,
"acc_norm": 0.3881578947368421,
"acc_norm_stderr": 0.03965842097512744
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.3622641509433962,
"acc_stderr": 0.029582245128384296,
"acc_norm": 0.3622641509433962,
"acc_norm_stderr": 0.029582245128384296
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2847222222222222,
"acc_stderr": 0.03773809990686935,
"acc_norm": 0.2847222222222222,
"acc_norm_stderr": 0.03773809990686935
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3352601156069364,
"acc_stderr": 0.03599586301247078,
"acc_norm": 0.3352601156069364,
"acc_norm_stderr": 0.03599586301247078
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.04440521906179326,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.04440521906179326
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3659574468085106,
"acc_stderr": 0.031489558297455304,
"acc_norm": 0.3659574468085106,
"acc_norm_stderr": 0.031489558297455304
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.32456140350877194,
"acc_stderr": 0.04404556157374767,
"acc_norm": 0.32456140350877194,
"acc_norm_stderr": 0.04404556157374767
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.38620689655172413,
"acc_stderr": 0.04057324734419034,
"acc_norm": 0.38620689655172413,
"acc_norm_stderr": 0.04057324734419034
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.024796060602699965,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.024796060602699965
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3253968253968254,
"acc_stderr": 0.041905964388711366,
"acc_norm": 0.3253968253968254,
"acc_norm_stderr": 0.041905964388711366
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4483870967741935,
"acc_stderr": 0.028292056830112725,
"acc_norm": 0.4483870967741935,
"acc_norm_stderr": 0.028292056830112725
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.28078817733990147,
"acc_stderr": 0.0316185633535861,
"acc_norm": 0.28078817733990147,
"acc_norm_stderr": 0.0316185633535861
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.4909090909090909,
"acc_stderr": 0.0390369864774844,
"acc_norm": 0.4909090909090909,
"acc_norm_stderr": 0.0390369864774844
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.3838383838383838,
"acc_stderr": 0.03464881675016338,
"acc_norm": 0.3838383838383838,
"acc_norm_stderr": 0.03464881675016338
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.35233160621761656,
"acc_stderr": 0.03447478286414358,
"acc_norm": 0.35233160621761656,
"acc_norm_stderr": 0.03447478286414358
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.30256410256410254,
"acc_stderr": 0.023290888053772742,
"acc_norm": 0.30256410256410254,
"acc_norm_stderr": 0.023290888053772742
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.028037929969114996,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.028037929969114996
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.031124619309328177,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.031124619309328177
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.45504587155963305,
"acc_stderr": 0.021350503090925163,
"acc_norm": 0.45504587155963305,
"acc_norm_stderr": 0.021350503090925163
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3101851851851852,
"acc_stderr": 0.031546962856566295,
"acc_norm": 0.3101851851851852,
"acc_norm_stderr": 0.031546962856566295
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.033540924375915195,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.033540924375915195
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.43037974683544306,
"acc_stderr": 0.032230171959375976,
"acc_norm": 0.43037974683544306,
"acc_norm_stderr": 0.032230171959375976
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.45739910313901344,
"acc_stderr": 0.03343577705583065,
"acc_norm": 0.45739910313901344,
"acc_norm_stderr": 0.03343577705583065
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.32061068702290074,
"acc_stderr": 0.04093329229834278,
"acc_norm": 0.32061068702290074,
"acc_norm_stderr": 0.04093329229834278
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5371900826446281,
"acc_stderr": 0.04551711196104218,
"acc_norm": 0.5371900826446281,
"acc_norm_stderr": 0.04551711196104218
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.39814814814814814,
"acc_stderr": 0.04732332615978815,
"acc_norm": 0.39814814814814814,
"acc_norm_stderr": 0.04732332615978815
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.4233128834355828,
"acc_stderr": 0.03881891213334384,
"acc_norm": 0.4233128834355828,
"acc_norm_stderr": 0.03881891213334384
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.04521829902833586,
"acc_norm": 0.3482142857142857,
"acc_norm_stderr": 0.04521829902833586
},
"harness|hendrycksTest-management|5": {
"acc": 0.46601941747572817,
"acc_stderr": 0.0493929144727348,
"acc_norm": 0.46601941747572817,
"acc_norm_stderr": 0.0493929144727348
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6153846153846154,
"acc_stderr": 0.03187195347942466,
"acc_norm": 0.6153846153846154,
"acc_norm_stderr": 0.03187195347942466
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.44699872286079184,
"acc_stderr": 0.017779225233394216,
"acc_norm": 0.44699872286079184,
"acc_norm_stderr": 0.017779225233394216
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4508670520231214,
"acc_stderr": 0.026788811931562757,
"acc_norm": 0.4508670520231214,
"acc_norm_stderr": 0.026788811931562757
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23687150837988827,
"acc_stderr": 0.014219570788103982,
"acc_norm": 0.23687150837988827,
"acc_norm_stderr": 0.014219570788103982
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.3790849673202614,
"acc_stderr": 0.027780141207023344,
"acc_norm": 0.3790849673202614,
"acc_norm_stderr": 0.027780141207023344
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4340836012861736,
"acc_stderr": 0.0281502322445356,
"acc_norm": 0.4340836012861736,
"acc_norm_stderr": 0.0281502322445356
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.3487654320987654,
"acc_stderr": 0.02651759772446501,
"acc_norm": 0.3487654320987654,
"acc_norm_stderr": 0.02651759772446501
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.33687943262411346,
"acc_stderr": 0.028195534873966734,
"acc_norm": 0.33687943262411346,
"acc_norm_stderr": 0.028195534873966734
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.32659713168187743,
"acc_stderr": 0.011977676704715993,
"acc_norm": 0.32659713168187743,
"acc_norm_stderr": 0.011977676704715993
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.25735294117647056,
"acc_stderr": 0.026556519470041513,
"acc_norm": 0.25735294117647056,
"acc_norm_stderr": 0.026556519470041513
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.3415032679738562,
"acc_stderr": 0.01918463932809249,
"acc_norm": 0.3415032679738562,
"acc_norm_stderr": 0.01918463932809249
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5,
"acc_stderr": 0.04789131426105757,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04789131426105757
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.45714285714285713,
"acc_stderr": 0.03189141832421397,
"acc_norm": 0.45714285714285713,
"acc_norm_stderr": 0.03189141832421397
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.43283582089552236,
"acc_stderr": 0.03503490923673282,
"acc_norm": 0.43283582089552236,
"acc_norm_stderr": 0.03503490923673282
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2891566265060241,
"acc_stderr": 0.035294868015111155,
"acc_norm": 0.2891566265060241,
"acc_norm_stderr": 0.035294868015111155
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3567251461988304,
"acc_stderr": 0.03674013002860954,
"acc_norm": 0.3567251461988304,
"acc_norm_stderr": 0.03674013002860954
},
"harness|truthfulqa:mc|0": {
"mc1": 0.25091799265605874,
"mc1_stderr": 0.015176985027707693,
"mc2": 0.4481064242625741,
"mc2_stderr": 0.016867993246611538
},
"harness|winogrande|5": {
"acc": 0.5477505919494869,
"acc_stderr": 0.01398825621660601
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
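Since the results above are plain JSON, per-task metrics can be ranked with a few lines of standard Python. The values below are copied from the table above (only a subset of tasks is shown for brevity):

```python
# Rank a subset of the per-task metrics reported above by normalized accuracy.
results = {
    "harness|hendrycksTest-computer_security|5": {"acc_norm": 0.65},
    "harness|hendrycksTest-marketing|5": {"acc_norm": 0.6153846153846154},
    "harness|hendrycksTest-high_school_computer_science|5": {"acc_norm": 0.58},
    "harness|hendrycksTest-moral_scenarios|5": {"acc_norm": 0.23687150837988827},
}

ranked = sorted(results.items(), key=lambda kv: kv[1]["acc_norm"], reverse=True)
for task, metrics in ranked:
    print(f"{task}: {metrics['acc_norm']:.3f}")
```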
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
boda/kaneko_data | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: corrections
list:
- name: correct
dtype: string
- name: end
dtype: int64
- name: error
dtype: string
- name: explanation
dtype: string
- name: start
dtype: int64
- name: incorrect_sentence
dtype: string
- name: correct_sentence
dtype: string
splits:
- name: train
num_bytes: 882615.983310153
num_examples: 1294
- name: test
num_bytes: 98220.016689847
num_examples: 144
download_size: 456989
dataset_size: 980836.0
---
# Dataset Card for "kaneko_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
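The `corrections` entries in the schema above carry character spans (`start`, `end`) into `incorrect_sentence`; applying them right-to-left reconstructs the corrected text. A minimal sketch with a hypothetical record (the field layout follows the schema above, but the example sentence itself is made up):

```python
def apply_corrections(sentence: str, corrections: list) -> str:
    """Replace each (start, end) span with its correction, right to left
    so that earlier offsets stay valid after each substitution."""
    for c in sorted(corrections, key=lambda c: c["start"], reverse=True):
        sentence = sentence[:c["start"]] + c["correct"] + sentence[c["end"]:]
    return sentence

record = {
    "incorrect_sentence": "He go to school yesterday.",
    "corrections": [
        {"start": 3, "end": 5, "error": "go", "correct": "went",
         "explanation": "past tense required"},
    ],
}
print(apply_corrections(record["incorrect_sentence"], record["corrections"]))
# He went to school yesterday.
```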
Impe/Stuff | ---
license: afl-3.0
---
|
nielsr/datacomp_small_english_captions | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: uid
dtype: string
- name: url
dtype: string
- name: text
dtype: string
- name: original_width
dtype: int64
- name: original_height
dtype: int64
- name: clip_b32_similarity_score
dtype: float32
- name: clip_l14_similarity_score
dtype: float32
- name: face_bboxes
sequence:
sequence: float64
- name: sha256
dtype: string
- name: detected_language
dtype: string
splits:
- name: train
num_bytes: 1172007917.4476998
num_examples: 3651302
download_size: 936181679
dataset_size: 1172007917.4476998
---
# Dataset Card for "datacomp_small_english_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lenML/oaast_rm_full_jieba | ---
license: apache-2.0
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
tags:
- human-feedback
size_categories:
- 10K<n<100K
---
This dataset attempts to mitigate the LLM repetition problem: a word-segmentation model (jieba) is used to augment the OASST corpus with "stuttered" (repeated-content) data, providing stronger rejection of duplicated content.
Additionally, all self-cognition fine-tuning samples have been filtered out.
files:
- oaast_rm_full_jieba.jsonl : word level repeat
- oaast_rm_full_sent_jieba.jsonl : sentence level repeat |
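The word-level augmentation can be sketched roughly as follows. Whitespace tokenization stands in for jieba here, and the repetition scheme is an illustrative guess, not the exact procedure used to build the files above:

```python
import random

def stutter(sentence: str, repeat_prob: float = 0.3, seed: int = 0) -> str:
    """Randomly duplicate word tokens to build 'stuttered' rejected responses."""
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        out.append(tok)
        if rng.random() < repeat_prob:
            out.append(tok)  # word-level repetition
    return " ".join(out)

print(stutter("the model keeps repeating itself"))
```

Pairs of the original sentence (chosen) and a stuttered variant (rejected) can then serve as reward-model training examples.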
umair-ahmad/test-segformer | ---
language:
- en
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/matsuura_kanan_lovelivesunshine | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of matsuura_kanan/松浦果南/마츠우라카난 (Love Live! Sunshine!!)
This is the dataset of matsuura_kanan/松浦果南/마츠우라카난 (Love Live! Sunshine!!), containing 500 images and their tags.
The core tags of this character are `blue_hair, purple_eyes, long_hair, ponytail, bangs, sidelocks, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 714.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuura_kanan_lovelivesunshine/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 371.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuura_kanan_lovelivesunshine/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1238 | 830.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuura_kanan_lovelivesunshine/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 614.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuura_kanan_lovelivesunshine/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1238 | 1.21 GiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuura_kanan_lovelivesunshine/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/matsuura_kanan_lovelivesunshine',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 31 |  |  |  |  |  | 1girl, solo, looking_at_viewer, striped_bikini, cleavage, blush, smile, wetsuit, medium_breasts, large_breasts, collarbone, front-tie_bikini_top, navel, day, bikini_top_only, ocean, open_bodysuit, sky, cloud, outdoors, high_ponytail, open_mouth, unzipped |
| 1 | 9 |  |  |  |  |  | 1girl, blue_sky, blush, cloud, day, looking_at_viewer, outdoors, smile, solo, white_dress, collarbone, ocean, bare_shoulders, sundress, sun_hat |
| 2 | 12 |  |  |  |  |  | 1girl, happy_birthday, looking_at_viewer, smile, solo, character_name, dated, english_text, blush, upper_body, jewelry, one_eye_closed |
| 3 | 27 |  |  |  |  |  | 1girl, looking_at_viewer, serafuku, solo, uranohoshi_school_uniform, blush, neckerchief, smile, short_sleeves, simple_background, upper_body, pleated_skirt, white_background, grey_skirt, open_mouth, sailor_collar |
| 4 | 8 |  |  |  |  |  | 1girl, midriff, navel, solo, white_gloves, hair_ornament, looking_at_viewer, open_mouth, skirt, smile, thighhighs, blush, earrings, fish, high_ponytail, medium_breasts, one_eye_closed, scrunchie |
| 5 | 12 |  |  |  |  |  | 1girl, looking_at_viewer, short_sleeves, solo, white_gloves, blush, epaulettes, hat_feather, open_mouth, :d, blue_headwear, whistle, earrings, feathers, red_ascot, white_background, red_skirt, simple_background |
| 6 | 6 |  |  |  |  |  | 1girl, bracelet, smile, solo, blush, collarbone, holding, hairclip, looking_at_viewer, shirt |
| 7 | 8 |  |  |  |  |  | 1girl, heart_hair_ornament, looking_at_viewer, plaid_skirt, solo, red_necktie, blush, brown_jacket, brown_skirt, pleated_skirt, long_sleeves, polka_dot_scrunchie, white_shirt, bag, bracelet, miniskirt, red_scrunchie, smile, wrist_scrunchie, collared_shirt, holding, medium_breasts, open_jacket, open_mouth, simple_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | striped_bikini | cleavage | blush | smile | wetsuit | medium_breasts | large_breasts | collarbone | front-tie_bikini_top | navel | day | bikini_top_only | ocean | open_bodysuit | sky | cloud | outdoors | high_ponytail | open_mouth | unzipped | blue_sky | white_dress | bare_shoulders | sundress | sun_hat | happy_birthday | character_name | dated | english_text | upper_body | jewelry | one_eye_closed | serafuku | uranohoshi_school_uniform | neckerchief | short_sleeves | simple_background | pleated_skirt | white_background | grey_skirt | sailor_collar | midriff | white_gloves | hair_ornament | skirt | thighhighs | earrings | fish | scrunchie | epaulettes | hat_feather | :d | blue_headwear | whistle | feathers | red_ascot | red_skirt | bracelet | holding | hairclip | shirt | heart_hair_ornament | plaid_skirt | red_necktie | brown_jacket | brown_skirt | long_sleeves | polka_dot_scrunchie | white_shirt | bag | miniskirt | red_scrunchie | wrist_scrunchie | collared_shirt | open_jacket |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:-----------------|:-----------|:--------|:--------|:----------|:-----------------|:----------------|:-------------|:-----------------------|:--------|:------|:------------------|:--------|:----------------|:------|:--------|:-----------|:----------------|:-------------|:-----------|:-----------|:--------------|:-----------------|:-----------|:----------|:-----------------|:-----------------|:--------|:---------------|:-------------|:----------|:-----------------|:-----------|:----------------------------|:--------------|:----------------|:--------------------|:----------------|:-------------------|:-------------|:----------------|:----------|:---------------|:----------------|:--------|:-------------|:-----------|:-------|:------------|:-------------|:--------------|:-----|:----------------|:----------|:-----------|:------------|:------------|:-----------|:----------|:-----------|:--------|:----------------------|:--------------|:--------------|:---------------|:--------------|:---------------|:----------------------|:--------------|:------|:------------|:----------------|:------------------|:-----------------|:--------------|
| 0 | 31 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | | | X | X | | | | X | | | X | | X | | | X | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 12 |  |  |  |  |  | X | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 27 |  |  |  |  |  | X | X | X | | | X | X | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | X | | | X | X | | X | | | | X | | | | | | | | X | X | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 12 |  |  |  |  |  | X | X | X | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | | X | | | | X | | | | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | X | X | | | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | X | X | | | X | X | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
chansung/eval_test99 | ---
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
dataset_info:
features:
- name: instructions
dtype: string
- name: target_responses
dtype: string
- name: candidate_responses
dtype: string
- name: eval_prompts
dtype: string
- name: similarity_scores
dtype: int64
- name: precision_scores
dtype: int64
splits:
- name: eval
num_bytes: 70744
num_examples: 16
download_size: 62215
dataset_size: 70744
---
# Dataset Card for "eval_test99"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955856 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-1.3b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
armahlovis/BlackWriterOnFreedom | ---
license: mit
---
This dataset of more than 1 million word tokens consists of works by historical Black writers who wrote about Black emancipation. Included in this dataset are:
- By Frederick Douglass: Collected Articles of Frederick Douglass (8K word tokens), Three Addresses (28K), Why is the Negro Lynched? (15K), My Bondage and My Freedom (135K), and Narrative of the Life of Frederick Douglass (40K).
- By W. E. Burghardt Du Bois: Darkwater (67K), The Gift of Black Folk (77K), John Brown (101K), The Negro Problem (36K), The Conservation of Races (5K), The Negro (57K), The Quest of the Silver Fleece (109K), and The Suppression of the African Slave-Trade (123K).
- By Booker T. Washington: Up from Slavery: An Autobiography (77K).

The evaluation dataset consists of The Underground Railroad by William Still (400K word tokens).
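The word-token counts above can be reproduced for any plain-text source with a whitespace tokenizer. A rough sketch (exact counts depend on the tokenization the curator used):

```python
def word_token_count(text: str) -> int:
    """Count whitespace-delimited word tokens, the unit used in the figures above."""
    return len(text.split())

sample = "Narrative of the Life of Frederick Douglass, an American Slave."
print(word_token_count(sample))  # 10
```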
Lollitor/PocketDataset | ---
dataset_info:
features:
- name: -logKd/Ki
dtype: float64
- name: inputs
dtype: string
splits:
- name: train
num_bytes: 4918269
num_examples: 18926
download_size: 1980562
dataset_size: 4918269
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "PocketDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yusufagung29/pengadilan_dataset_mp3_aug_preparedaa | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: input_length
dtype: float64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 230528064
num_examples: 240
- name: test
num_bytes: 57631400
num_examples: 60
download_size: 49103627
dataset_size: 288159464
---
# Dataset Card for "pengadilan_dataset_mp3_aug_preparedaa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Priyash/natural_language | ---
dataset_info:
features:
- name: review
dtype: string
- name: Length
dtype: int64
splits:
- name: train
num_bytes: 4742.1
num_examples: 9
- name: validation
num_bytes: 1154
num_examples: 1
download_size: 0
dataset_size: 5896.1
---
# Dataset Card for "natural_language"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EagleConsortium/MrEagle-2126 | ---
license: wtfpl
---
A dataset of 2,126 1-turn conversations artificially generated using GPT-4, designed to fit the tone of the Discord bot Mr. Eagle.
This dataset was used to train MrEagle-LoRA.
|
TheAIchemist13/hindi_asr_dataset_accent | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 60408191.0
num_examples: 175
- name: test
num_bytes: 3850439.0
num_examples: 5
download_size: 59683824
dataset_size: 64258630.0
---
# Dataset Card for "hindi_asr_dataset_accent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amintalukder/emotion_bn | ---
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: ID
dtype: int64
- name: Data
dtype: string
- name: Love
dtype: int64
- name: Joy
dtype: int64
- name: Surprise
dtype: int64
- name: Anger
dtype: int64
- name: Sadness
dtype: int64
- name: Fear
dtype: int64
- name: Topic
dtype: string
- name: Domain
dtype: string
- name: is_admin
dtype: bool
splits:
- name: val
num_bytes: 503282
num_examples: 2047
- name: test
num_bytes: 545033
num_examples: 2272
- name: train
num_bytes: 4408992
num_examples: 18420
download_size: 1882715
dataset_size: 5457307
---
# Dataset Card for "emotion_bn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mikewang/vaw | ---
pretty_name: 'Visual Attributes in the Wild (VAW)'
language:
- en
---
# Dataset Card for Visual Attributes in the Wild (VAW)
## Dataset Description
**Homepage:** http://vawdataset.com/
**Repository:** https://github.com/adobe-research/vaw_dataset;
- The raw dataset files will be downloaded from: https://github.com/adobe-research/vaw_dataset/tree/main/data, where one can also find additional metadata files such as attribute types.
- The train split of this Hugging Face dataset is a concatenation of train_part1.json and train_part2.json.
- The image_id field corresponds to respective image ids in the v1.4 Visual Genome dataset.
**LICENSE:** https://github.com/adobe-research/vaw_dataset/blob/main/LICENSE.md
**Paper Citation:**
```
@InProceedings{Pham_2021_CVPR,
author = {Pham, Khoi and Kafle, Kushal and Lin, Zhe and Ding, Zhihong and Cohen, Scott and Tran, Quan and Shrivastava, Abhinav},
title = {Learning To Predict Visual Attributes in the Wild},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {13018-13028}
}
```
## Dataset Summary
A large-scale visual attributes dataset with explicitly labelled positive and negative attributes.
- 620 unique attributes, including color, shape, texture, posture, and many others
- 260,895 instances of different objects
- 2,260 unique objects observed in the wild
- 72,274 images from the Visual Genome dataset
- 4 evaluation metrics for measuring multi-faceted performance |
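Each annotated instance carries explicit positive and negative attribute lists, so filtering by attribute is straightforward. The sketch below uses hypothetical records; the field names follow the schema documented in the GitHub repository, but verify them against the actual annotation files.

```python
# Hypothetical VAW-style records (field names assumed from the repo's docs).
records = [
    {"image_id": "2345678", "object_name": "cat",
     "positive_attributes": ["black", "furry"], "negative_attributes": ["white"]},
    {"image_id": "2345679", "object_name": "car",
     "positive_attributes": ["red"], "negative_attributes": ["black"]},
]

def instances_with_attribute(records, attribute):
    """Return records explicitly labelled positive for the given attribute."""
    return [r for r in records if attribute in r["positive_attributes"]]

print([r["object_name"] for r in instances_with_attribute(records, "black")])  # ['cat']
```

Note that an attribute absent from both lists is unlabelled, not negative: the explicit `negative_attributes` list is what distinguishes VAW from datasets with only positive labels.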
hiepdaoquang704/test | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245925
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
saberai/RedPajama_OpenHermes | ---
license: apache-2.0
---
|
renumics/cifar100-outlier | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-80-Million-Tiny-Images
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-100
pretty_name: Cifar100
dataset_info:
features:
- name: img
dtype: image
- name: fine_label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': cra
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: coarse_label
dtype:
class_label:
names:
'0': aquatic_mammals
'1': fish
'2': flowers
'3': food_containers
'4': fruit_and_vegetables
'5': household_electrical_devices
'6': household_furniture
'7': insects
'8': large_carnivores
'9': large_man-made_outdoor_things
'10': large_natural_outdoor_scenes
'11': large_omnivores_and_herbivores
'12': medium_mammals
'13': non-insect_invertebrates
'14': people
'15': reptiles
'16': small_mammals
'17': trees
'18': vehicles_1
'19': vehicles_2
- name: embedding_foundation
sequence: float32
- name: embedding_ft
sequence: float32
- name: outlier_score_ft
dtype: float64
- name: outlier_score_foundation
dtype: float64
- name: nn_image
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
splits:
- name: train
num_bytes: 583557742.0
num_examples: 50000
download_size: 643988234
dataset_size: 583557742.0
---
# Dataset Card for "cifar100-outlier"
📚 This dataset is an enriched version of the [CIFAR-100 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
The workflow is described in the medium article: [Changes of Embeddings during Fine-Tuning of Transformers](https://medium.com/@markus.stoll/changes-of-embeddings-during-fine-tuning-c22aa1615921).
## Explore the Dataset
The open-source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) allows you to explore this dataset. You can find a Hugging Face Space running Spotlight with this dataset here: <https://huggingface.co/spaces/renumics/cifar100-outlier>.

Or you can explore it locally:
```python
# pip install renumics-spotlight datasets
from renumics import spotlight
import datasets
ds = datasets.load_dataset("renumics/cifar100-outlier", split="train")
df = ds.rename_columns({"img": "image", "fine_label": "labels"}).to_pandas()
df["label_str"] = df["labels"].apply(lambda x: ds.features["fine_label"].int2str(x))
dtypes = {
"nn_image": spotlight.Image,
"image": spotlight.Image,
"embedding_ft": spotlight.Embedding,
"embedding_foundation": spotlight.Embedding,
}
spotlight.show(
df,
dtype=dtypes,
layout="https://spotlight.renumics.com/resources/layout_pre_post_ft.json",
)
``` |
mj96/subject_lionel_messi_resized | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3698784.0
num_examples: 14
download_size: 3700180
dataset_size: 3698784.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_0_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 612917
num_examples: 825
download_size: 278480
dataset_size: 612917
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python-code-instructions-18k-alpaca-standardized_cluster_0_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Langame/conversation-starters | ---
dataset_info:
features:
- name: topics
sequence: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2079285
num_examples: 17470
download_size: 966258
dataset_size: 2079285
---
# Dataset Card for "conversation-starters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
datahrvoje/twitter_dataset_1713151023 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 19645
num_examples: 45
download_size: 11163
dataset_size: 19645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/a4d60c08 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1338
dataset_size: 180
---
# Dataset Card for "a4d60c08"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/kar98k_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kar98k/Kar98k/Kar98k (Girls' Frontline)
This is the dataset of Kar98k (Girls' Frontline), containing 388 images and their tags.
The core tags of this character are `long_hair, red_eyes, white_hair, breasts, very_long_hair, bangs, hair_between_eyes, hat, peaked_cap, large_breasts, black_headwear`; these are pruned in this dataset.
Images are crawled from many sites (e.g., Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
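Pruning the core tags from a per-image tag list amounts to a simple set filter. This is only an illustrative sketch (the actual pruning was done by the crawling pipeline); `CORE_TAGS` below holds just a subset of the core tags listed above.

```python
# Subset of the character's core tags listed above (assumption for illustration).
CORE_TAGS = {"long_hair", "red_eyes", "white_hair"}

def prune_core_tags(tags):
    """Drop character-defining core tags, keeping outfit/pose tags."""
    return [t for t in tags if t not in CORE_TAGS]

print(prune_core_tags(["long_hair", "smile", "red_eyes", "coat"]))  # ['smile', 'coat']
```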
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 388 | 612.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kar98k_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 388 | 310.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kar98k_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 974 | 671.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kar98k_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 388 | 532.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kar98k_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 974 | 998.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kar98k_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kar98k_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, looking_at_viewer, solo, closed_mouth, fur_trim, iron_cross, black_choker, upper_body, cleavage, coat, cross_choker, simple_background, smile, white_background, long_sleeves, medium_breasts, belt, collarbone, jacket |
| 1 | 10 |  |  |  |  |  | 1girl, black_footwear, bolt_action, fur_trim, holding_gun, iron_cross, looking_at_viewer, solo, thigh_boots, thighhighs, cleavage, jacket_on_shoulders, aiguillette, belt, coat, dress, simple_background, white_background, cross_choker, closed_mouth, long_sleeves, uniform, armband, black_choker, full_body, medium_breasts |
| 2 | 8 |  |  |  |  |  | 1girl, cleavage, iron_cross, looking_at_viewer, solo, thighhighs, choker, fur_trim, thigh_boots, blush, belt, medium_breasts, black_footwear, smile |
| 3 | 17 |  |  |  |  |  | cleavage, white_dress, official_alternate_costume, 1girl, looking_at_viewer, wedding_dress, bridal_veil, bare_shoulders, solo, choker, closed_mouth, rose, collarbone, tiara, white_background, petals, simple_background, smile, holding_bouquet, blush, red_flower, ribbon, cross_necklace, off-shoulder_dress, wedding_ring |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | closed_mouth | fur_trim | iron_cross | black_choker | upper_body | cleavage | coat | cross_choker | simple_background | smile | white_background | long_sleeves | medium_breasts | belt | collarbone | jacket | black_footwear | bolt_action | holding_gun | thigh_boots | thighhighs | jacket_on_shoulders | aiguillette | dress | uniform | armband | full_body | choker | blush | white_dress | official_alternate_costume | wedding_dress | bridal_veil | bare_shoulders | rose | tiara | petals | holding_bouquet | red_flower | ribbon | cross_necklace | off-shoulder_dress | wedding_ring |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:---------------|:-----------|:-------------|:---------------|:-------------|:-----------|:-------|:---------------|:--------------------|:--------|:-------------------|:---------------|:-----------------|:-------|:-------------|:---------|:-----------------|:--------------|:--------------|:--------------|:-------------|:----------------------|:--------------|:--------|:----------|:----------|:------------|:---------|:--------|:--------------|:-----------------------------|:----------------|:--------------|:-----------------|:-------|:--------|:---------|:------------------|:-------------|:---------|:-----------------|:---------------------|:---------------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | | X | X | X | X | | X | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | | X | X | | | X | | | | X | | | X | X | | | X | | | X | X | | | | | | | X | X | | | | | | | | | | | | | | |
| 3 | 17 |  |  |  |  |  | X | X | X | X | | | | | X | | | X | X | X | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Ediudo/colmanetti | ---
license: openrail
---
|
result-kand2-sdxl-wuerst-karlo/e395fcfb | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 152
num_examples: 10
download_size: 1308
dataset_size: 152
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e395fcfb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
izzy-lazerson/rakeffet | ---
pretty_name: Rakeffet
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
---
# Dataset Card for Rakeffet
|
maximalmargin/mitchell | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 20892005.0
num_examples: 11
download_size: 20893901
dataset_size: 20892005.0
---
# Dataset Card for "mitchell"
11 of Joan Mitchell's works with descriptions (image-text pairs).
Texts are from the Collection in the [Foundation Louis Vuitton](https://www.fondationlouisvuitton.fr/en/collection/artists/joan-mitchell).
Images are from the [Joan Mitchell Foundation](https://www.joanmitchellfoundation.org/joan-mitchell/artwork). |
SUSTech/mt_bench_ppl_small | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: category
dtype: string
- name: turn
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reference
sequence: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: finished
dtype: bool
- name: score
dtype: float64
splits:
- name: train
num_bytes: 192360
num_examples: 80
download_size: 95096
dataset_size: 192360
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
modelloosrvcc/Hazu | ---
license: openrail
---
|
Joshua8966/blog-writer_training-data-v30-8-2023 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: article
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 72881118
num_examples: 12174
download_size: 46279297
dataset_size: 72881118
---
# Dataset Card for "blog-writer_training-data-v30-8-2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mahdibaghbanzadeh/GUE_EMP_H3K79me3 | ---
dataset_info:
features:
- name: sequence
dtype: string
- name: labels
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 11811087
num_examples: 23069
- name: val
num_bytes: 1476608
num_examples: 2884
- name: test
num_bytes: 1476608
num_examples: 2884
download_size: 6963928
dataset_size: 14764303
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
hezarai/persian-license-plate-v1 | ---
task_categories:
- image-to-text
language:
- fa
pretty_name: PersianLicensePlate
---
> The dataset is downloaded from [here](https://ceit.aut.ac.ir/~keyvanrad/download/ML971/project/), provided by Amirkabir University of Technology.
> The data was then labeled by the authors.
> Experimental results show that the fine-tuned model works well on Persian license plates.
|
autoevaluate/autoeval-eval-samsum-samsum-8c5714-39885103812 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: 0ys/mt5-small-finetuned-amazon-en-es
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: 0ys/mt5-small-finetuned-amazon-en-es
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@raviteja2](https://huggingface.co/raviteja2) for evaluating this model. |
open-llm-leaderboard/details_Xenon1__Xenon-4 | ---
pretty_name: Evaluation run of Xenon1/Xenon-4
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Xenon1/Xenon-4](https://huggingface.co/Xenon1/Xenon-4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Xenon1__Xenon-4\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-04T06:47:30.573744](https://huggingface.co/datasets/open-llm-leaderboard/details_Xenon1__Xenon-4/blob/main/results_2024-02-04T06-47-30.573744.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5969673597339272,\n\
\ \"acc_stderr\": 0.0332252879660183,\n \"acc_norm\": 0.6047382841391643,\n\
\ \"acc_norm_stderr\": 0.033940427206963365,\n \"mc1\": 0.4283965728274174,\n\
\ \"mc1_stderr\": 0.017323088597314754,\n \"mc2\": 0.613129800259979,\n\
\ \"mc2_stderr\": 0.016329535721420842\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5435153583617748,\n \"acc_stderr\": 0.014555949760496442,\n\
\ \"acc_norm\": 0.6015358361774744,\n \"acc_norm_stderr\": 0.014306946052735569\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6468830910177256,\n\
\ \"acc_stderr\": 0.004769618829196511,\n \"acc_norm\": 0.8307110137422824,\n\
\ \"acc_norm_stderr\": 0.0037424055874098806\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n\
\ \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n\
\ \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6381578947368421,\n \"acc_stderr\": 0.03910525752849724,\n\
\ \"acc_norm\": 0.6381578947368421,\n \"acc_norm_stderr\": 0.03910525752849724\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.49,\n\
\ \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.028815615713432115,\n\
\ \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.028815615713432115\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n\
\ \"acc_stderr\": 0.038990736873573344,\n \"acc_norm\": 0.6805555555555556,\n\
\ \"acc_norm_stderr\": 0.038990736873573344\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939098,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939098\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5606936416184971,\n\
\ \"acc_stderr\": 0.03784271932887468,\n \"acc_norm\": 0.5606936416184971,\n\
\ \"acc_norm_stderr\": 0.03784271932887468\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.502127659574468,\n \"acc_stderr\": 0.03268572658667492,\n\
\ \"acc_norm\": 0.502127659574468,\n \"acc_norm_stderr\": 0.03268572658667492\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6275862068965518,\n \"acc_stderr\": 0.04028731532947559,\n\
\ \"acc_norm\": 0.6275862068965518,\n \"acc_norm_stderr\": 0.04028731532947559\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3862433862433862,\n \"acc_stderr\": 0.025075981767601677,\n \"\
acc_norm\": 0.3862433862433862,\n \"acc_norm_stderr\": 0.025075981767601677\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04426266681379909,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6580645161290323,\n\
\ \"acc_stderr\": 0.026985289576552742,\n \"acc_norm\": 0.6580645161290323,\n\
\ \"acc_norm_stderr\": 0.026985289576552742\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.03514528562175008,\n\
\ \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.03514528562175008\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\"\
: 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.703030303030303,\n \"acc_stderr\": 0.0356796977226805,\n\
\ \"acc_norm\": 0.703030303030303,\n \"acc_norm_stderr\": 0.0356796977226805\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7474747474747475,\n \"acc_stderr\": 0.03095405547036589,\n \"\
acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.03095405547036589\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.026499057701397457,\n\
\ \"acc_norm\": 0.8393782383419689,\n \"acc_norm_stderr\": 0.026499057701397457\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6,\n \"acc_stderr\": 0.02483881198803316,\n \"acc_norm\"\
: 0.6,\n \"acc_norm_stderr\": 0.02483881198803316\n },\n \"harness|hendrycksTest-high_school_mathematics|5\"\
: {\n \"acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n\
\ \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.031041941304059278,\n\
\ \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.031041941304059278\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.03802039760107903,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.03802039760107903\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7798165137614679,\n \"acc_stderr\": 0.01776597865232753,\n \"\
acc_norm\": 0.7798165137614679,\n \"acc_norm_stderr\": 0.01776597865232753\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4351851851851852,\n \"acc_stderr\": 0.03381200005643525,\n \"\
acc_norm\": 0.4351851851851852,\n \"acc_norm_stderr\": 0.03381200005643525\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849316,\n \"\
acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849316\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.729957805907173,\n \"acc_stderr\": 0.028900721906293433,\n \
\ \"acc_norm\": 0.729957805907173,\n \"acc_norm_stderr\": 0.028900721906293433\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6233183856502242,\n\
\ \"acc_stderr\": 0.032521134899291884,\n \"acc_norm\": 0.6233183856502242,\n\
\ \"acc_norm_stderr\": 0.032521134899291884\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6717557251908397,\n \"acc_stderr\": 0.041184385658062976,\n\
\ \"acc_norm\": 0.6717557251908397,\n \"acc_norm_stderr\": 0.041184385658062976\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n\
\ \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n\
\ \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6932515337423313,\n \"acc_stderr\": 0.03623089915724146,\n\
\ \"acc_norm\": 0.6932515337423313,\n \"acc_norm_stderr\": 0.03623089915724146\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.046840993210771065,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.046840993210771065\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n\
\ \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n\
\ \"acc_stderr\": 0.023365051491753715,\n \"acc_norm\": 0.8504273504273504,\n\
\ \"acc_norm_stderr\": 0.023365051491753715\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768077,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768077\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7841634738186463,\n\
\ \"acc_stderr\": 0.014711684386139963,\n \"acc_norm\": 0.7841634738186463,\n\
\ \"acc_norm_stderr\": 0.014711684386139963\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.025070713719153183,\n\
\ \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.025070713719153183\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.25921787709497207,\n\
\ \"acc_stderr\": 0.01465578083749774,\n \"acc_norm\": 0.25921787709497207,\n\
\ \"acc_norm_stderr\": 0.01465578083749774\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.02664327847450875,\n\
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.02664327847450875\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n\
\ \"acc_stderr\": 0.02631185807185416,\n \"acc_norm\": 0.6881028938906752,\n\
\ \"acc_norm_stderr\": 0.02631185807185416\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6975308641975309,\n \"acc_stderr\": 0.02555765398186805,\n\
\ \"acc_norm\": 0.6975308641975309,\n \"acc_norm_stderr\": 0.02555765398186805\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.425531914893617,\n \"acc_stderr\": 0.029494827600144373,\n \
\ \"acc_norm\": 0.425531914893617,\n \"acc_norm_stderr\": 0.029494827600144373\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4132985658409387,\n\
\ \"acc_stderr\": 0.012576779494860083,\n \"acc_norm\": 0.4132985658409387,\n\
\ \"acc_norm_stderr\": 0.012576779494860083\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n\
\ \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6111111111111112,\n \"acc_stderr\": 0.019722058939618068,\n \
\ \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.019722058939618068\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6775510204081633,\n \"acc_stderr\": 0.029923100563683906,\n\
\ \"acc_norm\": 0.6775510204081633,\n \"acc_norm_stderr\": 0.029923100563683906\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7910447761194029,\n\
\ \"acc_stderr\": 0.028748298931728655,\n \"acc_norm\": 0.7910447761194029,\n\
\ \"acc_norm_stderr\": 0.028748298931728655\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n\
\ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n\
\ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4283965728274174,\n\
\ \"mc1_stderr\": 0.017323088597314754,\n \"mc2\": 0.613129800259979,\n\
\ \"mc2_stderr\": 0.016329535721420842\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7703235990528808,\n \"acc_stderr\": 0.011821645601838232\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.20697498104624715,\n \
\ \"acc_stderr\": 0.011159498164891772\n }\n}\n```"
repo_url: https://huggingface.co/Xenon1/Xenon-4
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|arc:challenge|25_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|gsm8k|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hellaswag|10_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T06-47-30.573744.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-04T06-47-30.573744.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- '**/details_harness|winogrande|5_2024-02-04T06-47-30.573744.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-04T06-47-30.573744.parquet'
- config_name: results
data_files:
- split: 2024_02_04T06_47_30.573744
path:
- results_2024-02-04T06-47-30.573744.parquet
- split: latest
path:
- results_2024-02-04T06-47-30.573744.parquet
---
# Dataset Card for Evaluation run of Xenon1/Xenon-4
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Xenon1/Xenon-4](https://huggingface.co/Xenon1/Xenon-4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Xenon1__Xenon-4",
"harness_winogrande_5",
split="train")
```
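The timestamped split names (e.g. `2024_02_04T06_47_30.573744`) can also be selected directly instead of `latest`. As a rough sketch, assuming the naming pattern seen in this card (date and time separated by `T`, with `_` standing in for `-` and `:`), a split name can be parsed back into a `datetime`, for example to pick the newest run programmatically. `parse_split_timestamp` is a hypothetical helper, not part of any library:

```python
from datetime import datetime

def parse_split_timestamp(split_name: str) -> datetime:
    # Split names like "2024_02_04T06_47_30.573744" encode the run timestamp,
    # with "_" standing in for "-" in the date and ":" in the time
    # (an assumption based on the split names listed in this card).
    date_part, time_part = split_name.split("T")
    iso = date_part.replace("_", "-") + "T" + time_part.replace("_", ":")
    return datetime.fromisoformat(iso)

# With several runs, the newest split could be chosen without relying on "latest":
splits = ["2024_02_04T06_47_30.573744"]
newest = max(splits, key=parse_split_timestamp)
```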
## Latest results
These are the [latest results from run 2024-02-04T06:47:30.573744](https://huggingface.co/datasets/open-llm-leaderboard/details_Xenon1__Xenon-4/blob/main/results_2024-02-04T06-47-30.573744.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5969673597339272,
"acc_stderr": 0.0332252879660183,
"acc_norm": 0.6047382841391643,
"acc_norm_stderr": 0.033940427206963365,
"mc1": 0.4283965728274174,
"mc1_stderr": 0.017323088597314754,
"mc2": 0.613129800259979,
"mc2_stderr": 0.016329535721420842
},
"harness|arc:challenge|25": {
"acc": 0.5435153583617748,
"acc_stderr": 0.014555949760496442,
"acc_norm": 0.6015358361774744,
"acc_norm_stderr": 0.014306946052735569
},
"harness|hellaswag|10": {
"acc": 0.6468830910177256,
"acc_stderr": 0.004769618829196511,
"acc_norm": 0.8307110137422824,
"acc_norm_stderr": 0.0037424055874098806
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.04256193767901408,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.04256193767901408
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6381578947368421,
"acc_stderr": 0.03910525752849724,
"acc_norm": 0.6381578947368421,
"acc_norm_stderr": 0.03910525752849724
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.028815615713432115,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.028815615713432115
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.038990736873573344,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.038990736873573344
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939098,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939098
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5606936416184971,
"acc_stderr": 0.03784271932887468,
"acc_norm": 0.5606936416184971,
"acc_norm_stderr": 0.03784271932887468
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.502127659574468,
"acc_stderr": 0.03268572658667492,
"acc_norm": 0.502127659574468,
"acc_norm_stderr": 0.03268572658667492
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6275862068965518,
"acc_stderr": 0.04028731532947559,
"acc_norm": 0.6275862068965518,
"acc_norm_stderr": 0.04028731532947559
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3862433862433862,
"acc_stderr": 0.025075981767601677,
"acc_norm": 0.3862433862433862,
"acc_norm_stderr": 0.025075981767601677
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6580645161290323,
"acc_stderr": 0.026985289576552742,
"acc_norm": 0.6580645161290323,
"acc_norm_stderr": 0.026985289576552742
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.03514528562175008,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.03514528562175008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.703030303030303,
"acc_stderr": 0.0356796977226805,
"acc_norm": 0.703030303030303,
"acc_norm_stderr": 0.0356796977226805
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7474747474747475,
"acc_stderr": 0.03095405547036589,
"acc_norm": 0.7474747474747475,
"acc_norm_stderr": 0.03095405547036589
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.026499057701397457,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.026499057701397457
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6,
"acc_stderr": 0.02483881198803316,
"acc_norm": 0.6,
"acc_norm_stderr": 0.02483881198803316
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.027634907264178544,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.027634907264178544
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.031041941304059278,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.031041941304059278
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.03802039760107903,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.03802039760107903
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7798165137614679,
"acc_stderr": 0.01776597865232753,
"acc_norm": 0.7798165137614679,
"acc_norm_stderr": 0.01776597865232753
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4351851851851852,
"acc_stderr": 0.03381200005643525,
"acc_norm": 0.4351851851851852,
"acc_norm_stderr": 0.03381200005643525
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849316,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.729957805907173,
"acc_stderr": 0.028900721906293433,
"acc_norm": 0.729957805907173,
"acc_norm_stderr": 0.028900721906293433
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6233183856502242,
"acc_stderr": 0.032521134899291884,
"acc_norm": 0.6233183856502242,
"acc_norm_stderr": 0.032521134899291884
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6717557251908397,
"acc_stderr": 0.041184385658062976,
"acc_norm": 0.6717557251908397,
"acc_norm_stderr": 0.041184385658062976
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6932515337423313,
"acc_stderr": 0.03623089915724146,
"acc_norm": 0.6932515337423313,
"acc_norm_stderr": 0.03623089915724146
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.046840993210771065,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.046840993210771065
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8504273504273504,
"acc_stderr": 0.023365051491753715,
"acc_norm": 0.8504273504273504,
"acc_norm_stderr": 0.023365051491753715
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768077,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768077
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7841634738186463,
"acc_stderr": 0.014711684386139963,
"acc_norm": 0.7841634738186463,
"acc_norm_stderr": 0.014711684386139963
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.025070713719153183,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.025070713719153183
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.25921787709497207,
"acc_stderr": 0.01465578083749774,
"acc_norm": 0.25921787709497207,
"acc_norm_stderr": 0.01465578083749774
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.02664327847450875,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.02664327847450875
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6881028938906752,
"acc_stderr": 0.02631185807185416,
"acc_norm": 0.6881028938906752,
"acc_norm_stderr": 0.02631185807185416
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6975308641975309,
"acc_stderr": 0.02555765398186805,
"acc_norm": 0.6975308641975309,
"acc_norm_stderr": 0.02555765398186805
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.425531914893617,
"acc_stderr": 0.029494827600144373,
"acc_norm": 0.425531914893617,
"acc_norm_stderr": 0.029494827600144373
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4132985658409387,
"acc_stderr": 0.012576779494860083,
"acc_norm": 0.4132985658409387,
"acc_norm_stderr": 0.012576779494860083
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.019722058939618068,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.019722058939618068
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6775510204081633,
"acc_stderr": 0.029923100563683906,
"acc_norm": 0.6775510204081633,
"acc_norm_stderr": 0.029923100563683906
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7910447761194029,
"acc_stderr": 0.028748298931728655,
"acc_norm": 0.7910447761194029,
"acc_norm_stderr": 0.028748298931728655
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4283965728274174,
"mc1_stderr": 0.017323088597314754,
"mc2": 0.613129800259979,
"mc2_stderr": 0.016329535721420842
},
"harness|winogrande|5": {
"acc": 0.7703235990528808,
"acc_stderr": 0.011821645601838232
},
"harness|gsm8k|5": {
"acc": 0.20697498104624715,
"acc_stderr": 0.011159498164891772
}
}
```
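Once loaded, per-task entries like those above are plain nested dictionaries. As a small sketch, tasks can be compared by normalized accuracy — the dict literal copies two entries from the results JSON above, and the ranking logic is illustrative, not part of the leaderboard tooling:

```python
# Two per-task entries copied from the results JSON above.
results = {
    "harness|arc:challenge|25": {"acc": 0.5435153583617748, "acc_norm": 0.6015358361774744},
    "harness|hellaswag|10": {"acc": 0.6468830910177256, "acc_norm": 0.8307110137422824},
}

# Rank tasks from highest to lowest normalized accuracy.
ranked = sorted(results, key=lambda task: results[task]["acc_norm"], reverse=True)
```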
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
DamarJati/NSFW-filter-DecentScan | ---
task_categories:
- image-classification
license: openrail
tags:
- not-for-all-audiences
- NSFW
pretty_name: Decent Scan
size_categories:
- 1K<n<10K
inference: true
--- |
distilled-from-one-sec-cv12/chunk_120 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1440362516
num_examples: 280663
download_size: 1472792189
dataset_size: 1440362516
---
# Dataset Card for "chunk_120"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
do11/test2 | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for test2
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file named `argilla.cfg`, conforming to the Argilla dataset format. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("do11/test2")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("do11/test2")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| category | Task category | TextField | True | False |
| instruction | Instruction | TextField | True | False |
| context | Input | TextField | True | False |
| response | Response | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| new-instruction | Final instruction: | TextQuestion | True | Write the final version of the instruction, making sure that it matches the task category. If the original instruction is ok, copy and paste it here. | N/A |
| new-input | Final input: | TextQuestion | True | Write the final version of the input, making sure that it makes sense with the task category. If the original input is ok, copy and paste it here. If an input is not needed, leave this empty. | N/A |
| new-response | Final response: | TextQuestion | True | Write the final version of the response, making sure that it matches the task category and makes sense for the instruction (and input) provided. If the original response is ok, copy and paste it here. | N/A |
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "11",
"fields": {
"category": "closed_qa",
"context": "Van Zyl joined the Eastern Province Kings Academy, where he played for the Eastern Province U19 side in the 2010 Under-19 Provincial Championship. He was a key player for the Eastern Province U21 side in the 2012 Under-21 Provincial Championship, scoring 71 points in eight appearances. Van Zyl was under the Top SARU Performers, scoring the most tries at 6 in the 2012 Provincial Under 21 in the Rugby Junior Provincials.\n\nThis included a record and a remarkable personal haul in their opening match, when he scored 36 of his team\u0027s points in a 61\u20133 victory over Boland U21, consisting of four tries and eight conversions and was awarded Man of the Match.",
"instruction": "Who was Kyle Van Zyl playing against when he scored 36 of hisa teams 61 points?",
"response": "Kyle Van Zyl was playing against Boland U21 when he scored 36 points, leading his team to victory in a 61-3 win."
},
"metadata": null,
"responses": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"category": "closed_qa",
"context": "Van Zyl joined the Eastern Province Kings Academy, where he played for the Eastern Province U19 side in the 2010 Under-19 Provincial Championship. He was a key player for the Eastern Province U21 side in the 2012 Under-21 Provincial Championship, scoring 71 points in eight appearances. Van Zyl was under the Top SARU Performers, scoring the most tries at 6 in the 2012 Provincial Under 21 in the Rugby Junior Provincials.\n\nThis included a record and a remarkable personal haul in their opening match, when he scored 36 of his team\u0027s points in a 61\u20133 victory over Boland U21, consisting of four tries and eight conversions and was awarded Man of the Match.",
"external_id": "11",
"instruction": "Who was Kyle Van Zyl playing against when he scored 36 of hisa teams 61 points?",
"metadata": null,
"new-input": null,
"new-instruction": null,
"new-response": null,
"response": "Kyle Van Zyl was playing against Boland U21 when he scored 36 points, leading his team to victory in a 61-3 win."
}
```
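Since unannotated rows carry `null` in the `new-*` answer columns (as in the record above), pending records can be filtered out before analysis. The sketch below uses the field names from this card; `is_unannotated` is a hypothetical helper, not part of Argilla or `datasets`:

```python
# Record shape taken from the `datasets`-format example above (abridged).
record = {
    "external_id": "11",
    "category": "closed_qa",
    "new-instruction": None,
    "new-input": None,
    "new-response": None,
}

def is_unannotated(row: dict) -> bool:
    # A row still needs curation if none of the "new-*" answer columns are filled.
    return all(row.get(key) is None for key in ("new-instruction", "new-input", "new-response"))
```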
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **category** is of type `TextField`.
* **instruction** is of type `TextField`.
* (optional) **context** is of type `TextField`.
* **response** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
* **new-instruction** is of type `TextQuestion`, and description "Write the final version of the instruction, making sure that it matches the task category. If the original instruction is ok, copy and paste it here.".
* (optional) **new-input** is of type `TextQuestion`, and description "Write the final version of the input, making sure that it makes sense with the task category. If the original input is ok, copy and paste it here. If an input is not needed, leave this empty.".
* **new-response** is of type `TextQuestion`, and description "Write the final version of the response, making sure that it matches the task category and makes sense for the instruction (and input) provided. If the original response is ok, copy and paste it here.".
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
In this dataset, you will find a collection of records that show a category, an instruction, an input and a response to that instruction. The aim of the project is to correct the instructions, inputs and responses to make sure they are of the highest quality and that they match the task category that they belong to. All three texts should be clear and include real information. In addition, the response should be as complete yet concise as possible.
To curate the dataset, you will need to provide an answer to the following text fields:
1 - Final instruction:
The final version of the instruction field. You may copy it using the copy icon in the instruction field. Leave it as it is if it's ok or apply any necessary corrections. Remember to change the instruction if it doesn't represent well the task category of the record.
2 - Final input:
The final version of the input field. You may copy it using the copy icon in the input field. Leave it as it is if it's ok or apply any necessary corrections. If the task category and instruction don't need an input to be completed, leave this question blank.
3 - Final response:
The final version of the response field. You may copy it using the copy icon in the response field. Leave it as it is if it's ok or apply any necessary corrections. Check that the response makes sense given all the fields above.
You will need to provide at least an instruction and a response for all records. If you are not sure about a record and you prefer not to provide a response, click Discard.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
falbanese/US_Trump_2020_social_media | ---
license: mit
---
|
open-llm-leaderboard/details_Azure99__blossom-v5-9b | ---
pretty_name: Evaluation run of Azure99/blossom-v5-9b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Azure99/blossom-v5-9b](https://huggingface.co/Azure99/blossom-v5-9b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Azure99__blossom-v5-9b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-21T15:37:03.039241](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v5-9b/blob/main/results_2024-03-21T15-37-03.039241.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6922699632968013,\n\
\ \"acc_stderr\": 0.03083140719752146,\n \"acc_norm\": 0.6983143460865201,\n\
\ \"acc_norm_stderr\": 0.03142412981235352,\n \"mc1\": 0.35862913096695226,\n\
\ \"mc1_stderr\": 0.016789289499502025,\n \"mc2\": 0.5278235105912508,\n\
\ \"mc2_stderr\": 0.015439131046987332\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5972696245733788,\n \"acc_stderr\": 0.014332236306790145,\n\
\ \"acc_norm\": 0.6245733788395904,\n \"acc_norm_stderr\": 0.014150631435111728\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5924118701453893,\n\
\ \"acc_stderr\": 0.004903815885983279,\n \"acc_norm\": 0.784106751643099,\n\
\ \"acc_norm_stderr\": 0.004105997149954855\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n\
\ \"acc_stderr\": 0.042561937679014075,\n \"acc_norm\": 0.5851851851851851,\n\
\ \"acc_norm_stderr\": 0.042561937679014075\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7828947368421053,\n \"acc_stderr\": 0.03355045304882924,\n\
\ \"acc_norm\": 0.7828947368421053,\n \"acc_norm_stderr\": 0.03355045304882924\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.77,\n\
\ \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \
\ \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7245283018867924,\n \"acc_stderr\": 0.027495663683724057,\n\
\ \"acc_norm\": 0.7245283018867924,\n \"acc_norm_stderr\": 0.027495663683724057\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n\
\ \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6994219653179191,\n\
\ \"acc_stderr\": 0.0349610148119118,\n \"acc_norm\": 0.6994219653179191,\n\
\ \"acc_norm_stderr\": 0.0349610148119118\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.042295258468165065,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.042295258468165065\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6978723404255319,\n \"acc_stderr\": 0.030017554471880557,\n\
\ \"acc_norm\": 0.6978723404255319,\n \"acc_norm_stderr\": 0.030017554471880557\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5701754385964912,\n\
\ \"acc_stderr\": 0.046570472605949646,\n \"acc_norm\": 0.5701754385964912,\n\
\ \"acc_norm_stderr\": 0.046570472605949646\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6896551724137931,\n \"acc_stderr\": 0.03855289616378949,\n\
\ \"acc_norm\": 0.6896551724137931,\n \"acc_norm_stderr\": 0.03855289616378949\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.5687830687830688,\n \"acc_stderr\": 0.025506481698138215,\n \"\
acc_norm\": 0.5687830687830688,\n \"acc_norm_stderr\": 0.025506481698138215\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.6111111111111112,\n\
\ \"acc_stderr\": 0.04360314860077459,\n \"acc_norm\": 0.6111111111111112,\n\
\ \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8387096774193549,\n \"acc_stderr\": 0.020923327006423294,\n \"\
acc_norm\": 0.8387096774193549,\n \"acc_norm_stderr\": 0.020923327006423294\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5517241379310345,\n \"acc_stderr\": 0.03499113137676744,\n \"\
acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.03499113137676744\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774707,\n \"acc_norm\"\
: 0.84,\n \"acc_norm_stderr\": 0.03684529491774707\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8303030303030303,\n \"acc_stderr\": 0.029311188674983106,\n\
\ \"acc_norm\": 0.8303030303030303,\n \"acc_norm_stderr\": 0.029311188674983106\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8787878787878788,\n \"acc_stderr\": 0.023253157951942084,\n \"\
acc_norm\": 0.8787878787878788,\n \"acc_norm_stderr\": 0.023253157951942084\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9222797927461139,\n \"acc_stderr\": 0.019321805557223147,\n\
\ \"acc_norm\": 0.9222797927461139,\n \"acc_norm_stderr\": 0.019321805557223147\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.764102564102564,\n \"acc_stderr\": 0.021525965407408726,\n \
\ \"acc_norm\": 0.764102564102564,\n \"acc_norm_stderr\": 0.021525965407408726\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.4666666666666667,\n \"acc_stderr\": 0.03041771696171748,\n \
\ \"acc_norm\": 0.4666666666666667,\n \"acc_norm_stderr\": 0.03041771696171748\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.024762902678057922,\n\
\ \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.024762902678057922\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.39072847682119205,\n \"acc_stderr\": 0.039837983066598075,\n \"\
acc_norm\": 0.39072847682119205,\n \"acc_norm_stderr\": 0.039837983066598075\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8715596330275229,\n \"acc_stderr\": 0.014344977542914318,\n \"\
acc_norm\": 0.8715596330275229,\n \"acc_norm_stderr\": 0.014344977542914318\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6527777777777778,\n \"acc_stderr\": 0.03246887243637649,\n \"\
acc_norm\": 0.6527777777777778,\n \"acc_norm_stderr\": 0.03246887243637649\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8725490196078431,\n \"acc_stderr\": 0.023405530480846315,\n \"\
acc_norm\": 0.8725490196078431,\n \"acc_norm_stderr\": 0.023405530480846315\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8270042194092827,\n \"acc_stderr\": 0.024621562866768424,\n \
\ \"acc_norm\": 0.8270042194092827,\n \"acc_norm_stderr\": 0.024621562866768424\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7219730941704036,\n\
\ \"acc_stderr\": 0.030069584874494043,\n \"acc_norm\": 0.7219730941704036,\n\
\ \"acc_norm_stderr\": 0.030069584874494043\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n\
\ \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.036401182719909456,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.036401182719909456\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.803680981595092,\n \"acc_stderr\": 0.031207970394709225,\n\
\ \"acc_norm\": 0.803680981595092,\n \"acc_norm_stderr\": 0.031207970394709225\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5625,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.5625,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.03675668832233188,\n\
\ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.03675668832233188\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n\
\ \"acc_stderr\": 0.020237149008990936,\n \"acc_norm\": 0.8931623931623932,\n\
\ \"acc_norm_stderr\": 0.020237149008990936\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8416347381864623,\n\
\ \"acc_stderr\": 0.013055346753516734,\n \"acc_norm\": 0.8416347381864623,\n\
\ \"acc_norm_stderr\": 0.013055346753516734\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7774566473988439,\n \"acc_stderr\": 0.02239421566194282,\n\
\ \"acc_norm\": 0.7774566473988439,\n \"acc_norm_stderr\": 0.02239421566194282\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4301675977653631,\n\
\ \"acc_stderr\": 0.01655860163604104,\n \"acc_norm\": 0.4301675977653631,\n\
\ \"acc_norm_stderr\": 0.01655860163604104\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.02440439492808787,\n\
\ \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.02440439492808787\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.752411575562701,\n\
\ \"acc_stderr\": 0.024513879973621967,\n \"acc_norm\": 0.752411575562701,\n\
\ \"acc_norm_stderr\": 0.024513879973621967\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.023132376234543336,\n\
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.023132376234543336\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.549645390070922,\n \"acc_stderr\": 0.02968010556502904,\n \
\ \"acc_norm\": 0.549645390070922,\n \"acc_norm_stderr\": 0.02968010556502904\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4817470664928292,\n\
\ \"acc_stderr\": 0.012761723960595472,\n \"acc_norm\": 0.4817470664928292,\n\
\ \"acc_norm_stderr\": 0.012761723960595472\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7279411764705882,\n \"acc_stderr\": 0.02703304115168146,\n\
\ \"acc_norm\": 0.7279411764705882,\n \"acc_norm_stderr\": 0.02703304115168146\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6944444444444444,\n \"acc_stderr\": 0.018635594034423976,\n \
\ \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.018635594034423976\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7272727272727273,\n\
\ \"acc_stderr\": 0.04265792110940589,\n \"acc_norm\": 0.7272727272727273,\n\
\ \"acc_norm_stderr\": 0.04265792110940589\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7795918367346939,\n \"acc_stderr\": 0.026537045312145298,\n\
\ \"acc_norm\": 0.7795918367346939,\n \"acc_norm_stderr\": 0.026537045312145298\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n\
\ \"acc_stderr\": 0.02519692987482705,\n \"acc_norm\": 0.8507462686567164,\n\
\ \"acc_norm_stderr\": 0.02519692987482705\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\
\ \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n\
\ \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.028782108105401705,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.028782108105401705\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.35862913096695226,\n\
\ \"mc1_stderr\": 0.016789289499502025,\n \"mc2\": 0.5278235105912508,\n\
\ \"mc2_stderr\": 0.015439131046987332\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7632202052091555,\n \"acc_stderr\": 0.011947592365207385\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4836997725549659,\n \
\ \"acc_stderr\": 0.013765164147036952\n }\n}\n```"
repo_url: https://huggingface.co/Azure99/blossom-v5-9b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|arc:challenge|25_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|gsm8k|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hellaswag|10_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T15-37-03.039241.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T15-37-03.039241.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- '**/details_harness|winogrande|5_2024-03-21T15-37-03.039241.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-21T15-37-03.039241.parquet'
- config_name: results
data_files:
- split: 2024_03_21T15_37_03.039241
path:
- results_2024-03-21T15-37-03.039241.parquet
- split: latest
path:
- results_2024-03-21T15-37-03.039241.parquet
---
# Dataset Card for Evaluation run of Azure99/blossom-v5-9b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Azure99/blossom-v5-9b](https://huggingface.co/Azure99/blossom-v5-9b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v5-9b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-21T15:37:03.039241](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v5-9b/blob/main/results_2024-03-21T15-37-03.039241.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6922699632968013,
"acc_stderr": 0.03083140719752146,
"acc_norm": 0.6983143460865201,
"acc_norm_stderr": 0.03142412981235352,
"mc1": 0.35862913096695226,
"mc1_stderr": 0.016789289499502025,
"mc2": 0.5278235105912508,
"mc2_stderr": 0.015439131046987332
},
"harness|arc:challenge|25": {
"acc": 0.5972696245733788,
"acc_stderr": 0.014332236306790145,
"acc_norm": 0.6245733788395904,
"acc_norm_stderr": 0.014150631435111728
},
"harness|hellaswag|10": {
"acc": 0.5924118701453893,
"acc_stderr": 0.004903815885983279,
"acc_norm": 0.784106751643099,
"acc_norm_stderr": 0.004105997149954855
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.042561937679014075,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.042561937679014075
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7828947368421053,
"acc_stderr": 0.03355045304882924,
"acc_norm": 0.7828947368421053,
"acc_norm_stderr": 0.03355045304882924
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7245283018867924,
"acc_stderr": 0.027495663683724057,
"acc_norm": 0.7245283018867924,
"acc_norm_stderr": 0.027495663683724057
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6994219653179191,
"acc_stderr": 0.0349610148119118,
"acc_norm": 0.6994219653179191,
"acc_norm_stderr": 0.0349610148119118
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6978723404255319,
"acc_stderr": 0.030017554471880557,
"acc_norm": 0.6978723404255319,
"acc_norm_stderr": 0.030017554471880557
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5701754385964912,
"acc_stderr": 0.046570472605949646,
"acc_norm": 0.5701754385964912,
"acc_norm_stderr": 0.046570472605949646
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6896551724137931,
"acc_stderr": 0.03855289616378949,
"acc_norm": 0.6896551724137931,
"acc_norm_stderr": 0.03855289616378949
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.5687830687830688,
"acc_stderr": 0.025506481698138215,
"acc_norm": 0.5687830687830688,
"acc_norm_stderr": 0.025506481698138215
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.04360314860077459,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.04360314860077459
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8387096774193549,
"acc_stderr": 0.020923327006423294,
"acc_norm": 0.8387096774193549,
"acc_norm_stderr": 0.020923327006423294
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.03499113137676744,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.03499113137676744
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774707,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774707
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8303030303030303,
"acc_stderr": 0.029311188674983106,
"acc_norm": 0.8303030303030303,
"acc_norm_stderr": 0.029311188674983106
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8787878787878788,
"acc_stderr": 0.023253157951942084,
"acc_norm": 0.8787878787878788,
"acc_norm_stderr": 0.023253157951942084
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9222797927461139,
"acc_stderr": 0.019321805557223147,
"acc_norm": 0.9222797927461139,
"acc_norm_stderr": 0.019321805557223147
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.764102564102564,
"acc_stderr": 0.021525965407408726,
"acc_norm": 0.764102564102564,
"acc_norm_stderr": 0.021525965407408726
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.03041771696171748,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.03041771696171748
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.024762902678057922,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.024762902678057922
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.39072847682119205,
"acc_stderr": 0.039837983066598075,
"acc_norm": 0.39072847682119205,
"acc_norm_stderr": 0.039837983066598075
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8715596330275229,
"acc_stderr": 0.014344977542914318,
"acc_norm": 0.8715596330275229,
"acc_norm_stderr": 0.014344977542914318
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6527777777777778,
"acc_stderr": 0.03246887243637649,
"acc_norm": 0.6527777777777778,
"acc_norm_stderr": 0.03246887243637649
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8725490196078431,
"acc_stderr": 0.023405530480846315,
"acc_norm": 0.8725490196078431,
"acc_norm_stderr": 0.023405530480846315
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8270042194092827,
"acc_stderr": 0.024621562866768424,
"acc_norm": 0.8270042194092827,
"acc_norm_stderr": 0.024621562866768424
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7219730941704036,
"acc_stderr": 0.030069584874494043,
"acc_norm": 0.7219730941704036,
"acc_norm_stderr": 0.030069584874494043
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.036401182719909456,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.036401182719909456
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.803680981595092,
"acc_stderr": 0.031207970394709225,
"acc_norm": 0.803680981595092,
"acc_norm_stderr": 0.031207970394709225
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5625,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.03675668832233188,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.03675668832233188
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8931623931623932,
"acc_stderr": 0.020237149008990936,
"acc_norm": 0.8931623931623932,
"acc_norm_stderr": 0.020237149008990936
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8416347381864623,
"acc_stderr": 0.013055346753516734,
"acc_norm": 0.8416347381864623,
"acc_norm_stderr": 0.013055346753516734
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7774566473988439,
"acc_stderr": 0.02239421566194282,
"acc_norm": 0.7774566473988439,
"acc_norm_stderr": 0.02239421566194282
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4301675977653631,
"acc_stderr": 0.01655860163604104,
"acc_norm": 0.4301675977653631,
"acc_norm_stderr": 0.01655860163604104
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.02440439492808787,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.02440439492808787
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.752411575562701,
"acc_stderr": 0.024513879973621967,
"acc_norm": 0.752411575562701,
"acc_norm_stderr": 0.024513879973621967
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.023132376234543336,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.023132376234543336
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.549645390070922,
"acc_stderr": 0.02968010556502904,
"acc_norm": 0.549645390070922,
"acc_norm_stderr": 0.02968010556502904
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4817470664928292,
"acc_stderr": 0.012761723960595472,
"acc_norm": 0.4817470664928292,
"acc_norm_stderr": 0.012761723960595472
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7279411764705882,
"acc_stderr": 0.02703304115168146,
"acc_norm": 0.7279411764705882,
"acc_norm_stderr": 0.02703304115168146
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.018635594034423976,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.018635594034423976
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04265792110940589,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04265792110940589
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7795918367346939,
"acc_stderr": 0.026537045312145298,
"acc_norm": 0.7795918367346939,
"acc_norm_stderr": 0.026537045312145298
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.02519692987482705,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.02519692987482705
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466125,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466125
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.028782108105401705,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.028782108105401705
},
"harness|truthfulqa:mc|0": {
"mc1": 0.35862913096695226,
"mc1_stderr": 0.016789289499502025,
"mc2": 0.5278235105912508,
"mc2_stderr": 0.015439131046987332
},
"harness|winogrande|5": {
"acc": 0.7632202052091555,
"acc_stderr": 0.011947592365207385
},
"harness|gsm8k|5": {
"acc": 0.4836997725549659,
"acc_stderr": 0.013765164147036952
}
}
```
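The top-level `"all"` entry aggregates the per-task scores. As a minimal sketch (not the leaderboard's exact aggregation code, and using only a hypothetical subset of the tasks above), the macro-average accuracy can be recomputed from the per-task entries like this:

```python
from statistics import mean

# Hypothetical subset of per-task results, in the same shape as the JSON above
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.34},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5851851851851851},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.7828947368421053},
}

# Macro-average: unweighted mean of the "acc" value over all tasks
avg_acc = mean(task["acc"] for task in results.values())
print(round(avg_acc, 4))
```

Note this is an unweighted mean over tasks, so small subjects count as much as large ones.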
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CyberHarem/natasha_cioara_honkai3 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of natasha_cioara (Houkai 3rd)
This is the dataset of natasha_cioara (Houkai 3rd), containing 115 images and their tags.
The core tags of this character are `bangs, mole, mole_under_mouth, breasts, short_hair, purple_eyes, red_eyes, black_hair, hair_between_eyes, grey_hair, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 115 | 208.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natasha_cioara_honkai3/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 115 | 99.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natasha_cioara_honkai3/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 277 | 214.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natasha_cioara_honkai3/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 115 | 174.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natasha_cioara_honkai3/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 277 | 335.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natasha_cioara_honkai3/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/natasha_cioara_honkai3',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 30 |  |  |  |  |  | 1girl, solo, black_bodysuit, looking_at_viewer, black_cape, hood, smile, closed_mouth, hair_over_one_eye, claws, simple_background |
| 1 | 7 |  |  |  |  |  | 1girl, closed_mouth, long_sleeves, solo, black_necktie, long_hair, looking_at_viewer, black_gloves, pantyhose, polo_shirt, smile, bartender, holding_weapon, ponytail, simple_background, single_glove, bird, black_footwear, green_shirt, holding_knife, thigh_boots, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | black_bodysuit | looking_at_viewer | black_cape | hood | smile | closed_mouth | hair_over_one_eye | claws | simple_background | long_sleeves | black_necktie | long_hair | black_gloves | pantyhose | polo_shirt | bartender | holding_weapon | ponytail | single_glove | bird | black_footwear | green_shirt | holding_knife | thigh_boots | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:--------------------|:-------------|:-------|:--------|:---------------|:--------------------|:--------|:--------------------|:---------------|:----------------|:------------|:---------------|:------------|:-------------|:------------|:-----------------|:-----------|:---------------|:-------|:-----------------|:--------------|:----------------|:--------------|:-------------|
| 0 | 30 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | | X | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
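The Tags column in each cluster row above is a single comma-separated string. A minimal stdlib sketch (function names are illustrative, not part of any package) for finding clusters that contain a given tag:

```python
def parse_tags(tag_string: str) -> set[str]:
    """Split a comma-separated tag string into a set of individual tags."""
    return {t.strip() for t in tag_string.split(",") if t.strip()}

def clusters_with_tag(clusters: dict[int, str], tag: str) -> list[int]:
    """Return the ids of clusters whose tag string contains the given tag."""
    return [cid for cid, tags in clusters.items() if tag in parse_tags(tags)]

if __name__ == "__main__":
    clusters = {
        0: "1girl, solo, black_bodysuit, looking_at_viewer",
        1: "1girl, solo, long_sleeves, ponytail",
    }
    print(clusters_with_tag(clusters, "ponytail"))  # [1]
```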
|
PocketDoc/Wizard-Vicuna-Refined | ---
task_categories:
- question-answering
- conversational
language:
- en
---
## Description:
This is a small subset of the Wizard-Vicuna dataset that has been normalized and rewritten into more consistent markdown formatting. |
zolak/twitter_dataset_78_1713082655 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: float64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3227267
num_examples: 7957
download_size: 1583083
dataset_size: 3227267
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tr416/test2_dataset_20231007_172035 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73851
dataset_size: 770400.0
---
# Dataset Card for "test2_dataset_20231007_172035"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/isonami_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of isonami/磯波 (Kantai Collection)
This is the dataset of isonami/磯波 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `black_hair, long_hair, braid, twin_braids, sidelocks, hair_between_eyes, black_eyes, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 334.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isonami_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 254.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isonami_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 981 | 480.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isonami_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 318.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isonami_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 981 | 581.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isonami_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/isonami_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some coherent outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, solo, white_shirt, looking_at_viewer, camera, blue_skirt, clothes_writing, t-shirt, cowboy_shot, pleated_skirt, smile, alternate_costume, upper_body |
| 1 | 19 |  |  |  |  |  | 1girl, pleated_skirt, serafuku, solo, blue_sailor_collar, blue_skirt, looking_at_viewer, white_background, simple_background, short_sleeves, sitting, black_socks |
| 2 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, serafuku, solo, blush, pleated_skirt, smile, hair_ribbon, twitter_username |
| 3 | 8 |  |  |  |  |  | 1girl, blue_dress, looking_at_viewer, solo, sun_hat, official_alternate_costume, white_shirt, short_sleeves, blush, brown_headwear, cloud, day, smile, upper_body, bag, blue_sky, outdoors |
| 4 | 17 |  |  |  |  |  | 1girl, white_gloves, black_headwear, solo, black_vest, blue_shirt, dress_shirt, looking_at_viewer, employee_uniform, kepi, simple_background, alternate_costume, armband, name_tag, shako_cap, short_sleeves, white_background, cowboy_shot, open_mouth, upper_body, pants, whistle |
| 5 | 8 |  |  |  |  |  | 1girl, solo, cowboy_shot, looking_at_viewer, white_background, simple_background, blush, smile, standing, bikini, black_one-piece_swimsuit, breasts, collarbone, navel, blue_one-piece_swimsuit, flat_chest, hair_ribbon, new_school_swimsuit |
| 6 | 6 |  |  |  |  |  | 1girl, black_leotard, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, wrist_cuffs, bowtie, breasts, looking_at_viewer, solo, strapless_leotard, simple_background, black_pantyhose, blush |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | white_shirt | looking_at_viewer | camera | blue_skirt | clothes_writing | t-shirt | cowboy_shot | pleated_skirt | smile | alternate_costume | upper_body | serafuku | blue_sailor_collar | white_background | simple_background | short_sleeves | sitting | black_socks | blush | hair_ribbon | twitter_username | blue_dress | sun_hat | official_alternate_costume | brown_headwear | cloud | day | bag | blue_sky | outdoors | white_gloves | black_headwear | black_vest | blue_shirt | dress_shirt | employee_uniform | kepi | armband | name_tag | shako_cap | open_mouth | pants | whistle | standing | bikini | black_one-piece_swimsuit | breasts | collarbone | navel | blue_one-piece_swimsuit | flat_chest | new_school_swimsuit | black_leotard | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | wrist_cuffs | bowtie | strapless_leotard | black_pantyhose |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------|:--------------------|:---------|:-------------|:------------------|:----------|:--------------|:----------------|:--------|:--------------------|:-------------|:-----------|:---------------------|:-------------------|:--------------------|:----------------|:----------|:--------------|:--------|:--------------|:-------------------|:-------------|:----------|:-----------------------------|:-----------------|:--------|:------|:------|:-----------|:-----------|:---------------|:-----------------|:-------------|:-------------|:--------------|:-------------------|:-------|:----------|:-----------|:------------|:-------------|:--------|:----------|:-----------|:---------|:---------------------------|:----------|:-------------|:--------|:--------------------------|:-------------|:----------------------|:----------------|:------------------|:-------------------|:----------------|:--------------|:--------------|:---------|:--------------------|:------------------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 19 |  |  |  |  |  | X | X | | X | | X | | | | X | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | | X | | | | | | X | X | | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | X | X | X | | | | | | | X | | X | | | | | X | | | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 17 |  |  |  |  |  | X | X | | X | | | | | X | | | X | X | | | X | X | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | X | | X | | | | | X | | X | | | | | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | X | | X | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X |
|
zolak/twitter_dataset_79_1713137825 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 244564
num_examples: 614
download_size: 127828
dataset_size: 244564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_lodrick-the-lafted__Winged-Lagomorph-2x13B | ---
pretty_name: Evaluation run of lodrick-the-lafted/Winged-Lagomorph-2x13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lodrick-the-lafted/Winged-Lagomorph-2x13B](https://huggingface.co/lodrick-the-lafted/Winged-Lagomorph-2x13B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lodrick-the-lafted__Winged-Lagomorph-2x13B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-17T18:07:01.785781](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Winged-Lagomorph-2x13B/blob/main/results_2024-01-17T18-07-01.785781.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.44701093892342236,\n\
\ \"acc_stderr\": 0.03445607638165826,\n \"acc_norm\": 0.44980402546373816,\n\
\ \"acc_norm_stderr\": 0.03519159952391745,\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4453666576267616,\n\
\ \"mc2_stderr\": 0.015036118833065276\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.44795221843003413,\n \"acc_stderr\": 0.01453201149821167,\n\
\ \"acc_norm\": 0.47952218430034127,\n \"acc_norm_stderr\": 0.014599131353035005\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5243975303724357,\n\
\ \"acc_stderr\": 0.004983837641502894,\n \"acc_norm\": 0.6938856801433977,\n\
\ \"acc_norm_stderr\": 0.004599358920909553\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.35555555555555557,\n\
\ \"acc_stderr\": 0.04135176749720386,\n \"acc_norm\": 0.35555555555555557,\n\
\ \"acc_norm_stderr\": 0.04135176749720386\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.42105263157894735,\n \"acc_stderr\": 0.040179012759817494,\n\
\ \"acc_norm\": 0.42105263157894735,\n \"acc_norm_stderr\": 0.040179012759817494\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.45,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.44150943396226416,\n \"acc_stderr\": 0.030561590426731837,\n \
\ \"acc_norm\": 0.44150943396226416,\n \"acc_norm_stderr\": 0.030561590426731837\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.375,\n\
\ \"acc_stderr\": 0.04048439222695598,\n \"acc_norm\": 0.375,\n \
\ \"acc_norm_stderr\": 0.04048439222695598\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n\
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3988439306358382,\n\
\ \"acc_stderr\": 0.037336266553835096,\n \"acc_norm\": 0.3988439306358382,\n\
\ \"acc_norm_stderr\": 0.037336266553835096\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.045766654032077636,\n\
\ \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.045766654032077636\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n\
\ \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.425531914893617,\n \"acc_stderr\": 0.032321469162244675,\n\
\ \"acc_norm\": 0.425531914893617,\n \"acc_norm_stderr\": 0.032321469162244675\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2807017543859649,\n\
\ \"acc_stderr\": 0.04227054451232199,\n \"acc_norm\": 0.2807017543859649,\n\
\ \"acc_norm_stderr\": 0.04227054451232199\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n\
\ \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.29894179894179895,\n \"acc_stderr\": 0.023577604791655802,\n \"\
acc_norm\": 0.29894179894179895,\n \"acc_norm_stderr\": 0.023577604791655802\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.04006168083848878,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.04006168083848878\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110175,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110175\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.4096774193548387,\n \"acc_stderr\": 0.027976054915347364,\n \"\
acc_norm\": 0.4096774193548387,\n \"acc_norm_stderr\": 0.027976054915347364\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.32019704433497537,\n \"acc_stderr\": 0.032826493853041504,\n \"\
acc_norm\": 0.32019704433497537,\n \"acc_norm_stderr\": 0.032826493853041504\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.5696969696969697,\n \"acc_stderr\": 0.03866225962879077,\n\
\ \"acc_norm\": 0.5696969696969697,\n \"acc_norm_stderr\": 0.03866225962879077\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.5606060606060606,\n \"acc_stderr\": 0.035360859475294805,\n \"\
acc_norm\": 0.5606060606060606,\n \"acc_norm_stderr\": 0.035360859475294805\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.5440414507772021,\n \"acc_stderr\": 0.03594413711272438,\n\
\ \"acc_norm\": 0.5440414507772021,\n \"acc_norm_stderr\": 0.03594413711272438\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.023901157979402538,\n\
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.023901157979402538\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2777777777777778,\n \"acc_stderr\": 0.027309140588230186,\n \
\ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.027309140588230186\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3949579831932773,\n \"acc_stderr\": 0.03175367846096624,\n \
\ \"acc_norm\": 0.3949579831932773,\n \"acc_norm_stderr\": 0.03175367846096624\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.03802039760107903,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.03802039760107903\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.5651376146788991,\n \"acc_stderr\": 0.021254631465609283,\n \"\
acc_norm\": 0.5651376146788991,\n \"acc_norm_stderr\": 0.021254631465609283\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.3194444444444444,\n \"acc_stderr\": 0.031798763421768524,\n \"\
acc_norm\": 0.3194444444444444,\n \"acc_norm_stderr\": 0.031798763421768524\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.5980392156862745,\n \"acc_stderr\": 0.034411900234824655,\n \"\
acc_norm\": 0.5980392156862745,\n \"acc_norm_stderr\": 0.034411900234824655\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6244725738396625,\n \"acc_stderr\": 0.03152256243091156,\n \
\ \"acc_norm\": 0.6244725738396625,\n \"acc_norm_stderr\": 0.03152256243091156\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5515695067264574,\n\
\ \"acc_stderr\": 0.03337883736255097,\n \"acc_norm\": 0.5515695067264574,\n\
\ \"acc_norm_stderr\": 0.03337883736255097\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.4122137404580153,\n \"acc_stderr\": 0.04317171194870255,\n\
\ \"acc_norm\": 0.4122137404580153,\n \"acc_norm_stderr\": 0.04317171194870255\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6611570247933884,\n \"acc_stderr\": 0.04320767807536671,\n \"\
acc_norm\": 0.6611570247933884,\n \"acc_norm_stderr\": 0.04320767807536671\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5555555555555556,\n\
\ \"acc_stderr\": 0.04803752235190193,\n \"acc_norm\": 0.5555555555555556,\n\
\ \"acc_norm_stderr\": 0.04803752235190193\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.5337423312883436,\n \"acc_stderr\": 0.039194155450484096,\n\
\ \"acc_norm\": 0.5337423312883436,\n \"acc_norm_stderr\": 0.039194155450484096\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.375,\n\
\ \"acc_stderr\": 0.04595091388086298,\n \"acc_norm\": 0.375,\n \
\ \"acc_norm_stderr\": 0.04595091388086298\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.5631067961165048,\n \"acc_stderr\": 0.04911147107365777,\n\
\ \"acc_norm\": 0.5631067961165048,\n \"acc_norm_stderr\": 0.04911147107365777\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7008547008547008,\n\
\ \"acc_stderr\": 0.029996951858349476,\n \"acc_norm\": 0.7008547008547008,\n\
\ \"acc_norm_stderr\": 0.029996951858349476\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5721583652618135,\n\
\ \"acc_stderr\": 0.017692787927803728,\n \"acc_norm\": 0.5721583652618135,\n\
\ \"acc_norm_stderr\": 0.017692787927803728\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.4913294797687861,\n \"acc_stderr\": 0.0269150473553698,\n\
\ \"acc_norm\": 0.4913294797687861,\n \"acc_norm_stderr\": 0.0269150473553698\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32513966480446926,\n\
\ \"acc_stderr\": 0.01566654278505354,\n \"acc_norm\": 0.32513966480446926,\n\
\ \"acc_norm_stderr\": 0.01566654278505354\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.4084967320261438,\n \"acc_stderr\": 0.02814640599309636,\n\
\ \"acc_norm\": 0.4084967320261438,\n \"acc_norm_stderr\": 0.02814640599309636\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5016077170418006,\n\
\ \"acc_stderr\": 0.02839794490780661,\n \"acc_norm\": 0.5016077170418006,\n\
\ \"acc_norm_stderr\": 0.02839794490780661\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.49382716049382713,\n \"acc_stderr\": 0.027818623962583295,\n\
\ \"acc_norm\": 0.49382716049382713,\n \"acc_norm_stderr\": 0.027818623962583295\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.35815602836879434,\n \"acc_stderr\": 0.028602085862759415,\n \
\ \"acc_norm\": 0.35815602836879434,\n \"acc_norm_stderr\": 0.028602085862759415\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3396349413298566,\n\
\ \"acc_stderr\": 0.012095592506931976,\n \"acc_norm\": 0.3396349413298566,\n\
\ \"acc_norm_stderr\": 0.012095592506931976\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.27941176470588236,\n \"acc_stderr\": 0.027257202606114944,\n\
\ \"acc_norm\": 0.27941176470588236,\n \"acc_norm_stderr\": 0.027257202606114944\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.4117647058823529,\n \"acc_stderr\": 0.019910377463105932,\n \
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.019910377463105932\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5363636363636364,\n\
\ \"acc_stderr\": 0.04776449162396197,\n \"acc_norm\": 0.5363636363636364,\n\
\ \"acc_norm_stderr\": 0.04776449162396197\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5755102040816327,\n \"acc_stderr\": 0.031642094879429414,\n\
\ \"acc_norm\": 0.5755102040816327,\n \"acc_norm_stderr\": 0.031642094879429414\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.527363184079602,\n\
\ \"acc_stderr\": 0.035302355173346824,\n \"acc_norm\": 0.527363184079602,\n\
\ \"acc_norm_stderr\": 0.035302355173346824\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.41566265060240964,\n\
\ \"acc_stderr\": 0.03836722176598052,\n \"acc_norm\": 0.41566265060240964,\n\
\ \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6023391812865497,\n \"acc_stderr\": 0.0375363895576169,\n\
\ \"acc_norm\": 0.6023391812865497,\n \"acc_norm_stderr\": 0.0375363895576169\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4453666576267616,\n\
\ \"mc2_stderr\": 0.015036118833065276\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6740331491712708,\n \"acc_stderr\": 0.01317378263692219\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2562547384382108,\n \
\ \"acc_stderr\": 0.012025145867332842\n }\n}\n```"
repo_url: https://huggingface.co/lodrick-the-lafted/Winged-Lagomorph-2x13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|arc:challenge|25_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|gsm8k|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hellaswag|10_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-17T18-07-01.785781.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-17T18-07-01.785781.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- '**/details_harness|winogrande|5_2024-01-17T18-07-01.785781.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-17T18-07-01.785781.parquet'
- config_name: results
data_files:
- split: 2024_01_17T18_07_01.785781
path:
- results_2024-01-17T18-07-01.785781.parquet
- split: latest
path:
- results_2024-01-17T18-07-01.785781.parquet
---
# Dataset Card for Evaluation run of lodrick-the-lafted/Winged-Lagomorph-2x13B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [lodrick-the-lafted/Winged-Lagomorph-2x13B](https://huggingface.co/lodrick-the-lafted/Winged-Lagomorph-2x13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lodrick-the-lafted__Winged-Lagomorph-2x13B",
"harness_winogrande_5",
	split="latest")
```
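Once loaded, the per-task scores can be flattened into simple rows for comparison. As a minimal sketch that does not require downloading the dataset, the small dictionary here is an assumed excerpt of the "Latest results" JSON shown below:

```python
# Flatten a nested {task: {metric: value}} results mapping (an assumed
# excerpt of the "Latest results" JSON below) into (task, metric, value) rows.
results = {
    "harness|arc:challenge|25": {"acc": 0.44795221843003413, "acc_norm": 0.47952218430034127},
    "harness|hellaswag|10": {"acc": 0.5243975303724357, "acc_norm": 0.6938856801433977},
}

rows = [
    (task, metric, value)
    for task, metrics in results.items()
    for metric, value in metrics.items()
]

# Print one aligned line per (task, metric) pair.
for task, metric, value in rows:
    print(f"{task:30s} {metric:10s} {value:.4f}")
```

The same loop works unchanged on the full `"all"` section of the results JSON.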
## Latest results
These are the [latest results from run 2024-01-17T18:07:01.785781](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Winged-Lagomorph-2x13B/blob/main/results_2024-01-17T18-07-01.785781.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find the results for each eval in its "latest" split):
```python
{
"all": {
"acc": 0.44701093892342236,
"acc_stderr": 0.03445607638165826,
"acc_norm": 0.44980402546373816,
"acc_norm_stderr": 0.03519159952391745,
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608763,
"mc2": 0.4453666576267616,
"mc2_stderr": 0.015036118833065276
},
"harness|arc:challenge|25": {
"acc": 0.44795221843003413,
"acc_stderr": 0.01453201149821167,
"acc_norm": 0.47952218430034127,
"acc_norm_stderr": 0.014599131353035005
},
"harness|hellaswag|10": {
"acc": 0.5243975303724357,
"acc_stderr": 0.004983837641502894,
"acc_norm": 0.6938856801433977,
"acc_norm_stderr": 0.004599358920909553
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.35555555555555557,
"acc_stderr": 0.04135176749720386,
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.04135176749720386
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.040179012759817494,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.040179012759817494
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.44150943396226416,
"acc_stderr": 0.030561590426731837,
"acc_norm": 0.44150943396226416,
"acc_norm_stderr": 0.030561590426731837
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.375,
"acc_stderr": 0.04048439222695598,
"acc_norm": 0.375,
"acc_norm_stderr": 0.04048439222695598
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3988439306358382,
"acc_stderr": 0.037336266553835096,
"acc_norm": 0.3988439306358382,
"acc_norm_stderr": 0.037336266553835096
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.045766654032077636,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.045766654032077636
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.425531914893617,
"acc_stderr": 0.032321469162244675,
"acc_norm": 0.425531914893617,
"acc_norm_stderr": 0.032321469162244675
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2807017543859649,
"acc_stderr": 0.04227054451232199,
"acc_norm": 0.2807017543859649,
"acc_norm_stderr": 0.04227054451232199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4689655172413793,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.4689655172413793,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.29894179894179895,
"acc_stderr": 0.023577604791655802,
"acc_norm": 0.29894179894179895,
"acc_norm_stderr": 0.023577604791655802
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.04006168083848878,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.04006168083848878
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110175,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110175
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4096774193548387,
"acc_stderr": 0.027976054915347364,
"acc_norm": 0.4096774193548387,
"acc_norm_stderr": 0.027976054915347364
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.32019704433497537,
"acc_stderr": 0.032826493853041504,
"acc_norm": 0.32019704433497537,
"acc_norm_stderr": 0.032826493853041504
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5696969696969697,
"acc_stderr": 0.03866225962879077,
"acc_norm": 0.5696969696969697,
"acc_norm_stderr": 0.03866225962879077
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5606060606060606,
"acc_stderr": 0.035360859475294805,
"acc_norm": 0.5606060606060606,
"acc_norm_stderr": 0.035360859475294805
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5440414507772021,
"acc_stderr": 0.03594413711272438,
"acc_norm": 0.5440414507772021,
"acc_norm_stderr": 0.03594413711272438
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.023901157979402538,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.023901157979402538
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.027309140588230186,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.027309140588230186
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3949579831932773,
"acc_stderr": 0.03175367846096624,
"acc_norm": 0.3949579831932773,
"acc_norm_stderr": 0.03175367846096624
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.03802039760107903,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.03802039760107903
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5651376146788991,
"acc_stderr": 0.021254631465609283,
"acc_norm": 0.5651376146788991,
"acc_norm_stderr": 0.021254631465609283
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3194444444444444,
"acc_stderr": 0.031798763421768524,
"acc_norm": 0.3194444444444444,
"acc_norm_stderr": 0.031798763421768524
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5980392156862745,
"acc_stderr": 0.034411900234824655,
"acc_norm": 0.5980392156862745,
"acc_norm_stderr": 0.034411900234824655
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6244725738396625,
"acc_stderr": 0.03152256243091156,
"acc_norm": 0.6244725738396625,
"acc_norm_stderr": 0.03152256243091156
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5515695067264574,
"acc_stderr": 0.03337883736255097,
"acc_norm": 0.5515695067264574,
"acc_norm_stderr": 0.03337883736255097
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.4122137404580153,
"acc_stderr": 0.04317171194870255,
"acc_norm": 0.4122137404580153,
"acc_norm_stderr": 0.04317171194870255
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6611570247933884,
"acc_stderr": 0.04320767807536671,
"acc_norm": 0.6611570247933884,
"acc_norm_stderr": 0.04320767807536671
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04803752235190193,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04803752235190193
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5337423312883436,
"acc_stderr": 0.039194155450484096,
"acc_norm": 0.5337423312883436,
"acc_norm_stderr": 0.039194155450484096
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.375,
"acc_stderr": 0.04595091388086298,
"acc_norm": 0.375,
"acc_norm_stderr": 0.04595091388086298
},
"harness|hendrycksTest-management|5": {
"acc": 0.5631067961165048,
"acc_stderr": 0.04911147107365777,
"acc_norm": 0.5631067961165048,
"acc_norm_stderr": 0.04911147107365777
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7008547008547008,
"acc_stderr": 0.029996951858349476,
"acc_norm": 0.7008547008547008,
"acc_norm_stderr": 0.029996951858349476
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5721583652618135,
"acc_stderr": 0.017692787927803728,
"acc_norm": 0.5721583652618135,
"acc_norm_stderr": 0.017692787927803728
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4913294797687861,
"acc_stderr": 0.0269150473553698,
"acc_norm": 0.4913294797687861,
"acc_norm_stderr": 0.0269150473553698
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.32513966480446926,
"acc_stderr": 0.01566654278505354,
"acc_norm": 0.32513966480446926,
"acc_norm_stderr": 0.01566654278505354
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4084967320261438,
"acc_stderr": 0.02814640599309636,
"acc_norm": 0.4084967320261438,
"acc_norm_stderr": 0.02814640599309636
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5016077170418006,
"acc_stderr": 0.02839794490780661,
"acc_norm": 0.5016077170418006,
"acc_norm_stderr": 0.02839794490780661
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.49382716049382713,
"acc_stderr": 0.027818623962583295,
"acc_norm": 0.49382716049382713,
"acc_norm_stderr": 0.027818623962583295
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35815602836879434,
"acc_stderr": 0.028602085862759415,
"acc_norm": 0.35815602836879434,
"acc_norm_stderr": 0.028602085862759415
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3396349413298566,
"acc_stderr": 0.012095592506931976,
"acc_norm": 0.3396349413298566,
"acc_norm_stderr": 0.012095592506931976
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.27941176470588236,
"acc_stderr": 0.027257202606114944,
"acc_norm": 0.27941176470588236,
"acc_norm_stderr": 0.027257202606114944
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.019910377463105932,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.019910377463105932
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5363636363636364,
"acc_stderr": 0.04776449162396197,
"acc_norm": 0.5363636363636364,
"acc_norm_stderr": 0.04776449162396197
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5755102040816327,
"acc_stderr": 0.031642094879429414,
"acc_norm": 0.5755102040816327,
"acc_norm_stderr": 0.031642094879429414
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.527363184079602,
"acc_stderr": 0.035302355173346824,
"acc_norm": 0.527363184079602,
"acc_norm_stderr": 0.035302355173346824
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-virology|5": {
"acc": 0.41566265060240964,
"acc_stderr": 0.03836722176598052,
"acc_norm": 0.41566265060240964,
"acc_norm_stderr": 0.03836722176598052
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6023391812865497,
"acc_stderr": 0.0375363895576169,
"acc_norm": 0.6023391812865497,
"acc_norm_stderr": 0.0375363895576169
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608763,
"mc2": 0.4453666576267616,
"mc2_stderr": 0.015036118833065276
},
"harness|winogrande|5": {
"acc": 0.6740331491712708,
"acc_stderr": 0.01317378263692219
},
"harness|gsm8k|5": {
"acc": 0.2562547384382108,
"acc_stderr": 0.012025145867332842
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Mitsuki-Sakamoto/alpaca_farm-deberta-re-pref-64-_fil_self_160m_bo16_2_mix_50_kl_0.1_prm_160m_thr_0.1_seed_3 | ---
dataset_info:
config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: index
dtype: int64
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: filtered_epoch
dtype: int64
- name: gen_reward
dtype: float64
- name: gen_response
dtype: string
splits:
- name: epoch_0
num_bytes: 43762247
num_examples: 18928
- name: epoch_1
num_bytes: 44355479
num_examples: 18928
- name: epoch_2
num_bytes: 44413402
num_examples: 18928
- name: epoch_3
num_bytes: 44446124
num_examples: 18928
- name: epoch_4
num_bytes: 44453469
num_examples: 18928
- name: epoch_5
num_bytes: 44441134
num_examples: 18928
- name: epoch_6
num_bytes: 44427806
num_examples: 18928
- name: epoch_7
num_bytes: 44415327
num_examples: 18928
- name: epoch_8
num_bytes: 44409544
num_examples: 18928
- name: epoch_9
num_bytes: 44408079
num_examples: 18928
- name: epoch_10
num_bytes: 44408107
num_examples: 18928
- name: epoch_11
num_bytes: 44404673
num_examples: 18928
- name: epoch_12
num_bytes: 44406717
num_examples: 18928
- name: epoch_13
num_bytes: 44404524
num_examples: 18928
- name: epoch_14
num_bytes: 44403118
num_examples: 18928
- name: epoch_15
num_bytes: 44403919
num_examples: 18928
- name: epoch_16
num_bytes: 44406501
num_examples: 18928
- name: epoch_17
num_bytes: 44406372
num_examples: 18928
- name: epoch_18
num_bytes: 44403957
num_examples: 18928
- name: epoch_19
num_bytes: 44405464
num_examples: 18928
- name: epoch_20
num_bytes: 44406776
num_examples: 18928
- name: epoch_21
num_bytes: 44405069
num_examples: 18928
- name: epoch_22
num_bytes: 44406545
num_examples: 18928
- name: epoch_23
num_bytes: 44406186
num_examples: 18928
- name: epoch_24
num_bytes: 44405986
num_examples: 18928
- name: epoch_25
num_bytes: 44405903
num_examples: 18928
- name: epoch_26
num_bytes: 44405882
num_examples: 18928
- name: epoch_27
num_bytes: 44405779
num_examples: 18928
- name: epoch_28
num_bytes: 44406020
num_examples: 18928
- name: epoch_29
num_bytes: 44406453
num_examples: 18928
download_size: 701508295
dataset_size: 1331646562
configs:
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: epoch_0
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_0-*
- split: epoch_1
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_1-*
- split: epoch_2
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_2-*
- split: epoch_3
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_3-*
- split: epoch_4
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_4-*
- split: epoch_5
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_5-*
- split: epoch_6
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_6-*
- split: epoch_7
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_7-*
- split: epoch_8
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_8-*
- split: epoch_9
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_9-*
- split: epoch_10
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_10-*
- split: epoch_11
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_11-*
- split: epoch_12
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_12-*
- split: epoch_13
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_13-*
- split: epoch_14
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_14-*
- split: epoch_15
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_15-*
- split: epoch_16
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_16-*
- split: epoch_17
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_17-*
- split: epoch_18
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_18-*
- split: epoch_19
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_19-*
- split: epoch_20
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_20-*
- split: epoch_21
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_21-*
- split: epoch_22
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_22-*
- split: epoch_23
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_23-*
- split: epoch_24
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_24-*
- split: epoch_25
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_25-*
- split: epoch_26
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_26-*
- split: epoch_27
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_27-*
- split: epoch_28
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_28-*
- split: epoch_29
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_29-*
---
|
BangumiBase/isekaidecheatskill | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Isekai De Cheat Skill
This is the image base of the bangumi *Isekai de Cheat Skill*. We detected 22 characters and 1032 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
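Since roughly 1% of the samples may be noisy, a first automated cleaning pass over an extracted character folder can be sketched as below. This is a minimal example using only exact-duplicate and file-size filtering; the folder layout and the size threshold are assumptions, and visual noise still requires manual review:

```python
import hashlib
from pathlib import Path

def dedupe_images(folder, min_bytes=1024):
    """Drop exact-duplicate and suspiciously small files from an
    extracted character folder (e.g. after unzipping 0/dataset.zip).
    Returns the list of files kept."""
    seen, kept = set(), []
    for path in sorted(Path(folder).iterdir()):
        if not path.is_file():
            continue
        data = path.read_bytes()
        if len(data) < min_bytes:        # likely a broken or empty crop
            path.unlink()
            continue
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:               # byte-identical duplicate
            path.unlink()
            continue
        seen.add(digest)
        kept.append(path)
    return kept
```

This only removes trivially bad samples; near-duplicates and wrong-character crops would need a perceptual-hash or embedding-based pass.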
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 309 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 23 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 17 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 10 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 24 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 9 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 29 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 8 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 59 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 76 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 19 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 9 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 7 | [Download](12/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 13 | 16 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 6 | [Download](14/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 15 | 10 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 15 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 11 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 73 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 10 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 52 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 240 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BirdL/WhisperGPTFull | ---
license: apache-2.0
---
All the datasets from https://huggingface.co/Whispering-GPT, concatenated together to finetune [OLM-GPT2](https://huggingface.co/Tristan/olm-gpt2-oct-2022).
gabrielmbmb/ultrafeedback-prompts-ultrajudge-gpt35 | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
dtype: string
- name: generation_prompt
dtype: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: rating
sequence: int64
- name: areas
list:
- name: Authenticity & Reliability
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Clarity & Transparency
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Compliance with Intent
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Practical Accuracy
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
splits:
- name: train
num_bytes: 18658217
num_examples: 1000
download_size: 7709122
dataset_size: 18658217
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ultrafeedback-prompts-ultrajudge-gpt35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_87 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1346207684.0
num_examples: 264377
download_size: 1373965661
dataset_size: 1346207684.0
---
# Dataset Card for "chunk_87"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kawehiwang/alpaca_dataset | ---
license: llama2
---
|
NobodyExistsOnTheInternet/ToxicQAFinal | ---
tags:
- not-for-all-audiences
---
Use only for Alignment research. NOETI is not responsible for what you might do with it. |
YangXiao-nlp/SimulateBench | ---
license: apache-2.0
task_categories:
- text-generation
- question-answering
language:
- en
---
# SimulateBench: How Far Are We from Believable AI Agents? A Framework for Evaluating the Believability of Human Behavior Simulation.
<!-- Provide a quick summary of the dataset. -->
Human behavior simulation of AI agents necessitates that the agents possess a quality of believability, which is crucial as it facilitates users in establishing trust toward the agents and streamlines the fulfillment of the agents' goals. While recent advancements in Large Language Model (LLM) based agents have improved human behavior simulation, challenges inherent to LLMs (e.g., long context modeling) can undermine their believability. Consequently, evaluating AI agent believability becomes imperative. Unfortunately, prior research often neglects the negative impacts of LLM deficiencies. To address these gaps, we introduce two metrics for assessing LLM-based agent believability: consistency and robustness, together with a benchmark, SimulateBench, to evaluate the consistency and robustness of agents implemented with popular LLMs. We find that agents (i) struggle to accurately depict character information when presented with lengthy profile inputs; (ii) exhibit vulnerability to profile perturbations; and (iii) are significantly affected by certain key factors that impact their overall believability.
## Dataset Details
<!-- Provide a longer summary of what this dataset is. -->
#### Profile Descriptive Framework & Character Dataset
The Profile Descriptive Framework is introduced to document information about a person comprehensively. It consists of three parts: Immutable Characteristic, Social Role, and Relationship. We selected characters from TV dramas of popular genres: The Simpsons (Animated), Friends (Comedy), Breaking Bad (Crime), and The Rings of Power (Science fiction). Following the profile descriptive framework, we extract each character's profile information from the corresponding Fandom wiki.
The profile is recorded in JSON format for easy use. You can find the profile of a character in the folder "/profile/". The Social Role and Relationship information is stored in a single JSON file.
For example, if you want to load the profile of the character Homer, his profile files are stored at:
Immutable Characteristic: `/profile/homer/profile_v1/basic_information.json`
Social Role, Relationship: `/profile/homer/profile_v1/roles.json`
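Loading a character's full three-part profile can then be sketched as follows. This is a minimal example under the folder layout shown above; the field names inside the JSON files are not documented here and may differ:

```python
import json
from pathlib import Path

def load_profile(root, character, version="profile_v1"):
    """Assemble the three-part profile (immutable characteristics,
    plus social roles and relationships) for one character."""
    base = Path(root) / "profile" / character / version
    with open(base / "basic_information.json", encoding="utf-8") as f:
        immutable = json.load(f)
    with open(base / "roles.json", encoding="utf-8") as f:
        roles = json.load(f)  # social roles and relationships together
    return {"immutable_characteristic": immutable, "roles": roles}
```

The returned dict can be serialized directly into the system prompt used to condition the agent.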
#### Consistency Dataset & Robustness Dataset
The two datasets are proposed to test the consistency and robustness of agents prompted with the profile of a character to simulate that character. Both datasets are composed of single-choice questions and their gold answers. Following the profile descriptive framework, there are three kinds of questions, related to Immutable Characteristics, Social Roles, and Relationships. For a character, you can find the datasets in the folder "/benchmark_only_QA".
For example, if you want to test the agent when simulating the character of Homer, his dataset is stored in:
Immutable Characteristic: `/benchmark_only_QA/basic_information/homer/questions.json`
Social Role: `/benchmark_only_QA/role_non_relation/homer/questions.json`
Relationship: `/benchmark_only_QA/role_relation/homer/questions.json`
> To test the agent's consistency, we first ask the agent to simulate the character, then ask it to answer the corresponding single-choice questions in the Consistency Dataset. The accuracy score is used as the measure of consistency.
> The Robustness Dataset comprises the datasets whose names follow the format 'homer_{variant}'. To test the agent's robustness, we compare its performance on the Consistency Dataset and the Robustness Dataset. For example, to test robustness against age perturbations, we first change the birthday-year field of Homer's profile from 1956 to 1985. We then ask the agent to simulate homer ('/profile/homer/') and homer_1985 ('/profile/homer_1985/') by prompting the two profiles to the agent respectively. Next, we ask the agent to finish the tests in '/benchmark_only_QA/{question_type}/homer/questions.json' and '/benchmark_only_QA/{question_type}/homer_1985/questions.json' respectively. Finally, we compare the two scores on the two datasets to analyze the agent's robustness.
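The accuracy scoring and the consistency-vs-robustness comparison described above can be sketched as below. The question and answer field names are assumptions about the `questions.json` layout, and `predict` stands in for whatever call produces the agent's chosen option:

```python
def accuracy(questions, predict):
    """Fraction of single-choice questions answered with the gold answer.
    `predict` maps one question dict to the agent's chosen option."""
    correct = sum(1 for q in questions if predict(q) == q["gold_answer"])
    return correct / len(questions)

def robustness_gap(base_questions, perturbed_questions, predict):
    """Accuracy drop when the profile is perturbed
    (e.g. homer -> homer_1985): smaller gap = more robust."""
    return accuracy(base_questions, predict) - accuracy(perturbed_questions, predict)
```

With this, the robustness analysis reduces to running the same agent over the paired question files and inspecting the gap.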
<!--
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]-->
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [SimulateBench](https://github.com/GAIR-NLP/SimulateBench)
- **Paper:** [How Far Are We from Believable AI Agents? A Framework for Evaluating the Believability of Human Behavior Simulation](https://arxiv.org/abs/2312.17115)
<!--## Uses-->
<!-- Address questions around how the dataset is intended to be used. -->
<!--### Direct Use-->
<!-- This section describes suitable use cases for the dataset. -->
<!--[More Information Needed]-->
<!--### Out-of-Scope Use-->
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
<!--[More Information Needed]
## Dataset Structure-->
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
<!--[More Information Needed]
## Dataset Creation
### Curation Rationale-->
<!-- Motivation for the creation of this dataset. -->
<!--[More Information Needed]
### Source Data-->
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
<!--#### Data Collection and Processing-->
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
<!--[More Information Needed]
#### Who are the source data producers?-->
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
<!--[More Information Needed]
### Annotations [optional]-->
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
<!--#### Annotation process-->
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
<!--[More Information Needed]
#### Who are the annotators?-->
<!-- This section describes the people or systems who created the annotations. -->
<!--[More Information Needed]
#### Personal and Sensitive Information-->
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
<!--[More Information Needed]
## Bias, Risks, and Limitations-->
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
<!--[More Information Needed]-->
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
<!--**BibTeX:**-->
```
@misc{xiao2023far,
  title={How Far Are We from Believable AI Agents? A Framework for Evaluating the Believability of Human Behavior Simulation},
  author={Yang Xiao and Yi Cheng and Jinlan Fu and Jiashuo Wang and Wenjie Li and Pengfei Liu},
  year={2023},
  eprint={2312.17115},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
<!--**APA:**
[More Information Needed]
## Glossary [optional]-->
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
<!--[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]-->
<!--## Dataset Card Contact
[More Information Needed]--> |
mboth/medienVersorgen-100-undersampled | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: Datatype
dtype: string
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Unit
dtype: string
- name: text
dtype: string
- name: Grundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
'0': Bereitstellen
'1': Entsorgen
'2': Speichern
'3': Verteilen
splits:
- name: train
num_bytes: 59754.580327868855
num_examples: 303
- name: test
num_bytes: 14725
num_examples: 77
- name: valid
num_bytes: 14725
num_examples: 77
download_size: 42237
dataset_size: 89204.58032786885
---
# Dataset Card for "medienVersorgen-100-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_cola_regularized_plurals | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 4013
num_examples: 54
- name: test
num_bytes: 3805
num_examples: 52
- name: train
num_bytes: 25529
num_examples: 341
download_size: 20659
dataset_size: 33347
---
# Dataset Card for "MULTI_VALUE_cola_regularized_plurals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minnq/dataset | ---
license: mit
---
|
CATIE-AQ/mnli_fr_prompt_textual_entailment | ---
licence: mit
language:
- fr
size_categories:
- 100K<n<1M
task_categories:
- text-classification
tags:
- textual-entailment
- DFP
- french prompts
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
source_datasets:
- multilingual-NLI-26lang-2mil7
---
# mnli_fr_prompt_textual_entailment
## Summary
**mnli_fr_prompt_textual_entailment** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **550,000** rows that can be used for a textual entailment task.
The original data (without prompts) comes from the dataset [multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) by Laurer et al. where only the mnli French part has been kept.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
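Applying one of the prompts to a premise/hypothesis pair to build the input and target columns can be sketched as follows. The prompt template is taken from the list below; the integer-to-label mapping is an assumption based on the usual MNLI convention (0 = entailment, 1 = neutral, 2 = contradiction):

```python
# Build an (inputs, targets) pair in the xP3 style from one NLI example.
LABEL_TO_TARGET = {0: "vrai", 1: "incertain", 2: "faux"}  # assumed MNLI label order

def build_example(premise, hypothesis, label):
    """Instantiate one prompt template and map the NLI label to its
    French answer word."""
    prompt = ('"' + premise + '"\nQuestion : Cela implique-t-il que "'
              + hypothesis + '" ? "vrai", "faux", ou "incertain" ?')
    return {"inputs": prompt, "targets": LABEL_TO_TARGET[label]}
```

Repeating this over all templates and all MNLI-fr rows yields the prompted rows of this subset.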
## Prompts used
### List
22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the informal *tu* form (tutoiement), and in the formal *vous* form (vouvoiement).
```
"""Prendre l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
"""Prends l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
"""Prenez l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
'"'+premise+'"\nQuestion : Cela implique-t-il que "'+hypothesis+'" ? "vrai", "faux", ou "incertain" ?',
'"'+premise+'"\nQuestion : "'+hypothesis+'" est "vrai", "faux", ou "peut-être" ?',
""" " """+premise+""" "\n D'après le passage précédent, est-il vrai que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nSur la base de ces informations, l'énoncé est-il : " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn gardant à l'esprit le texte ci-dessus, considérez : " """+hypothesis+""" "\n Est-ce que c'est "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn gardant à l'esprit le texte ci-dessus, considére : " """+hypothesis+""" "\n Est-ce que c'est "vrai", "faux", ou "peut-être" ?""",
""" " """+premise+""" "\nEn utilisant uniquement la description ci-dessus et ce que vous savez du monde, " """+hypothesis+""" " est-ce "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn utilisant uniquement la description ci-dessus et ce que tu sais du monde, " """+hypothesis+""" " est-ce "vrai", "faux", ou "incertain" ?""",
"""Étant donné que " """+premise+""" ", s'ensuit-il que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Étant donné que " """+premise+""" ", est-il garanti que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
'Étant donné '+premise+', doit-on supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Étant donné '+premise+', dois-je supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Sachant que '+premise+', doit-on supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Sachant que '+premise+', dois-je supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Étant donné que '+premise+', il doit donc être vrai que '+hypothesis+' ? "vrai", "faux", ou "incertain" ?',
"""Supposons que " """+premise+""" ", pouvons-nous déduire que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons que " """+premise+""" ", puis-je déduire que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons qu'il est vrai que " """+premise+""" ". Alors, est-ce que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons qu'il soit vrai que " """+premise+""" ",\n Donc, " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?"""
```
### Features used in the prompts
In the prompt list above, `premise`, `hypothesis` and `targets` have been constructed from:
```
moritz = load_dataset('MoritzLaurer/multilingual-NLI-26lang-2mil7')
mnli = moritz['fr_mnli']
mnli['premise'] = list(map(lambda i: i.replace(' . ','. ').replace(' .','. ').replace('( ','(').replace(' )',')').replace(' , ',', ').replace(', ',', ').replace("' ","'"), map(str,mnli['premise'])))
mnli['hypothesis'] = list(map(lambda x: x.replace(' . ','. ').replace(' .','. ').replace('( ','(').replace(' )',')').replace(' , ',', ').replace(', ',', ').replace("' ","'"), map(str,mnli['hypothesis'])))
targets = str(mnli['label'][i]).replace("0","vrai").replace("1","incertain").replace("2","faux")
```
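As an illustrative sketch of how one template and one label combine into an (input, target) pair (the function name and example strings below are hypothetical, not taken from the DFP build scripts):

```python
# Hypothetical sketch: build one (input, target) pair from a single template.
# The premise/hypothesis strings and the function name are illustrative only.
def build_example(premise: str, hypothesis: str, label: int) -> tuple[str, str]:
    # One of the 22 templates listed above (indicative form).
    inputs = (
        'Étant donné ' + premise + ', doit-on supposer que ' + hypothesis
        + ' est "vrai", "faux", ou "incertain" ?'
    )
    # MNLI label convention: 0 = entailment, 1 = neutral, 2 = contradiction.
    targets = str(label).replace("0", "vrai").replace("1", "incertain").replace("2", "faux")
    return inputs, targets

inputs, targets = build_example("Il pleut à Bordeaux.", "Il fait beau à Bordeaux.", 2)
# targets == "faux"
```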
# Splits
- `train` with 550,000 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/mnli_fr_prompt_textual_entailment")
```
# Citation
## Original data
```
@article{laurer_less_2022,
  title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
  url = {https://osf.io/74b8k},
  language = {en-us},
  urldate = {2022-07-28},
  journal = {Preprint},
  author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
  month = jun,
  year = {2022},
  note = {Publisher: Open Science Framework},
}
```
## This Dataset
```
@misc{centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
  author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
  title = { DFP (Revision 1d24c09) },
  year = 2023,
  url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
  doi = { 10.57967/hf/1200 },
  publisher = { Hugging Face }
}
```
## License
mit |
oaimli/PeerSum | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
pretty_name: PeerSum
size_categories:
- 10K<n<100K
---
This is PeerSum, a multi-document summarization dataset in the peer-review domain. More details can be found in the paper accepted at EMNLP 2023, [Summarizing Multiple Documents with Conversational Structure for Meta-review Generation](https://arxiv.org/abs/2305.01498). The original code and datasets are public on [GitHub](https://github.com/oaimli/PeerSum).
Please use the following code to download the dataset with the `datasets` library from Hugging Face.
```python
from datasets import load_dataset
peersum_all = load_dataset('oaimli/PeerSum', split='all')
peersum_train = peersum_all.filter(lambda s: s['label'] == 'train')
peersum_val = peersum_all.filter(lambda s: s['label'] == 'val')
peersum_test = peersum_all.filter(lambda s: s['label'] == 'test')
```
The Hugging Face dataset is mainly for multi-document summarization. Each sample contains the following keys:
```
* paper_id: str (a link to the raw data)
* paper_title: str
* paper_abstract: str
* paper_acceptance: str
* meta_review: str
* review_ids: list(str)
* review_writers: list(str)
* review_contents: list(str)
* review_ratings: list(int)
* review_confidences: list(int)
* review_reply_tos: list(str)
* label: str (train, val, test)
```
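For instance, a minimal sketch of flattening one sample into a (source, target) pair for a flat-input summarizer (field names are taken from the key list above; the separator token, function name, and example values are assumptions, not part of the dataset):

```python
# Hypothetical sketch: turn one PeerSum sample into a single
# multi-document summarization (source, target) pair.
def to_summarization_pair(sample: dict) -> tuple[str, str]:
    # Concatenate the paper abstract and all reviews with a separator token,
    # one common way to feed multiple documents to a flat-input summarizer.
    source = " <doc-sep> ".join([sample["paper_abstract"]] + sample["review_contents"])
    target = sample["meta_review"]
    return source, target

sample = {
    "paper_abstract": "We propose X.",
    "review_contents": ["Strong paper.", "Needs more experiments."],
    "meta_review": "Accept: reviewers agree the idea is sound.",
}
src, tgt = to_summarization_pair(sample)
# src joins the abstract and both reviews with " <doc-sep> "
```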
You can also download the raw data from [Google Drive](https://drive.google.com/drive/folders/1SGYvxY1vOZF2MpDn3B-apdWHCIfpN2uB?usp=sharing). The raw data comprises more information and can be used for other analyses of peer reviews. |
puddleglum/esm_chem_quarter | ---
dataset_info:
features:
- name: labels
sequence: int64
- name: reactions
sequence: int64
- name: highly_masked_sequences
sequence: int64
- name: binding_site_masked_sequences
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 6572644710.888085
num_examples: 691480
- name: test
num_bytes: 1642676818.3613033
num_examples: 172850
download_size: 41290028
dataset_size: 8215321529.249389
---
# Dataset Card for "esm_chem_quarter"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mitsuki-Sakamoto/alpaca_farm-reward-model-deberta-v3-large-v2-re-preference-64-nsample-2-16_mix_random_seed_3 | ---
dataset_info:
- config_name: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: index
dtype: int64
splits:
- name: preference
num_bytes: 25889425.028748564
num_examples: 20000
download_size: 12358176
dataset_size: 25889425.028748564
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: index
dtype: int64
splits:
- name: preference
num_bytes: 25900235.98820059
num_examples: 20000
download_size: 12311452
dataset_size: 25900235.98820059
configs:
- config_name: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500
data_files:
- split: preference
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/preference-*
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: preference
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/preference-*
---
|
vilm/MathPile-Textbooks | ---
dataset_info:
features:
- name: text
dtype: string
- name: subset
dtype: string
- name: meta
struct:
- name: book_name
dtype: string
- name: type
dtype: string
- name: file_path
dtype: string
splits:
- name: train
num_bytes: 379550590
num_examples: 784
download_size: 166989636
dataset_size: 379550590
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gokuls/processed_eval_coco | ---
dataset_info:
features:
- name: image_path
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: pixel_values
sequence:
sequence:
sequence: float32
splits:
- name: validation
num_bytes: 3026780000
num_examples: 5000
download_size: 920275832
dataset_size: 3026780000
---
# Dataset Card for "processed_eval_coco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |