datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
olm/olm-CC-MAIN-2017-22-sampling-ratio-0.16178770949 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM May 2017 Common Crawl
size_categories:
- 10M<n<100M
source_datasets: []
tags:
- pretraining
- language modelling
- common crawl
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM May 2017 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the May 2017 Common Crawl snapshot.
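A minimal sketch (plain Python, with toy Unix-second values standing in for real rows) of how implausible `last_modified_timestamp` outliers might be dropped before computing statistics; the cutoff dates are illustrative assumptions, not part of the dataset:

```python
from datetime import datetime, timezone

# Toy last_modified_timestamp values in Unix seconds; the huge entry
# mimics a malformed Last-Modified header.
timestamps = [1495000000, 1495600000, 1496100000, 999999999999, 1494900000]

# Illustrative plausibility window for a May 2017 crawl snapshot.
lo = datetime(1995, 1, 1, tzinfo=timezone.utc).timestamp()
hi = datetime(2017, 6, 1, tzinfo=timezone.utc).timestamp()

# Keep only timestamps inside the window; the malformed value is dropped.
clean = [t for t in timestamps if lo <= t <= hi]
print(len(clean))
```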
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of incorrect outliers, so we recommend removing them before computing statistics with `last_modified_timestamp`. |
marup/GravesRVC200EpochsMW2 | ---
license: openrail
---
|
bigai/DiPlomat | ---
language:
- en
license: cc-by-nc-sa-4.0
pretty_name: DiPlomat
dataset_info:
- config_name: CQA
features:
- name: text
sequence: string
- name: speaker
sequence: string
- name: gold_statement
dtype: string
- name: questions
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 19805284
num_examples: 15585
- name: validation
num_bytes: 1959148
num_examples: 1559
- name: test
num_bytes: 2967746
num_examples: 2338
download_size: 25566918
dataset_size: 24732178
- config_name: NLI_with_context
features:
- name: dialogue
sequence: string
- name: speaker
sequence: string
- name: human answer
dtype: string
splits:
- name: train
num_bytes: 2977929
num_examples: 2551
download_size: 3042193
dataset_size: 2977929
- config_name: NLI_without_context
features:
- name: text
dtype: string
- name: hypothesis
dtype: string
splits:
- name: train
num_bytes: 1095335
num_examples: 2551
download_size: 1146864
dataset_size: 1095335
- config_name: PIR_first
features:
- name: text
sequence: string
- name: speaker
sequence: string
- name: correct_turn_number
sequence: int64
splits:
- name: train
num_bytes: 3442927
num_examples: 3341
- name: validation
num_bytes: 161433
num_examples: 168
- name: test
num_bytes: 687605
num_examples: 668
download_size: 4366468
dataset_size: 4291965
- config_name: PIR_second
features:
- name: text
sequence: string
- name: speaker
sequence: string
- name: correct_turn_number
dtype: int64
- name: label
dtype: int64
- name: choice
sequence: string
splits:
- name: train
num_bytes: 9263111
num_examples: 5188
- name: validation
num_bytes: 399924
num_examples: 244
- name: test
num_bytes: 1890798
num_examples: 1062
download_size: 11740508
dataset_size: 11553833
---
# DiPlomat
<!-- Provide a quick summary of the dataset. -->
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life
conversations and is essential for the development of communicative social agents.
In this paper, we introduce a novel challenge, **DiPlomat**, aiming at benchmarking machines’ capabilities
on pragmatic reasoning and situated conversational understanding.
Compared with previous works that treat different figurative expressions
(e.g. metaphor, sarcasm) as individual tasks, **DiPlomat** provides a cohesive framework
towards general pragmatic understanding.
## Dataset Details
The **DiPlomat** dataset contains 4,177 dialogues and covers a vocabulary of 48,900 words.
Beyond that, the 6,494 human-annotated answers
hold a vocabulary of 20,000 words and cover 5 types of reasoning.
Along with the dataset, we propose two tasks:
**Pragmatic Identification and Reasoning (PIR)** and **Conversational Question Answering (CQA)**. Furthermore, we provide the
data that we use for **zero-shot NLI**.
- **Language(s) (NLP):** English
- **License:** [CC BY-NC-SA (Attribution-NonCommercial-ShareAlike)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [link](https://github.com/diplomat-dataset/diplomat)
- **Paper:** [DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning](https://arxiv.org/abs/2306.09030)
- **Website:** [link](https://diplomat-dataset.github.io)
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
| Field | Task|
| --- | --- |
| ``PIR_first`` | Pragmatic Identification and Reasoning Subtask1|
| ``PIR_second`` | Pragmatic Identification and Reasoning Subtask2|
| ``CQA`` | Conversational Question Answering|
| ``NLI_with_context`` | Zero-Shot NLI with context|
| ``NLI_without_context`` | Zero-Shot NLI without context|
**NOTE:** If you'd like to evaluate on the whole PIR task, please don't change the order of the PIR Subtask 1 and Subtask 2 test files, as both are deliberately arranged in the same order.
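The index alignment between the two PIR test files can be sketched as follows (a hedged illustration: toy in-memory rows stand in for the real splits, which would be loaded with `load_dataset("bigai/DiPlomat", "PIR_first")` and `load_dataset("bigai/DiPlomat", "PIR_second")`, using the config names from this card):

```python
# Toy stand-ins for the PIR_first and PIR_second test splits.
pir_first_test = [
    {"text": ["turn 1", "turn 2"], "correct_turn_number": [1]},
    {"text": ["turn 1", "turn 2", "turn 3"], "correct_turn_number": [2]},
]
pir_second_test = [{"label": 1}, {"label": 0}]

# Both test files are deliberately arranged in the same order, so row i
# of Subtask 1 corresponds to row i of Subtask 2: pair by index only,
# never by sorting or shuffling.
paired = list(zip(pir_first_test, pir_second_test))
print(len(paired))
```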
## Dataset Creation
### Source Data
We leverage the data of the [INTERVIEW dataset](https://www.kaggle.com/datasets/shuyangli94/interview-npr-media-dialog-transcripts) collected by
Majumder et al. as our source.
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
### Annotating Process
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
#### Step I. Automatic Selection:
The extensive size of the source dataset introduces redundancy,
and thus requires automatic measures to alleviate the burden of human annotation.
Therefore, we employ algorithms and models to perform an initial filtering process.
#### Step II. Fine-grained Annotation:
We leverage Amazon Mechanical Turk to conduct detailed annotations of pragmatic turns within our dialogues.
Workers participating in the annotation task are instructed to select
all turns that exhibit a divergence between their literal meaning and their intended meaning.
Due to the subjective nature of pragmatic reasoning, we request the workers to provide confidence scores
along with reasons for their choices.
#### Step III. Human Refinement:
In this process, tasks for workers are formulated as multiple-choice questions.
Previously collected human-annotated reasons are transformed into choices, utilizing a template format:
[turn {turn_id}: {reason}]. In addition, to mitigate the impact of careless workers,
we introduce a distractor choice for each gold choice.
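The choice template above can be sketched as a small helper (the function name is hypothetical; the `turn_id` and `reason` fields follow the `[turn {turn_id}: {reason}]` format described in the card):

```python
# Hypothetical helper rendering a human-annotated reason into the
# multiple-choice template "[turn {turn_id}: {reason}]".
def make_choice(turn_id: int, reason: str) -> str:
    return f"[turn {turn_id}: {reason}]"

gold = make_choice(3, "the speaker is being sarcastic")
# A distractor for the same question reuses the template with an
# incorrect turn or reason.
distractor = make_choice(1, "the speaker is being sarcastic")
print(gold)
```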
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```
@inproceedings{li2023diplomat,
title={DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning},
author={Hengli Li and Song-Chun Zhu and Zilong Zheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023}
}
```
## Dataset Card Contact
If there is any problem with the dataset, please email [lihengli@stu.pku.edu.cn](mailto:2000017754@stu.pku.edu.cn).
|
muhammadravi251001/indonesian-nli-and-qa | ---
license: mit
---
|
mask-distilled-one-sec-cv12/chunk_96 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1248125580
num_examples: 245115
download_size: 1273303637
dataset_size: 1248125580
---
# Dataset Card for "chunk_96"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Babypotatotang/logo-captioning-blip | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 166009745.001
num_examples: 12911
- name: test
num_bytes: 41712829.864
num_examples: 3228
download_size: 208992324
dataset_size: 207722574.86499998
---
# Dataset Card for "logo-captioning-blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tverous/claim-amr-test | ---
dataset_info:
features:
- name: uid
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: reason
dtype: string
- name: claim_cleaned_amr
dtype: string
splits:
- name: train
num_bytes: 2143
num_examples: 2
download_size: 11051
dataset_size: 2143
---
# Dataset Card for "claim-amr-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/find_second_sent_train_50_eval_10 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 133699
num_examples: 110
- name: validation
num_bytes: 9071
num_examples: 10
download_size: 80138
dataset_size: 142770
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_second_sent_train_50_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HPGomes/MarioVoice | ---
license: openrail
---
|
adamjweintraut/eli5_precomputed_best | ---
dataset_info:
features:
- name: index
dtype: int64
- name: q_id
dtype: string
- name: question
dtype: string
- name: best_answer
dtype: string
- name: all_answers
sequence: string
- name: num_answers
dtype: int64
- name: top_answers
sequence: string
- name: num_top_answers
dtype: int64
- name: context
dtype: string
- name: orig
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 5071864050.742018
num_examples: 183333
- name: test
num_bytes: 633993380.6289911
num_examples: 22917
- name: validation
num_bytes: 633993380.6289911
num_examples: 22917
download_size: 3932220410
dataset_size: 6339850812.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
bigscience-data/roots_indic-hi_wikisource | ---
language: hi
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-hi_wikisource
# wikisource_filtered
- Dataset uid: `wikisource_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.6306 % of total
- 12.7884 % of fr
- 19.8886 % of indic-bn
- 20.9966 % of indic-ta
- 2.3478 % of ar
- 4.7068 % of indic-hi
- 18.0998 % of indic-te
- 1.7155 % of es
- 19.4800 % of indic-kn
- 9.1737 % of indic-ml
- 17.1771 % of indic-mr
- 17.1870 % of indic-gu
- 70.3687 % of indic-as
- 1.0165 % of pt
- 7.8642 % of indic-pa
- 1.3501 % of vi
- 4.9411 % of indic-or
- 0.5307 % of ca
- 2.3593 % of id
- 1.5928 % of eu
### BigScience processing steps
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- remove_wiki_mojibake
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
|
jorgeortizfuentes/mc4_es_cl | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 33474281363.716232
num_examples: 8698099
- name: validation
num_bytes: 33072762.075400714
num_examples: 8582
download_size: 15903346209
dataset_size: 33507354125.791634
---
# Dataset Card for "mc4_es_cl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Circularmachines/batch_indexing_machine_230529_002 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 156369319.0
num_examples: 720
download_size: 156379644
dataset_size: 156369319.0
---
# Dataset Card for "batch_indexing_machine_230529_002"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davanstrien/autotrain-data-test-news-class | Invalid username or password. |
legacy107/newsqa-retrieved-ce-chunk-100 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: key
dtype: string
- name: labels
list:
- name: end
sequence: int64
- name: start
sequence: int64
- name: document_id
dtype: int64
- name: retrieved_context
dtype: string
splits:
- name: train
num_bytes: 506360128
num_examples: 69960
- name: validation
num_bytes: 31115876
num_examples: 4200
- name: test
num_bytes: 30314274
num_examples: 4212
download_size: 80627687
dataset_size: 567790278
---
# Dataset Card for "newsqa-retrieved-ce-chunk-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_pankajmathur__model_007 | ---
pretty_name: Evaluation run of pankajmathur/model_007
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [pankajmathur/model_007](https://huggingface.co/pankajmathur/model_007) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_pankajmathur__model_007\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-10-09T02:03:09.335068](https://huggingface.co/datasets/open-llm-leaderboard/details_pankajmathur__model_007/blob/main/results_2023-10-09T02-03-09.335068.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6901502879968988,\n\
\ \"acc_stderr\": 0.031344534847114004,\n \"acc_norm\": 0.6939037892141556,\n\
\ \"acc_norm_stderr\": 0.03131458982120537,\n \"mc1\": 0.44920440636474906,\n\
\ \"mc1_stderr\": 0.01741294198611531,\n \"mc2\": 0.6312306236860621,\n\
\ \"mc2_stderr\": 0.014945471343395618\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6749146757679181,\n \"acc_stderr\": 0.01368814730972912,\n\
\ \"acc_norm\": 0.7107508532423208,\n \"acc_norm_stderr\": 0.013250012579393441\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6908982274447322,\n\
\ \"acc_stderr\": 0.004611787665905346,\n \"acc_norm\": 0.8765186217884884,\n\
\ \"acc_norm_stderr\": 0.003283165867631372\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939098,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939098\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8157894736842105,\n \"acc_stderr\": 0.0315469804508223,\n\
\ \"acc_norm\": 0.8157894736842105,\n \"acc_norm_stderr\": 0.0315469804508223\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n\
\ \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7320754716981132,\n \"acc_stderr\": 0.027257260322494845,\n\
\ \"acc_norm\": 0.7320754716981132,\n \"acc_norm_stderr\": 0.027257260322494845\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8125,\n\
\ \"acc_stderr\": 0.032639560491693344,\n \"acc_norm\": 0.8125,\n\
\ \"acc_norm_stderr\": 0.032639560491693344\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\"\
: 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.047840607041056527,\n\
\ \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.047840607041056527\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6680851063829787,\n \"acc_stderr\": 0.030783736757745657,\n\
\ \"acc_norm\": 0.6680851063829787,\n \"acc_norm_stderr\": 0.030783736757745657\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n\
\ \"acc_stderr\": 0.046570472605949625,\n \"acc_norm\": 0.4298245614035088,\n\
\ \"acc_norm_stderr\": 0.046570472605949625\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6068965517241379,\n \"acc_stderr\": 0.0407032901370707,\n\
\ \"acc_norm\": 0.6068965517241379,\n \"acc_norm_stderr\": 0.0407032901370707\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.455026455026455,\n \"acc_stderr\": 0.025646928361049398,\n \"\
acc_norm\": 0.455026455026455,\n \"acc_norm_stderr\": 0.025646928361049398\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677173,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677173\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \
\ \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8129032258064516,\n \"acc_stderr\": 0.022185710092252252,\n \"\
acc_norm\": 0.8129032258064516,\n \"acc_norm_stderr\": 0.022185710092252252\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5369458128078818,\n \"acc_stderr\": 0.035083705204426656,\n \"\
acc_norm\": 0.5369458128078818,\n \"acc_norm_stderr\": 0.035083705204426656\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\"\
: 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8666666666666667,\n \"acc_stderr\": 0.026544435312706467,\n\
\ \"acc_norm\": 0.8666666666666667,\n \"acc_norm_stderr\": 0.026544435312706467\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8888888888888888,\n \"acc_stderr\": 0.02239078763821678,\n \"\
acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.02239078763821678\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9326424870466321,\n \"acc_stderr\": 0.018088393839078894,\n\
\ \"acc_norm\": 0.9326424870466321,\n \"acc_norm_stderr\": 0.018088393839078894\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7102564102564103,\n \"acc_stderr\": 0.023000628243687968,\n\
\ \"acc_norm\": 0.7102564102564103,\n \"acc_norm_stderr\": 0.023000628243687968\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028597,\n \
\ \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028597\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7436974789915967,\n \"acc_stderr\": 0.02835962087053395,\n \
\ \"acc_norm\": 0.7436974789915967,\n \"acc_norm_stderr\": 0.02835962087053395\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4768211920529801,\n \"acc_stderr\": 0.04078093859163083,\n \"\
acc_norm\": 0.4768211920529801,\n \"acc_norm_stderr\": 0.04078093859163083\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8899082568807339,\n \"acc_stderr\": 0.0134199390186812,\n \"acc_norm\"\
: 0.8899082568807339,\n \"acc_norm_stderr\": 0.0134199390186812\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5416666666666666,\n\
\ \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.5416666666666666,\n\
\ \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.9215686274509803,\n \"acc_stderr\": 0.018869514646658925,\n\
\ \"acc_norm\": 0.9215686274509803,\n \"acc_norm_stderr\": 0.018869514646658925\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8734177215189873,\n \"acc_stderr\": 0.021644195727955173,\n \
\ \"acc_norm\": 0.8734177215189873,\n \"acc_norm_stderr\": 0.021644195727955173\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7982062780269058,\n\
\ \"acc_stderr\": 0.026936111912802273,\n \"acc_norm\": 0.7982062780269058,\n\
\ \"acc_norm_stderr\": 0.026936111912802273\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.816793893129771,\n \"acc_stderr\": 0.03392770926494733,\n\
\ \"acc_norm\": 0.816793893129771,\n \"acc_norm_stderr\": 0.03392770926494733\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8677685950413223,\n \"acc_stderr\": 0.030922788320445815,\n \"\
acc_norm\": 0.8677685950413223,\n \"acc_norm_stderr\": 0.030922788320445815\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n\
\ \"acc_stderr\": 0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n\
\ \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742179,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742179\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822582,\n\
\ \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822582\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9017094017094017,\n\
\ \"acc_stderr\": 0.019503444900757567,\n \"acc_norm\": 0.9017094017094017,\n\
\ \"acc_norm_stderr\": 0.019503444900757567\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8748403575989783,\n\
\ \"acc_stderr\": 0.011832954239305724,\n \"acc_norm\": 0.8748403575989783,\n\
\ \"acc_norm_stderr\": 0.011832954239305724\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7630057803468208,\n \"acc_stderr\": 0.02289408248992599,\n\
\ \"acc_norm\": 0.7630057803468208,\n \"acc_norm_stderr\": 0.02289408248992599\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5497206703910614,\n\
\ \"acc_stderr\": 0.016639615236845817,\n \"acc_norm\": 0.5497206703910614,\n\
\ \"acc_norm_stderr\": 0.016639615236845817\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.02555316999182651,\n\
\ \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.02555316999182651\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7684887459807074,\n\
\ \"acc_stderr\": 0.023956532766639133,\n \"acc_norm\": 0.7684887459807074,\n\
\ \"acc_norm_stderr\": 0.023956532766639133\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.02202136610022019,\n\
\ \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.02202136610022019\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5460992907801419,\n \"acc_stderr\": 0.029700453247291477,\n \
\ \"acc_norm\": 0.5460992907801419,\n \"acc_norm_stderr\": 0.029700453247291477\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.545632333767927,\n\
\ \"acc_stderr\": 0.012716941720734818,\n \"acc_norm\": 0.545632333767927,\n\
\ \"acc_norm_stderr\": 0.012716941720734818\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7316176470588235,\n \"acc_stderr\": 0.026917481224377197,\n\
\ \"acc_norm\": 0.7316176470588235,\n \"acc_norm_stderr\": 0.026917481224377197\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.75,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\"\
: 0.75,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\"\
: {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n\
\ \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n\
\ },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7918367346938775,\n\
\ \"acc_stderr\": 0.025991117672813296,\n \"acc_norm\": 0.7918367346938775,\n\
\ \"acc_norm_stderr\": 0.025991117672813296\n },\n \"harness|hendrycksTest-sociology|5\"\
: {\n \"acc\": 0.8557213930348259,\n \"acc_stderr\": 0.02484575321230604,\n\
\ \"acc_norm\": 0.8557213930348259,\n \"acc_norm_stderr\": 0.02484575321230604\n\
\ },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\":\
\ 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \"acc_norm\": 0.86,\n\
\ \"acc_norm_stderr\": 0.03487350880197769\n },\n \"harness|hendrycksTest-virology|5\"\
: {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n\
\ \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n\
\ },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8654970760233918,\n\
\ \"acc_stderr\": 0.026168221344662297,\n \"acc_norm\": 0.8654970760233918,\n\
\ \"acc_norm_stderr\": 0.026168221344662297\n },\n \"harness|truthfulqa:mc|0\"\
: {\n \"mc1\": 0.44920440636474906,\n \"mc1_stderr\": 0.01741294198611531,\n\
\ \"mc2\": 0.6312306236860621,\n \"mc2_stderr\": 0.014945471343395618\n\
\ }\n}\n```"
repo_url: https://huggingface.co/pankajmathur/model_007
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|arc:challenge|25_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hellaswag|10_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-09T02-03-09.335068.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-09T02-03-09.335068.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-09T02-03-09.335068.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-09T02-03-09.335068.parquet'
- config_name: results
data_files:
- split: 2023_10_09T02_03_09.335068
path:
- results_2023-10-09T02-03-09.335068.parquet
- split: latest
path:
- results_2023-10-09T02-03-09.335068.parquet
---
# Dataset Card for Evaluation run of pankajmathur/model_007
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/pankajmathur/model_007
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [pankajmathur/model_007](https://huggingface.co/pankajmathur/model_007) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_pankajmathur__model_007",
"harness_truthfulqa_mc_0",
	split="latest")
```
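The configuration names listed in this card follow a mechanical transformation of the harness task names (`|`, `:` and `-` become `_`). A small helper sketch for mapping between the two (the function name is our own, not part of the `datasets` library):

```python
def task_to_config_name(task: str) -> str:
    """Map a harness task name, e.g. "harness|hendrycksTest-virology|5",
    to the config_name used in this dataset, e.g.
    "harness_hendrycksTest_virology_5"."""
    return task.replace("|", "_").replace(":", "_").replace("-", "_")
```

For example, `task_to_config_name("harness|truthfulqa:mc|0")` returns `"harness_truthfulqa_mc_0"`, which is the config passed to `load_dataset` above.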
## Latest results
These are the [latest results from run 2023-10-09T02:03:09.335068](https://huggingface.co/datasets/open-llm-leaderboard/details_pankajmathur__model_007/blob/main/results_2023-10-09T02-03-09.335068.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.6901502879968988,
"acc_stderr": 0.031344534847114004,
"acc_norm": 0.6939037892141556,
"acc_norm_stderr": 0.03131458982120537,
"mc1": 0.44920440636474906,
"mc1_stderr": 0.01741294198611531,
"mc2": 0.6312306236860621,
"mc2_stderr": 0.014945471343395618
},
"harness|arc:challenge|25": {
"acc": 0.6749146757679181,
"acc_stderr": 0.01368814730972912,
"acc_norm": 0.7107508532423208,
"acc_norm_stderr": 0.013250012579393441
},
"harness|hellaswag|10": {
"acc": 0.6908982274447322,
"acc_stderr": 0.004611787665905346,
"acc_norm": 0.8765186217884884,
"acc_norm_stderr": 0.003283165867631372
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939098,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939098
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8157894736842105,
"acc_stderr": 0.0315469804508223,
"acc_norm": 0.8157894736842105,
"acc_norm_stderr": 0.0315469804508223
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7320754716981132,
"acc_stderr": 0.027257260322494845,
"acc_norm": 0.7320754716981132,
"acc_norm_stderr": 0.027257260322494845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8125,
"acc_stderr": 0.032639560491693344,
"acc_norm": 0.8125,
"acc_norm_stderr": 0.032639560491693344
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.047840607041056527,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.047840607041056527
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6680851063829787,
"acc_stderr": 0.030783736757745657,
"acc_norm": 0.6680851063829787,
"acc_norm_stderr": 0.030783736757745657
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.046570472605949625,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.046570472605949625
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6068965517241379,
"acc_stderr": 0.0407032901370707,
"acc_norm": 0.6068965517241379,
"acc_norm_stderr": 0.0407032901370707
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.455026455026455,
"acc_stderr": 0.025646928361049398,
"acc_norm": 0.455026455026455,
"acc_norm_stderr": 0.025646928361049398
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677173,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677173
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8129032258064516,
"acc_stderr": 0.022185710092252252,
"acc_norm": 0.8129032258064516,
"acc_norm_stderr": 0.022185710092252252
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8666666666666667,
"acc_stderr": 0.026544435312706467,
"acc_norm": 0.8666666666666667,
"acc_norm_stderr": 0.026544435312706467
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.02239078763821678,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.02239078763821678
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9326424870466321,
"acc_stderr": 0.018088393839078894,
"acc_norm": 0.9326424870466321,
"acc_norm_stderr": 0.018088393839078894
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7102564102564103,
"acc_stderr": 0.023000628243687968,
"acc_norm": 0.7102564102564103,
"acc_norm_stderr": 0.023000628243687968
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028597,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028597
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7436974789915967,
"acc_stderr": 0.02835962087053395,
"acc_norm": 0.7436974789915967,
"acc_norm_stderr": 0.02835962087053395
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4768211920529801,
"acc_stderr": 0.04078093859163083,
"acc_norm": 0.4768211920529801,
"acc_norm_stderr": 0.04078093859163083
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8899082568807339,
"acc_stderr": 0.0134199390186812,
"acc_norm": 0.8899082568807339,
"acc_norm_stderr": 0.0134199390186812
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9215686274509803,
"acc_stderr": 0.018869514646658925,
"acc_norm": 0.9215686274509803,
"acc_norm_stderr": 0.018869514646658925
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8734177215189873,
"acc_stderr": 0.021644195727955173,
"acc_norm": 0.8734177215189873,
"acc_norm_stderr": 0.021644195727955173
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7982062780269058,
"acc_stderr": 0.026936111912802273,
"acc_norm": 0.7982062780269058,
"acc_norm_stderr": 0.026936111912802273
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.816793893129771,
"acc_stderr": 0.03392770926494733,
"acc_norm": 0.816793893129771,
"acc_norm_stderr": 0.03392770926494733
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8677685950413223,
"acc_stderr": 0.030922788320445815,
"acc_norm": 0.8677685950413223,
"acc_norm_stderr": 0.030922788320445815
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.036809181416738807,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.036809181416738807
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742179,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742179
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822582,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822582
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9017094017094017,
"acc_stderr": 0.019503444900757567,
"acc_norm": 0.9017094017094017,
"acc_norm_stderr": 0.019503444900757567
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8748403575989783,
"acc_stderr": 0.011832954239305724,
"acc_norm": 0.8748403575989783,
"acc_norm_stderr": 0.011832954239305724
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7630057803468208,
"acc_stderr": 0.02289408248992599,
"acc_norm": 0.7630057803468208,
"acc_norm_stderr": 0.02289408248992599
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.5497206703910614,
"acc_stderr": 0.016639615236845817,
"acc_norm": 0.5497206703910614,
"acc_norm_stderr": 0.016639615236845817
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.02555316999182651,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.02555316999182651
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7684887459807074,
"acc_stderr": 0.023956532766639133,
"acc_norm": 0.7684887459807074,
"acc_norm_stderr": 0.023956532766639133
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.02202136610022019,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.02202136610022019
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5460992907801419,
"acc_stderr": 0.029700453247291477,
"acc_norm": 0.5460992907801419,
"acc_norm_stderr": 0.029700453247291477
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.545632333767927,
"acc_stderr": 0.012716941720734818,
"acc_norm": 0.545632333767927,
"acc_norm_stderr": 0.012716941720734818
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7316176470588235,
"acc_stderr": 0.026917481224377197,
"acc_norm": 0.7316176470588235,
"acc_norm_stderr": 0.026917481224377197
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.75,
"acc_stderr": 0.01751781884501444,
"acc_norm": 0.75,
"acc_norm_stderr": 0.01751781884501444
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7363636363636363,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.7363636363636363,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7918367346938775,
"acc_stderr": 0.025991117672813296,
"acc_norm": 0.7918367346938775,
"acc_norm_stderr": 0.025991117672813296
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8557213930348259,
"acc_stderr": 0.02484575321230604,
"acc_norm": 0.8557213930348259,
"acc_norm_stderr": 0.02484575321230604
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197769,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197769
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8654970760233918,
"acc_stderr": 0.026168221344662297,
"acc_norm": 0.8654970760233918,
"acc_norm_stderr": 0.026168221344662297
},
"harness|truthfulqa:mc|0": {
"mc1": 0.44920440636474906,
"mc1_stderr": 0.01741294198611531,
"mc2": 0.6312306236860621,
"mc2_stderr": 0.014945471343395618
}
}
```
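The per-task metrics above can also be aggregated by hand. Below is a minimal sketch; the dict literal is a small excerpt of the results above, not the full file:

```python
# Minimal sketch: average the "acc" metric over per-task entries from a
# results dict like the one above. Only a small excerpt is reproduced here.
results = {
    "harness|arc:challenge|25": {"acc": 0.6749146757679181},
    "harness|hellaswag|10": {"acc": 0.6908982274447322},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6444444444444445},
}

def mean_acc(task_results):
    """Arithmetic mean of the 'acc' metric across all task entries."""
    accs = [metrics["acc"] for metrics in task_results.values()]
    return sum(accs) / len(accs)

print(round(mean_acc(results), 4))
```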
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
PGT/artificial-unbalanced-500K | ---
dataset_info:
features:
- name: edge_index
sequence:
sequence: int64
- name: y
sequence: int64
- name: num_nodes
dtype: int64
splits:
- name: train
num_bytes: 2712963616
num_examples: 499986
download_size: 398809184
dataset_size: 2712963616
---
# Dataset Card for "artificial-unbalanced-500K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Theon1130/ROCO_for_training | ---
license: apache-2.0
---
|
fshala/8samples | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 13794666.0
num_examples: 8
download_size: 13733548
dataset_size: 13794666.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bigscience-data/roots_indic-ml_ted_talks_iwslt | ---
language: ml
license: cc-by-nc-nd-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-ml_ted_talks_iwslt
# WIT Ted Talks
- Dataset uid: `ted_talks_iwslt`
### Description
The Web Inventory Talk is a collection of the original TED talks and their translated versions. The translations are available in more than 109+ languages, though the distribution is not uniform.
### Homepage
https://github.com/huggingface/datasets/blob/master/datasets/ted_talks_iwslt/README.md
### Licensing
- open license
- cc-by-nc-nd-4.0: Creative Commons Attribution NonCommercial NoDerivatives 4.0 International
TED makes its collection of video recordings and transcripts of talks available under the Creative Commons BY-NC-ND license. WIT3 acknowledges the authorship of TED talks (BY condition) and does not redistribute transcripts for commercial purposes (NC). As regards the integrity of the work (ND), WIT3 only changes the format of the container, while preserving the original contents. WIT3 aims to support research on human language processing as well as the diffusion of TED Talks!
### Speaker Locations
- Southern Europe
- Italy
### Sizes
- 0.0305 % of total
- 0.0736 % of ar
- 0.2002 % of pt
- 0.0128 % of zh
- 0.2236 % of vi
- 0.0330 % of fr
- 0.0545 % of es
- 0.0122 % of en
- 0.3704 % of id
- 0.0373 % of indic-hi
- 0.0330 % of indic-ta
- 0.1393 % of indic-mr
- 0.0305 % of ca
- 0.1179 % of indic-ur
- 0.0147 % of indic-bn
- 0.0240 % of indic-ml
- 0.0244 % of indic-te
- 0.0503 % of indic-gu
- 0.0211 % of indic-kn
- 0.0274 % of eu
- 0.0023 % of indic-as
- 0.0001 % of indic-pa
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ca
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ur
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-as
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-pa
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
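For illustration, the two simplest filters named above might look like the following. The function bodies are my own sketch, not the actual BigScience pipeline code:

```python
# Illustrative sketch of two of the filters listed above. These bodies are an
# assumption for illustration, not the actual BigScience/OLM pipeline code.
def filter_remove_empty_docs(docs):
    """Drop documents that are empty or whitespace-only."""
    return [d for d in docs if d.strip()]

def filter_small_docs_bytes(docs, min_bytes=300):
    """Drop documents whose UTF-8 encoding is shorter than min_bytes."""
    return [d for d in docs if len(d.encode("utf-8")) >= min_bytes]

docs = ["", "   ", "too short", "x" * 400]
kept = filter_small_docs_bytes(filter_remove_empty_docs(docs))
print(len(kept))
```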
|
lansinuote/gen.5.flower.book | ---
dataset_info:
features:
- name: image
dtype: image
- name: cls
dtype: int64
splits:
- name: train
num_bytes: 105735918.0
num_examples: 2000
download_size: 0
dataset_size: 105735918.0
---
# Dataset Card for "gen.5.flower.book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-computer_security-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 41847
num_examples: 100
download_size: 28432
dataset_size: 41847
---
# Dataset Card for "mmlu-computer_security-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_sst2_negative_concord | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 6956
num_examples: 48
- name: test
num_bytes: 12384
num_examples: 84
- name: train
num_bytes: 165604
num_examples: 1366
download_size: 95983
dataset_size: 184944
---
# Dataset Card for "MULTI_VALUE_sst2_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joshwe/storiesdas2 | ---
dataset_info:
features:
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 45208540
num_examples: 11005
download_size: 7958523
dataset_size: 45208540
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/med_alpaca_standardized_cluster_18_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 3182419
num_examples: 5915
download_size: 1319748
dataset_size: 3182419
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_18_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Janekmanek/Aje | ---
license: agpl-3.0
---
|
deboramachadoandrade/sft_dataset_rlaif | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8091
num_examples: 5
download_size: 16567
dataset_size: 8091
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AWeirdDev/confucius | ---
dataset_info:
features:
- name: chapter
dtype: string
- name: content
dtype: string
- name: translation
dtype: string
splits:
- name: train
num_bytes: 163063
num_examples: 503
download_size: 100365
dataset_size: 163063
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- translation
- text-generation
language:
- zh
tags:
- chinese
- analect
- confucius
- language
pretty_name: Confucius Analects
size_categories:
- n<1K
---
# confucius
Confucius for real. See all [analects](https://ctext.org/analects).
```python
from datasets import load_dataset
dataset = load_dataset("AWeirdDev/confucius")
```
## Format
```python
{
"chapter": "學而", # Chapter name
"content": "子曰:「學而時習之,不亦說乎?…", # Content
"translation": "孔子說:「經常學習,不也喜悅嗎?…" # Translated (zh-TW)
}
```
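As a usage sketch, rows in the Format above can be grouped by `chapter` once loaded. The records below are stand-in examples, not actual dataset contents:

```python
from collections import defaultdict

# Usage sketch: group rows (shaped like the Format above) by chapter.
# These records are stand-ins, not actual dataset contents.
rows = [
    {"chapter": "學而", "content": "…", "translation": "…"},
    {"chapter": "為政", "content": "…", "translation": "…"},
    {"chapter": "學而", "content": "…", "translation": "…"},
]

by_chapter = defaultdict(list)
for row in rows:
    by_chapter[row["chapter"]].append(row)

print({chapter: len(entries) for chapter, entries in by_chapter.items()})
```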
<div align="center">
<figure>
<img
src="https://cdn-uploads.huggingface.co/production/uploads/640739e3a5e2ff2832ead08b/Ks3e4yFB5v9jN4F19rQ6v.png"
alt="Confucius lore"
width="300"
/>
<figcaption>Confucius, confused.</figcaption>
</figure>
</div> |
ikezoe/alpha-test | ---
license: afl-3.0
---
|
CyberHarem/leberecht_maass_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of leberecht_maass/レーベレヒト・マース/Z1 (Azur Lane)
This is the dataset of leberecht_maass/レーベレヒト・マース/Z1 (Azur Lane), containing 88 images and their tags.
The core tags of this character are `purple_eyes, hair_between_eyes, bangs, hat, white_hair, beret, long_hair, side_ponytail, black_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 88 | 97.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leberecht_maass_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 88 | 59.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leberecht_maass_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 204 | 126.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leberecht_maass_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 88 | 88.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leberecht_maass_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 204 | 173.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leberecht_maass_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/leberecht_maass_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, bare_shoulders, black_jacket, black_necktie, black_shorts, collared_shirt, looking_at_viewer, sleeveless_shirt, solo, white_shirt, blush, iron_cross, long_sleeves, midriff, navel, open_mouth, short_shorts, sitting, white_gloves, :d, black_socks, kneehighs, sidelocks, armband, black_sleeves, braid, detached_sleeves, off_shoulder, shadow, simple_background, white_belt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | black_jacket | black_necktie | black_shorts | collared_shirt | looking_at_viewer | sleeveless_shirt | solo | white_shirt | blush | iron_cross | long_sleeves | midriff | navel | open_mouth | short_shorts | sitting | white_gloves | :d | black_socks | kneehighs | sidelocks | armband | black_sleeves | braid | detached_sleeves | off_shoulder | shadow | simple_background | white_belt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:---------------|:----------------|:---------------|:-----------------|:--------------------|:-------------------|:-------|:--------------|:--------|:-------------|:---------------|:----------|:--------|:-------------|:---------------|:----------|:---------------|:-----|:--------------|:------------|:------------|:----------|:----------------|:--------|:-------------------|:---------------|:---------|:--------------------|:-------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
fathyshalab/massive_social-de | ---
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28548
num_examples: 391
- name: validation
num_bytes: 4886
num_examples: 68
- name: test
num_bytes: 7331
num_examples: 106
download_size: 25046
dataset_size: 40765
---
# Dataset Card for "massive_social-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_TheBloke__GPlatty-30B-SuperHOT-8K-fp16 | ---
pretty_name: Evaluation run of TheBloke/GPlatty-30B-SuperHOT-8K-fp16
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheBloke/GPlatty-30B-SuperHOT-8K-fp16](https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-fp16)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__GPlatty-30B-SuperHOT-8K-fp16\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-08-01T15:51:23.628970](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__GPlatty-30B-SuperHOT-8K-fp16/blob/main/results_2023-08-01T15%3A51%3A23.628970.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24941704039386783,\n\
\ \"acc_stderr\": 0.0314384194357432,\n \"acc_norm\": 0.2512238671780757,\n\
\ \"acc_norm_stderr\": 0.03145763914734606,\n \"mc1\": 0.22888616891064872,\n\
\ \"mc1_stderr\": 0.014706994909055027,\n \"mc2\": 0.46272712607124966,\n\
\ \"mc2_stderr\": 0.016702158477967525\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.22696245733788395,\n \"acc_stderr\": 0.012240491536132868,\n\
\ \"acc_norm\": 0.2832764505119454,\n \"acc_norm_stderr\": 0.013167478735134576\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.28450507866958774,\n\
\ \"acc_stderr\": 0.004502563079349398,\n \"acc_norm\": 0.33479386576379205,\n\
\ \"acc_norm_stderr\": 0.0047095388649163105\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2518518518518518,\n\
\ \"acc_stderr\": 0.037498507091740206,\n \"acc_norm\": 0.2518518518518518,\n\
\ \"acc_norm_stderr\": 0.037498507091740206\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.19736842105263158,\n \"acc_stderr\": 0.03238981601699397,\n\
\ \"acc_norm\": 0.19736842105263158,\n \"acc_norm_stderr\": 0.03238981601699397\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.31,\n\
\ \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.22641509433962265,\n \"acc_stderr\": 0.02575755989310675,\n\
\ \"acc_norm\": 0.22641509433962265,\n \"acc_norm_stderr\": 0.02575755989310675\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2361111111111111,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.2361111111111111,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909282,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909282\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.039427724440366234,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.039427724440366234\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.20809248554913296,\n\
\ \"acc_stderr\": 0.030952890217749874,\n \"acc_norm\": 0.20809248554913296,\n\
\ \"acc_norm_stderr\": 0.030952890217749874\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808777,\n\
\ \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808777\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2851063829787234,\n \"acc_stderr\": 0.029513196625539355,\n\
\ \"acc_norm\": 0.2851063829787234,\n \"acc_norm_stderr\": 0.029513196625539355\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21052631578947367,\n\
\ \"acc_stderr\": 0.03835153954399421,\n \"acc_norm\": 0.21052631578947367,\n\
\ \"acc_norm_stderr\": 0.03835153954399421\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.27586206896551724,\n \"acc_stderr\": 0.037245636197746325,\n\
\ \"acc_norm\": 0.27586206896551724,\n \"acc_norm_stderr\": 0.037245636197746325\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.21428571428571427,\n \"acc_stderr\": 0.02113285918275444,\n \"\
acc_norm\": 0.21428571428571427,\n \"acc_norm_stderr\": 0.02113285918275444\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n\
\ \"acc_stderr\": 0.04040610178208841,\n \"acc_norm\": 0.2857142857142857,\n\
\ \"acc_norm_stderr\": 0.04040610178208841\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.2645161290322581,\n \"acc_stderr\": 0.025091892378859275,\n \"\
acc_norm\": 0.2645161290322581,\n \"acc_norm_stderr\": 0.025091892378859275\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.22167487684729065,\n \"acc_stderr\": 0.029225575892489624,\n \"\
acc_norm\": 0.22167487684729065,\n \"acc_norm_stderr\": 0.029225575892489624\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720683,\n \"acc_norm\"\
: 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.21717171717171718,\n \"acc_stderr\": 0.029376616484945633,\n \"\
acc_norm\": 0.21717171717171718,\n \"acc_norm_stderr\": 0.029376616484945633\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.18134715025906736,\n \"acc_stderr\": 0.02780703236068609,\n\
\ \"acc_norm\": 0.18134715025906736,\n \"acc_norm_stderr\": 0.02780703236068609\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2717948717948718,\n \"acc_stderr\": 0.022556551010132354,\n\
\ \"acc_norm\": 0.2717948717948718,\n \"acc_norm_stderr\": 0.022556551010132354\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2037037037037037,\n \"acc_stderr\": 0.024556172219141265,\n \
\ \"acc_norm\": 0.2037037037037037,\n \"acc_norm_stderr\": 0.024556172219141265\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.22268907563025211,\n \"acc_stderr\": 0.027025433498882385,\n\
\ \"acc_norm\": 0.22268907563025211,\n \"acc_norm_stderr\": 0.027025433498882385\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.19205298013245034,\n \"acc_stderr\": 0.032162984205936135,\n \"\
acc_norm\": 0.19205298013245034,\n \"acc_norm_stderr\": 0.032162984205936135\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.22568807339449543,\n \"acc_stderr\": 0.017923087667803053,\n \"\
acc_norm\": 0.22568807339449543,\n \"acc_norm_stderr\": 0.017923087667803053\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.26851851851851855,\n \"acc_stderr\": 0.030225226160012397,\n \"\
acc_norm\": 0.26851851851851855,\n \"acc_norm_stderr\": 0.030225226160012397\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.24509803921568626,\n \"acc_stderr\": 0.03019028245350195,\n \"\
acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.03019028245350195\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.2869198312236287,\n \"acc_stderr\": 0.029443773022594693,\n \
\ \"acc_norm\": 0.2869198312236287,\n \"acc_norm_stderr\": 0.029443773022594693\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.2914798206278027,\n\
\ \"acc_stderr\": 0.030500283176545902,\n \"acc_norm\": 0.2914798206278027,\n\
\ \"acc_norm_stderr\": 0.030500283176545902\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2748091603053435,\n \"acc_stderr\": 0.03915345408847835,\n\
\ \"acc_norm\": 0.2748091603053435,\n \"acc_norm_stderr\": 0.03915345408847835\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2727272727272727,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.26851851851851855,\n\
\ \"acc_stderr\": 0.04284467968052191,\n \"acc_norm\": 0.26851851851851855,\n\
\ \"acc_norm_stderr\": 0.04284467968052191\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.26993865030674846,\n \"acc_stderr\": 0.03487825168497892,\n\
\ \"acc_norm\": 0.26993865030674846,\n \"acc_norm_stderr\": 0.03487825168497892\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n\
\ \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n\
\ \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n\
\ \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2863247863247863,\n\
\ \"acc_stderr\": 0.029614323690456648,\n \"acc_norm\": 0.2863247863247863,\n\
\ \"acc_norm_stderr\": 0.029614323690456648\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.3065134099616858,\n\
\ \"acc_stderr\": 0.01648695289304151,\n \"acc_norm\": 0.3065134099616858,\n\
\ \"acc_norm_stderr\": 0.01648695289304151\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.23121387283236994,\n \"acc_stderr\": 0.022698657167855716,\n\
\ \"acc_norm\": 0.23121387283236994,\n \"acc_norm_stderr\": 0.022698657167855716\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24581005586592178,\n\
\ \"acc_stderr\": 0.014400296429225629,\n \"acc_norm\": 0.24581005586592178,\n\
\ \"acc_norm_stderr\": 0.014400296429225629\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.3104575163398693,\n \"acc_stderr\": 0.026493033225145894,\n\
\ \"acc_norm\": 0.3104575163398693,\n \"acc_norm_stderr\": 0.026493033225145894\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.27009646302250806,\n\
\ \"acc_stderr\": 0.025218040373410612,\n \"acc_norm\": 0.27009646302250806,\n\
\ \"acc_norm_stderr\": 0.025218040373410612\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2345679012345679,\n \"acc_stderr\": 0.023576881744005716,\n\
\ \"acc_norm\": 0.2345679012345679,\n \"acc_norm_stderr\": 0.023576881744005716\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.25177304964539005,\n \"acc_stderr\": 0.025892151156709405,\n \
\ \"acc_norm\": 0.25177304964539005,\n \"acc_norm_stderr\": 0.025892151156709405\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.26597131681877445,\n\
\ \"acc_stderr\": 0.011285033165551274,\n \"acc_norm\": 0.26597131681877445,\n\
\ \"acc_norm_stderr\": 0.011285033165551274\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.17647058823529413,\n \"acc_stderr\": 0.02315746830855934,\n\
\ \"acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.02315746830855934\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.27450980392156865,\n \"acc_stderr\": 0.018054027458815198,\n \
\ \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.018054027458815198\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2,\n\
\ \"acc_stderr\": 0.03831305140884601,\n \"acc_norm\": 0.2,\n \
\ \"acc_norm_stderr\": 0.03831305140884601\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.20816326530612245,\n \"acc_stderr\": 0.025991117672813292,\n\
\ \"acc_norm\": 0.20816326530612245,\n \"acc_norm_stderr\": 0.025991117672813292\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.26865671641791045,\n\
\ \"acc_stderr\": 0.03134328358208954,\n \"acc_norm\": 0.26865671641791045,\n\
\ \"acc_norm_stderr\": 0.03134328358208954\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2710843373493976,\n\
\ \"acc_stderr\": 0.03460579907553027,\n \"acc_norm\": 0.2710843373493976,\n\
\ \"acc_norm_stderr\": 0.03460579907553027\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.29239766081871343,\n \"acc_stderr\": 0.034886477134579215,\n\
\ \"acc_norm\": 0.29239766081871343,\n \"acc_norm_stderr\": 0.034886477134579215\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22888616891064872,\n\
\ \"mc1_stderr\": 0.014706994909055027,\n \"mc2\": 0.46272712607124966,\n\
\ \"mc2_stderr\": 0.016702158477967525\n }\n}\n```"
repo_url: https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-fp16
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|arc:challenge|25_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hellaswag|10_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-01T15:51:23.628970.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-01T15:51:23.628970.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-01T15:51:23.628970.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-01T15:51:23.628970.parquet'
- config_name: results
data_files:
- split: 2023_08_01T15_51_23.628970
path:
- results_2023-08-01T15:51:23.628970.parquet
- split: latest
path:
- results_2023-08-01T15:51:23.628970.parquet
---
# Dataset Card for Evaluation run of TheBloke/GPlatty-30B-SuperHOT-8K-fp16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-fp16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/GPlatty-30B-SuperHOT-8K-fp16](https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__GPlatty-30B-SuperHOT-8K-fp16",
"harness_truthfulqa_mc_0",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-08-01T15:51:23.628970](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__GPlatty-30B-SuperHOT-8K-fp16/blob/main/results_2023-08-01T15%3A51%3A23.628970.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.24941704039386783,
"acc_stderr": 0.0314384194357432,
"acc_norm": 0.2512238671780757,
"acc_norm_stderr": 0.03145763914734606,
"mc1": 0.22888616891064872,
"mc1_stderr": 0.014706994909055027,
"mc2": 0.46272712607124966,
"mc2_stderr": 0.016702158477967525
},
"harness|arc:challenge|25": {
"acc": 0.22696245733788395,
"acc_stderr": 0.012240491536132868,
"acc_norm": 0.2832764505119454,
"acc_norm_stderr": 0.013167478735134576
},
"harness|hellaswag|10": {
"acc": 0.28450507866958774,
"acc_stderr": 0.004502563079349398,
"acc_norm": 0.33479386576379205,
"acc_norm_stderr": 0.0047095388649163105
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2518518518518518,
"acc_stderr": 0.037498507091740206,
"acc_norm": 0.2518518518518518,
"acc_norm_stderr": 0.037498507091740206
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.19736842105263158,
"acc_stderr": 0.03238981601699397,
"acc_norm": 0.19736842105263158,
"acc_norm_stderr": 0.03238981601699397
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22641509433962265,
"acc_stderr": 0.02575755989310675,
"acc_norm": 0.22641509433962265,
"acc_norm_stderr": 0.02575755989310675
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2361111111111111,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.2361111111111111,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.19,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.19,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.20809248554913296,
"acc_stderr": 0.030952890217749874,
"acc_norm": 0.20809248554913296,
"acc_norm_stderr": 0.030952890217749874
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808777,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808777
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2851063829787234,
"acc_stderr": 0.029513196625539355,
"acc_norm": 0.2851063829787234,
"acc_norm_stderr": 0.029513196625539355
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.03835153954399421,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.03835153954399421
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.27586206896551724,
"acc_stderr": 0.037245636197746325,
"acc_norm": 0.27586206896551724,
"acc_norm_stderr": 0.037245636197746325
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.21428571428571427,
"acc_stderr": 0.02113285918275444,
"acc_norm": 0.21428571428571427,
"acc_norm_stderr": 0.02113285918275444
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.04040610178208841,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.04040610178208841
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2645161290322581,
"acc_stderr": 0.025091892378859275,
"acc_norm": 0.2645161290322581,
"acc_norm_stderr": 0.025091892378859275
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.22167487684729065,
"acc_stderr": 0.029225575892489624,
"acc_norm": 0.22167487684729065,
"acc_norm_stderr": 0.029225575892489624
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720683,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720683
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.21717171717171718,
"acc_stderr": 0.029376616484945633,
"acc_norm": 0.21717171717171718,
"acc_norm_stderr": 0.029376616484945633
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.18134715025906736,
"acc_stderr": 0.02780703236068609,
"acc_norm": 0.18134715025906736,
"acc_norm_stderr": 0.02780703236068609
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2717948717948718,
"acc_stderr": 0.022556551010132354,
"acc_norm": 0.2717948717948718,
"acc_norm_stderr": 0.022556551010132354
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2037037037037037,
"acc_stderr": 0.024556172219141265,
"acc_norm": 0.2037037037037037,
"acc_norm_stderr": 0.024556172219141265
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.22268907563025211,
"acc_stderr": 0.027025433498882385,
"acc_norm": 0.22268907563025211,
"acc_norm_stderr": 0.027025433498882385
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.19205298013245034,
"acc_stderr": 0.032162984205936135,
"acc_norm": 0.19205298013245034,
"acc_norm_stderr": 0.032162984205936135
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.22568807339449543,
"acc_stderr": 0.017923087667803053,
"acc_norm": 0.22568807339449543,
"acc_norm_stderr": 0.017923087667803053
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.26851851851851855,
"acc_stderr": 0.030225226160012397,
"acc_norm": 0.26851851851851855,
"acc_norm_stderr": 0.030225226160012397
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.03019028245350195,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.03019028245350195
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2869198312236287,
"acc_stderr": 0.029443773022594693,
"acc_norm": 0.2869198312236287,
"acc_norm_stderr": 0.029443773022594693
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.2914798206278027,
"acc_stderr": 0.030500283176545902,
"acc_norm": 0.2914798206278027,
"acc_norm_stderr": 0.030500283176545902
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2748091603053435,
"acc_stderr": 0.03915345408847835,
"acc_norm": 0.2748091603053435,
"acc_norm_stderr": 0.03915345408847835
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2727272727272727,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.2727272727272727,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.26851851851851855,
"acc_stderr": 0.04284467968052191,
"acc_norm": 0.26851851851851855,
"acc_norm_stderr": 0.04284467968052191
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.26993865030674846,
"acc_stderr": 0.03487825168497892,
"acc_norm": 0.26993865030674846,
"acc_norm_stderr": 0.03487825168497892
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3125,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2863247863247863,
"acc_stderr": 0.029614323690456648,
"acc_norm": 0.2863247863247863,
"acc_norm_stderr": 0.029614323690456648
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.3065134099616858,
"acc_stderr": 0.01648695289304151,
"acc_norm": 0.3065134099616858,
"acc_norm_stderr": 0.01648695289304151
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.23121387283236994,
"acc_stderr": 0.022698657167855716,
"acc_norm": 0.23121387283236994,
"acc_norm_stderr": 0.022698657167855716
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24581005586592178,
"acc_stderr": 0.014400296429225629,
"acc_norm": 0.24581005586592178,
"acc_norm_stderr": 0.014400296429225629
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.3104575163398693,
"acc_stderr": 0.026493033225145894,
"acc_norm": 0.3104575163398693,
"acc_norm_stderr": 0.026493033225145894
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.27009646302250806,
"acc_stderr": 0.025218040373410612,
"acc_norm": 0.27009646302250806,
"acc_norm_stderr": 0.025218040373410612
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2345679012345679,
"acc_stderr": 0.023576881744005716,
"acc_norm": 0.2345679012345679,
"acc_norm_stderr": 0.023576881744005716
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.25177304964539005,
"acc_stderr": 0.025892151156709405,
"acc_norm": 0.25177304964539005,
"acc_norm_stderr": 0.025892151156709405
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.26597131681877445,
"acc_stderr": 0.011285033165551274,
"acc_norm": 0.26597131681877445,
"acc_norm_stderr": 0.011285033165551274
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.17647058823529413,
"acc_stderr": 0.02315746830855934,
"acc_norm": 0.17647058823529413,
"acc_norm_stderr": 0.02315746830855934
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.018054027458815198,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.018054027458815198
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.2,
"acc_stderr": 0.03831305140884601,
"acc_norm": 0.2,
"acc_norm_stderr": 0.03831305140884601
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.20816326530612245,
"acc_stderr": 0.025991117672813292,
"acc_norm": 0.20816326530612245,
"acc_norm_stderr": 0.025991117672813292
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.26865671641791045,
"acc_stderr": 0.03134328358208954,
"acc_norm": 0.26865671641791045,
"acc_norm_stderr": 0.03134328358208954
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2710843373493976,
"acc_stderr": 0.03460579907553027,
"acc_norm": 0.2710843373493976,
"acc_norm_stderr": 0.03460579907553027
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.29239766081871343,
"acc_stderr": 0.034886477134579215,
"acc_norm": 0.29239766081871343,
"acc_norm_stderr": 0.034886477134579215
},
"harness|truthfulqa:mc|0": {
"mc1": 0.22888616891064872,
"mc1_stderr": 0.014706994909055027,
"mc2": 0.46272712607124966,
"mc2_stderr": 0.016702158477967525
}
}
```
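As a quick sketch of how these aggregated numbers can be post-processed once loaded into a plain Python dict (the dict below hand-copies just three of the per-task entries from the JSON above, so the printed value is illustrative rather than the full 57-task average):

```python
# Average per-task accuracy from a results dict shaped like the JSON above.
# Only three tasks are copied in for brevity; the "all" entry (already an
# aggregate) is excluded the same way it would be on the full dict.
results = {
    "all": {"acc": 0.24941704039386783},
    "harness|arc:challenge|25": {"acc": 0.22696245733788395},
    "harness|hellaswag|10": {"acc": 0.28450507866958774},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.22},
}

task_accs = [v["acc"] for k, v in results.items() if k != "all"]
mean_acc = sum(task_accs) / len(task_accs)
print(round(mean_acc, 4))  # → 0.2438
```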
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
BangumiBase/mondaijitachigaisekaikarakurusoudesuyo | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Mondaiji-tachi Ga Isekai Kara Kuru Sou Desu Yo?
This is the image base of the bangumi Mondaiji-tachi ga Isekai kara Kuru Sou desu yo?. We detected 19 characters and 1,545 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 61 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 26 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 37 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 55 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 79 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 175 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 151 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 20 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 41 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 19 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 9 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 132 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 68 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 21 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 221 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 35 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 13 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 22 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 360 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
yuvalkirstain/beautiful_interesting_spectacular_photo_model_30000_with_generated_captions | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: width
dtype: int64
- name: height
dtype: int64
- name: pclean
dtype: float64
- name: generated_caption
dtype: string
splits:
- name: train
num_bytes: 120069364.0
num_examples: 228
download_size: 120060100
dataset_size: 120069364.0
---
# Dataset Card for "beautiful_interesting_spectacular_photo_model_30000_with_generated_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-a06941db-151a-4b58-9601-c0f67c181a54-6665 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
CarperAI/pickapic_v1_no_images_training_sfw | ---
license: mit
---
### Dataset Information
This is an SFW, sanitized, prompt-only version of the [PickAPic dataset](https://huggingface.co/datasets/yuvalkirstain/pickapic_v1), with 335,000 prompts and image URLs.
### Citation Information
If you find this work useful, please cite:
```bibtex
@inproceedings{Kirstain2023PickaPicAO,
title={Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation},
author={Yuval Kirstain and Adam Polyak and Uriel Singer and Shahbuland Matiana and Joe Penna and Omer Levy},
year={2023}
}
```
### LICENSE
MIT License
Copyright (c) 2021
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |
BangumiBase/zetsuennotempest | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Zetsuen No Tempest
This is the image base of the bangumi Zetsuen no Tempest. We detected 16 characters and 2,070 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 75 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 405 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 19 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 435 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 15 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 35 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 40 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 124 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 10 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 28 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 402 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 88 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 55 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 40 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 175 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 124 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Murple/mmcrsc | ---
annotations_creators:
- expert-generated
language:
- zh
language_creators:
- crowdsourced
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: MAGICDATA_Mandarin_Chinese_Read_Speech_Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for MMCRSC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MAGICDATA Mandarin Chinese Read Speech Corpus](https://openslr.org/68/)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MAGICDATA Mandarin Chinese Read Speech Corpus was developed by MAGIC DATA Technology Co., Ltd. and is freely published for non-commercial use.
The contents and the corresponding descriptions of the corpus include:
The corpus contains 755 hours of speech data, most of which was recorded on mobile devices.
1080 speakers from different accent areas in China were invited to participate in the recording.
The sentence transcription accuracy is higher than 98%.
Recordings are conducted in a quiet indoor environment.
The database is divided into a training set, a validation set, and a testing set in a ratio of 51:1:2.
Detail information such as speech data coding and speaker information is preserved in the metadata file.
The domain of the recording texts is diverse, including interactive Q&A, music search, SNS messages, home command and control, etc.
Segmented transcripts are also provided.
The corpus aims to support researchers in speech recognition, machine translation, speaker recognition, and other speech-related fields. Therefore, the corpus is totally free for academic use.
The corpus is a subset of a much bigger dataset (the 10,566.9-hour Chinese Mandarin Speech Corpus) which was recorded in the same environment. Please feel free to contact us via business@magicdatatech.com for more details.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
zh-CN
## Dataset Structure
### Data Instances
```python
{
'file': '14_3466_20170826171404.wav',
'audio': {
'path': '14_3466_20170826171404.wav',
'array': array([0., 0., 0., ..., 0., 0., 0.]),
'sampling_rate': 16000
},
'text': '请搜索我附近的超市',
'speaker_id': 143466,
'id': '14_3466_20170826171404.wav'
}
```
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
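As a minimal illustration of this row layout, here is a plain Python dict standing in for one decoded sample (mirroring the instance above; this sketch does not exercise the `datasets` lazy decoding itself):

```python
# One decoded row, mirroring the data instance shown earlier. The real
# "array" field is a NumPy array holding the full waveform; a short list
# stands in for it here.
sample = {
    "file": "14_3466_20170826171404.wav",
    "audio": {
        "path": "14_3466_20170826171404.wav",
        "array": [0.0, 0.0, 0.0],
        "sampling_rate": 16000,
    },
    "text": "请搜索我附近的超市",
    "speaker_id": 143466,
    "id": "14_3466_20170826171404.wav",
}

# Clip duration in seconds = number of samples / sampling rate.
duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
```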
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Please cite the corpus as "Magic Data Technology Co., Ltd., "http://www.imagicdatatech.com/index.php/home/dataopensource/data_info/id/101", 05/2019".
|
thegovind/llamav2-instruct-miyagi | ---
license: openrail
task_categories:
- conversational
language:
- en
tags:
- finance
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
mse5357/amazon-shoe-reviews | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16847665.2
num_examples: 90000
- name: test
num_bytes: 1871962.8
num_examples: 10000
download_size: 11141108
dataset_size: 18719628.0
---
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_TheBloke__gpt4-alpaca-lora_mlp-65B-HF | ---
pretty_name: Evaluation run of TheBloke/gpt4-alpaca-lora_mlp-65B-HF
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheBloke/gpt4-alpaca-lora_mlp-65B-HF](https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-HF)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__gpt4-alpaca-lora_mlp-65B-HF\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-23T07:45:08.272902](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__gpt4-alpaca-lora_mlp-65B-HF/blob/main/results_2023-10-23T07-45-08.272902.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.015625,\n \
\ \"em_stderr\": 0.0012700767094662763,\n \"f1\": 0.09636115771812082,\n\
\ \"f1_stderr\": 0.0019819425315034905,\n \"acc\": 0.5447099133363212,\n\
\ \"acc_stderr\": 0.011752408531897077\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.015625,\n \"em_stderr\": 0.0012700767094662763,\n \
\ \"f1\": 0.09636115771812082,\n \"f1_stderr\": 0.0019819425315034905\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.28278999241849884,\n \
\ \"acc_stderr\": 0.01240502041787362\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8066298342541437,\n \"acc_stderr\": 0.011099796645920533\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-HF
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|arc:challenge|25_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_23T07_45_08.272902
path:
- '**/details_harness|drop|3_2023-10-23T07-45-08.272902.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-23T07-45-08.272902.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_23T07_45_08.272902
path:
- '**/details_harness|gsm8k|5_2023-10-23T07-45-08.272902.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-23T07-45-08.272902.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hellaswag|10_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-25T19:53:38.948593.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-25T19:53:38.948593.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-25T19:53:38.948593.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_23T07_45_08.272902
path:
- '**/details_harness|winogrande|5_2023-10-23T07-45-08.272902.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-23T07-45-08.272902.parquet'
- config_name: results
data_files:
- split: 2023_07_25T19_53_38.948593
path:
- results_2023-07-25T19:53:38.948593.parquet
- split: 2023_10_23T07_45_08.272902
path:
- results_2023-10-23T07-45-08.272902.parquet
- split: latest
path:
- results_2023-10-23T07-45-08.272902.parquet
---
# Dataset Card for Evaluation run of TheBloke/gpt4-alpaca-lora_mlp-65B-HF
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-HF
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/gpt4-alpaca-lora_mlp-65B-HF](https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-HF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__gpt4-alpaca-lora_mlp-65B-HF",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-23T07:45:08.272902](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__gpt4-alpaca-lora_mlp-65B-HF/blob/main/results_2023-10-23T07-45-08.272902.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.015625,
"em_stderr": 0.0012700767094662763,
"f1": 0.09636115771812082,
"f1_stderr": 0.0019819425315034905,
"acc": 0.5447099133363212,
"acc_stderr": 0.011752408531897077
},
"harness|drop|3": {
"em": 0.015625,
"em_stderr": 0.0012700767094662763,
"f1": 0.09636115771812082,
"f1_stderr": 0.0019819425315034905
},
"harness|gsm8k|5": {
"acc": 0.28278999241849884,
"acc_stderr": 0.01240502041787362
},
"harness|winogrande|5": {
"acc": 0.8066298342541437,
"acc_stderr": 0.011099796645920533
}
}
```
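A quick way to inspect the aggregated metrics is to flatten the nested dict above into `(task, metric, value)` rows — a minimal sketch using values copied from the JSON above (not an official loading API):

```python
# Aggregated metrics copied from the results JSON above.
results = {
    "harness|drop|3": {"em": 0.015625, "f1": 0.09636115771812082},
    "harness|gsm8k|5": {"acc": 0.28278999241849884},
    "harness|winogrande|5": {"acc": 0.8066298342541437},
}

# Flatten into (task, metric, value) rows for quick inspection.
rows = [
    (task, metric, value)
    for task, metrics in results.items()
    for metric, value in metrics.items()
]

for task, metric, value in rows:
    print(f"{task:<25} {metric:>3} {value:.4f}")
```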
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Ehraim/PlSequentialLearner | ---
license: apache-2.0
---
|
freshpearYoon/vr_train_free_9 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 7229833194
num_examples: 10000
download_size: 1168147629
dataset_size: 7229833194
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yorai/detect-waste | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': metals_and_plastic
'1': other
'2': non_recyclable
'3': glass
'4': paper
'5': bio
'6': unknown
splits:
- name: train
num_bytes: 14799255261.307
num_examples: 3647
- name: test
num_bytes: 3009820376.0
num_examples: 915
download_size: 3002391644
dataset_size: 17809075637.307
language:
- en
tags:
- climate
pretty_name: detect-waste
size_categories:
- 1K<n<10K
---
# Dataset Card for detect-waste
## Dataset Description
- **Homepage: https://github.com/wimlds-trojmiasto/detect-waste**
### Dataset Summary
AI4Good project for detecting waste in the environment: www.detectwaste.ml.
Our latest results were published in the Waste Management journal, in the article "Deep learning-based waste detection in natural and urban environments".
You can find more technical details in our technical report, "Waste detection in Pomerania: non-profit project for detecting waste in environment".
Did you know that we produce 300 million tons of plastic every year? Only part of it is properly recycled.
The idea of the Detect Waste project is to use Artificial Intelligence to detect plastic waste in the environment. Our solution is applicable to video and photography. Our goal is to use AI for Good.
### Supported Tasks and Leaderboards
Object Detection
### Languages
English
### Data Fields
https://github.com/wimlds-trojmiasto/detect-waste/tree/main/annotations
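The annotations follow a COCO-style layout: each example carries parallel `objects` lists (`id`, `area`, `bbox`, `category`). Below is a minimal sketch of decoding the integer categories back to names; the mapping is taken from this card's YAML header, the sample record is synthetic, and the `[x, y, width, height]` bbox order is an assumption to verify against the linked annotations:

```python
# Category names as declared in this card's YAML header.
ID2LABEL = {
    0: "metals_and_plastic",
    1: "other",
    2: "non_recyclable",
    3: "glass",
    4: "paper",
    5: "bio",
    6: "unknown",
}

def describe_objects(objects):
    """Pair each bounding box with its decoded category name."""
    return [
        (ID2LABEL[cat], bbox)
        for cat, bbox in zip(objects["category"], objects["bbox"])
    ]

# A synthetic record shaped like one example's "objects" field.
# With the real data you would instead do:
#   ds = load_dataset("Yorai/detect-waste", split="train")
#   describe_objects(ds[0]["objects"])
sample = {
    "id": [1, 2],
    "area": [1200, 340],
    "bbox": [[10.0, 20.0, 40.0, 30.0], [100.0, 50.0, 17.0, 20.0]],
    "category": [0, 3],
}
print(describe_objects(sample))
```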
## Dataset Creation
The images are post-processed to remove EXIF metadata and reorient as required. Some images are labelled without the EXIF rotation in mind; those are not rotated at all, but still have their EXIF metadata removed.
### Personal and Sensitive Information
**BEWARE**: this repository was created by a third party and is not affiliated in any way with the original detect-waste creators.
## Considerations for Using the Data
### Licensing Information
https://raw.githubusercontent.com/wimlds-trojmiasto/detect-waste/main/LICENSE |
CyberHarem/stechkin_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of stechkin/スチェッキン/斯捷奇金 (Girls' Frontline)
This is the dataset of stechkin/スチェッキン/斯捷奇金 (Girls' Frontline), containing 33 images and their tags.
The core tags of this character are `blue_eyes, long_hair, black_hair, bangs, breasts, very_long_hair, ahoge, hat, medium_breasts, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 33 | 42.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/stechkin_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 33 | 23.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/stechkin_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 82 | 49.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/stechkin_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 33 | 36.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/stechkin_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 82 | 68.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/stechkin_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/stechkin_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 33 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, gun, simple_background, holding, dress, open_mouth, white_background, full_body, official_alternate_costume, smile, bare_shoulders, elbow_gloves, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | blush | gun | simple_background | holding | dress | open_mouth | white_background | full_body | official_alternate_costume | smile | bare_shoulders | elbow_gloves | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:--------|:------|:--------------------|:----------|:--------|:-------------|:-------------------|:------------|:-----------------------------|:--------|:-----------------|:---------------|:-------------|
| 0 | 33 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
anonymouse03052002/kan-ds-mini | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 49353.04
num_examples: 88
- name: validation
num_bytes: 5608.3
num_examples: 10
download_size: 0
dataset_size: 54961.340000000004
---
# Dataset Card for "kan-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ovelozz/nicdalord | ---
license: openrail
---
|
AlekseyKorshuk/davinci-pairwise-all | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
sequence: string
- name: prompt
dtype: string
- name: api_prompt
dtype: string
splits:
- name: train
num_bytes: 1866932601
num_examples: 47954
download_size: 820468098
dataset_size: 1866932601
---
# Dataset Card for "davinci-pairwise-all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mdeputy/timelapse_embryonic_lung | ---
dataset_info:
features:
- name: ground truth mask
sequence:
sequence:
sequence: float32
- name: normalized brightfield image
sequence:
sequence:
sequence: float32
splits:
- name: timelapse_embryonic_lung
num_bytes: 755220840
num_examples: 30
download_size: 35007127
dataset_size: 755220840
configs:
- config_name: default
data_files:
- split: timelapse_embryonic_lung
path: data/timelapse_embryonic_lung-*
---
|
TinyPixel/tiny-codes | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3766347321
num_examples: 1632309
download_size: 856000484
dataset_size: 3766347321
---
# Dataset Card for "tiny-codes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nielsr/realworldqa | ---
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 678377348.0
num_examples: 765
download_size: 678335845
dataset_size: 678377348.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Dataset from here: https://x.ai/blog/grok-1.5v
Script used to push the dataset:
```python
import json
from datasets import load_dataset
# load annotations
with open('realworldqa/annotations.json') as f:
data = json.load(f)
# create metadata for the dataset
metadata = [{"file_name": i["image"], "question": i["question"], "answer": i["answer"]} for i in data]
# write metadata to json lines file
with open('realworldqa/images/metadata.jsonl', 'w') as outfile:
for entry in metadata:
json.dump(entry, outfile)
outfile.write('\n')
# load image dataset
dataset = load_dataset("imagefolder", data_dir="realworldqa/images")
# push to hub!
dataset.push_to_hub("xai-org/realworldqa")
```
Based on the guide here: https://huggingface.co/docs/datasets/en/image_dataset#object-detection |
tianyang/repo_dedup_sep2023 | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: language
dtype: string
- name: created_at
dtype: timestamp[ns]
- name: license
dtype: string
- name: description
dtype: string
- name: stars
dtype: int64
- name: forks
dtype: int64
- name: url
dtype: string
- name: repo_code
list:
- name: code
dtype: string
- name: path
dtype: string
- name: repo_name
dtype: string
- name: size
dtype: int64
splits:
- name: train
num_bytes: 219555370
num_examples: 1474
download_size: 71458940
dataset_size: 219555370
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "repo_dedup_sep2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jamessyx/PathCap | ---
license: cc-by-nc-2.0
extra_gated_heading: "Access PathCap on Hugging Face"
extra_gated_prompt: "Requests will be processed in 1 business day."
extra_gated_fields:
Country: country
Affiliation: text
Specific date: date_picker
I want to use this dataset for:
type: select
options:
- Research
- Education
- label: Other
value: other
I agree to use this dataset for non-commercial use ONLY: checkbox
I agree to give appropriate cite for the source data: checkbox
---
This is the official Hugging Face repo for the **PathCap** dataset.
## Citation
```
@article{sun2023pathasst,
title={Pathasst: Redefining pathology through generative foundation ai assistant for pathology},
author={Sun, Yuxuan and Zhu, Chenglu and Zheng, Sunyi and Zhang, Kai and Shui, Zhongyi and Yu, Xiaoxuan and Zhao, Yizhi and Li, Honglin and Zhang, Yunlong and Zhao, Ruojia and others},
journal={arXiv preprint arXiv:2305.15072},
year={2023}
}
```
|
alpayariyak/prm800k | ---
language:
- en
---
[From OpenAI](https://github.com/openai/prm800k)
# PRM800K: A Process Supervision Dataset
- [Blog Post](https://openai.com/research/improving-mathematical-reasoning-with-process-supervision)
This repository accompanies the paper [Let's Verify Step by Step](https://openai.com/research/improving-mathematical-reasoning-with-process-supervision) and presents the PRM800K dataset introduced there. PRM800K is a process supervision dataset containing 800,000 step-level correctness labels for model-generated solutions to problems from the [MATH](https://github.com/hendrycks/math) dataset. More information on PRM800K and the project can be found in the paper.
We are releasing the raw labels as well as the instructions we gave labelers during phase 1 and phase 2 of the project. Example labels can be seen in the image below.
<p align="center">
<img src="https://github.com/openai/prm800k/blob/main/prm800k/img/interface.png?raw=true" height="300"/>
</p>
## Data
The data contains our labels formatted as newline-delimited lists of `json` data.
Each line represents 1 full solution sample and can contain many step-level labels. Here is one annotated line:
```javascript
{
// UUID representing a particular labeler.
"labeler": "340d89bc-f5b7-45e9-b272-909ba68ee363",
// The timestamp this trajectory was submitted.
"timestamp": "2023-01-22T04:34:27.052924",
// In phase 2, we split our data collection into generations, using our best
// PRM so far to pick which solutions to score in the next generation.
// In phase 1, this value should always be null.
"generation": 9,
// In each generation, we reserve some solutions for quality control. We serve
// these solutions to every labeler, and check that they agree with our
// gold labels.
"is_quality_control_question": false,
// generation -1 was reserved for a set of 30 questions we served every
// labeler in order to screen for base task performance.
"is_initial_screening_question": false,
// Metadata about the question this solution is a response to.
"question": {
// Text of the MATH problem being solved.
"problem": "What is the greatest common factor of $20 !$ and $200,\\!000$? (Reminder: If $n$ is a positive integer, then $n!$ stands for the product $1\\cdot 2\\cdot 3\\cdot \\cdots \\cdot (n-1)\\cdot n$.)",
// Ground truth solution from the MATH dataset.
"ground_truth_solution": "The prime factorization of $200,000$ is $2^6 \\cdot 5^5$. Then count the number of factors of $2$ and $5$ in $20!$. Since there are $10$ even numbers, there are more than $6$ factors of $2$. There are $4$ factors of $5$. So the greatest common factor is $2^6 \\cdot 5^4=\\boxed{40,\\!000}$.",
// Ground truth answer.
"ground_truth_answer": "40,\\!000",
// The full steps of the model-generated solution. This is only set for
// phase 2 where we pre-generated all solutions that we labeled.
"pre_generated_steps": [
"I want to find the largest positive integer that divides both $20 !$ and $200,\\!000$ evenly.",
"One way to do this is to factor both numbers into prime factors and look for the common ones.",
"I know that $200,\\!000 = 2^5\\cdot 10^4 = 2^9\\cdot 5^4$.",
"To find the prime factorization of $20 !$, I can use the fact that it is the product of all the positive integers from $1$ to $20$.",
"For each prime number $p$ between $1$ and $20$, I can count how many multiples of $p$ are in that range.",
"For example, there are $10$ multiples of $2$ between $1$ and $20$, namely $2, 4, 6, \\dots, 20$.",
"But there are also $5$ multiples of $4$, which is $2^2$, and $2$ multiples of $8$, which is $2^3$, and $1$ multiple of $16$, which is $2^4$.",
"So, the total power of $2$ in $20 !$ is $10 + 5 + 2 + 1 = 18$.",
"Similarly, there are $4$ multiples of $5$, namely $5, 10, 15, 20$, so the power of $5$ in $20 !$ is $4$.",
"There are $6$ multiples of $3$, namely $3, 6, 9, \\dots, 18$, but there are also $2$ multiples of $9$, which is $3^2$, so the power of $3$ in $20 !$ is $6 + 2 = 8$.",
"There are $2$ multiples of $7$, namely $7$ and $14$, so the power of $7$ in $20 !$ is $2$.",
"There are $1$ multiple of each of the other prime numbers $11, 13, 17$, and $19$, so the powers of those primes in $20 !$ are $1$ each.",
"Therefore, the prime factorization of $20 !$ is $2^{18}\\cdot 3^8\\cdot 5^4\\cdot 7^2\\cdot 11\\cdot 13\\cdot 17\\cdot 19$.",
"To find the greatest common factor of $20 !$ and $200,\\!000$, I need to take the lowest power of each common prime factor.",
"The only common prime factors are $2$ and $5$, and the lowest powers are $9$ and $4$, respectively.",
"So, the greatest common factor is $2^9\\cdot 5^4 = 512\\cdot 625 = 320,\\!000$.\n\n# Answer\n\n320,000"
],
// The answer given as the end of the pre-generated solution. We can see
// this solution is incorrect.
"pre_generated_answer": "320,000",
// The score given by our PRM to this solution. This one isn't rated very
// highly!
"pre_generated_verifier_score": 0.010779580529581414
},
// The human data we collected for this solution, containing correctness
// labels for each step of the solution.
"label": {
"steps": [
// Each object here represents labels for one step of the solution.
{
// Each step will contain one or more completions. These are candidate
// steps the model output at this step of the trajectory. In phase 1,
// we frequently collect labels on alternative steps, while in phase 2
// we only collect labels on alternative steps after the first mistake,
// so most completions lists are singletons.
"completions": [
{
// Text of the step.
"text": "I want to find the largest positive integer that divides both $20 !$ and $200,\\!000$ evenly.",
// The rating the labeler gave to this step. Can be -1, 0, or +1.
// This is a 0 because it isn't incorrect, but it does not make
// any progress.
"rating": 0,
// The labeler can flag steps that they don't know how to label.
// This is rarely used.
"flagged": null
}
],
// In phase 1, if all completions were rated -1, we allowed labelers to
// write their own +1 step. This is null for all steps in phase 2.
"human_completion": null,
// The index of the completion "chosen" at this step, or null if the
// human_completion was used. You can reconstruct the solution
// trajectory like:
// [
// step["human_completion"] if step["chosen_completion"] is None
// else step["completions"][step["chosen_completion"]]["text"]
// for step in labeled_solution["label"]["steps"]
// ]
"chosen_completion": 0
},
{
"completions": [
{
"text": "One way to do this is to factor both numbers into prime factors and look for the common ones.",
"rating": 0,
"flagged": null
}
],
"human_completion": null,
"chosen_completion": 0
},
{
// Some steps contain multiple alternative completions, and each one
// gets a rating.
"completions": [
{
"text": "I know that $200,\\!000 = 2^5\\cdot 10^4 = 2^9\\cdot 5^4$.",
"rating": -1,
"flagged": null
},
{
"text": "To factor $20 !$, I can use the fact that every factorial is a multiple of every number less than or equal to it.",
"rating": 0,
"flagged": false
},
{
"text": "I can use a factor tree to find the prime factors of $200,\\!000$: $200,\\!000 = 2^5\\cdot 10^4 = 2^5\\cdot 2^4\\cdot 5^4 = 2^9\\cdot 5^4$.",
"rating": -1,
"flagged": false
},
{
"text": "I can use a factor tree to find the prime factors of $200,\\!000$.",
"rating": 0,
"flagged": false
},
{
"text": "To factor $20 !$, I can use the fact that any factorial is divisible by all the primes less than or equal to the input.",
"rating": 0,
"flagged": false
}
],
"human_completion": null,
"chosen_completion": null
}
],
// Total time in milliseconds spent on labeling this solution.
"total_time": 278270,
// Final result of labeling this solution. Will be one of:
// - "found_error": In phase 2 we stop labeling a solution after the
// first error is found.
// - "solution": We reached a step that concluded in the correct answer
// to the problem.
// - "bad_problem": The labeler reported the problem as broken.
// - "give_up": The labeler was stuck (the problem was taking too long,
// or the instructions were unclear) and moved onto the
// next problem.
"finish_reason": "found_error"
}
}
```
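The reconstruction recipe from the inline comment above can be wrapped into a small helper. This is a sketch, not part of the official release: it assumes `human_completion` is plain text (adapt if it is a dict with a `"text"` field in your copy), and it skips steps where neither a chosen completion nor a human completion exists, as in the `found_error` case:

```python
def reconstruct_trajectory(label):
    """Rebuild the chosen solution steps from a PRM800K label dict."""
    steps = []
    for step in label["steps"]:
        chosen = step["chosen_completion"]
        if chosen is not None:
            steps.append(step["completions"][chosen]["text"])
        elif step["human_completion"] is not None:
            steps.append(step["human_completion"])
        # else: labeling stopped here (e.g. finish_reason == "found_error")
    return steps

# Tiny synthetic label in the same shape as the annotated example above.
label = {
    "steps": [
        {"completions": [{"text": "Step A", "rating": 0, "flagged": None}],
         "human_completion": None, "chosen_completion": 0},
        {"completions": [{"text": "a bad step", "rating": -1, "flagged": None}],
         "human_completion": "Step B (written by the labeler)",
         "chosen_completion": None},
        {"completions": [{"text": "unrated tail", "rating": -1, "flagged": None}],
         "human_completion": None, "chosen_completion": None},
    ]
}
print(reconstruct_trajectory(label))
```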
## Citation
Please use the below BibTeX entry to cite this dataset:
COMING SOON |
nayohan/030_web | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11864271386
num_examples: 5532953
download_size: 6779466232
dataset_size: 11864271386
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Zarxrax/ModernAnimation1080_v2 | ---
license: wtfpl
configs:
- config_name: sample
data_files:
- split: sample
path: "sample.zip"
---
- This dataset, designed for training Single Image Super Resolution models, includes 2200 full-size frames taken from Blu-rays of modern animation across a variety of art styles. Efforts have been made to ensure that all sources were produced at native 1080p or better (or perhaps it would be better to say "Full HD").
- The only processing that has been done was a conversion to RGB, and for letterboxed sources, cropping. Sol Levante was produced at 4k, and has been downscaled to 1080p to match the other sources.
- All frames are at a size of 1920x1080, with the exception of Wolfwalkers and Wish, which have had their letterboxes cropped off.
- The files are named according to the source, making it easy to exclude any sources you don't want. It's designed to be modular, just use what you want.
- You may wish to do further processing such as generating tiles prior to training. You are also expected to generate any LR images yourself.
- All images have been saved as lossless webp to conserve space. My tests show that webp typically decodes just slightly slower than png, so there should not be any significant performance difference if you train from these directly. Of course, feel free to convert to png if you like.
- 23 additional validation images have also been included, one from each source.
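Since LR images are left to you, here is a minimal downscaling sketch. It assumes Pillow is installed; the 4x factor and bicubic filter are illustrative choices, not a recommendation from the dataset author:

```python
from PIL import Image

def make_lr(hr_image, scale=4, resample=Image.BICUBIC):
    """Downscale an HR frame to produce its LR counterpart."""
    w, h = hr_image.size
    return hr_image.resize((w // scale, h // scale), resample)

# Example with a blank 1920x1080 frame; with the real data you would
# iterate over the extracted .webp files instead.
hr = Image.new("RGB", (1920, 1080))
lr = make_lr(hr, scale=4)
print(lr.size)  # (480, 270)
```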
- To download, click on "Files and Versions" and download the file "ModernAnimation1080_v2.zip" |
Cmarti24/downsampled_upsampled | ---
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
sequence: string
- name: annotation_agent
dtype: string
- name: vectors
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 15655201.915976798
num_examples: 37289
- name: test
num_bytes: 6709792.084023202
num_examples: 15982
download_size: 12515554
dataset_size: 22364994.0
---
# Dataset Card for "downsampled_upsampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sijuade/ImageNette | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': 'tench'
'1': 'springer'
          '2': 'cassette_player'
'3': 'chain_saw'
'4': 'church'
'5': 'French_horn'
'6': 'garbage_truck'
'7': 'gas_pump'
          '8': 'golf_ball'
'9': 'parachute'
splits:
- name: train
num_bytes: 1154087527.273
num_examples: 9469
- name: validation
num_bytes: 494394874.125
num_examples: 4425
download_size: 1629620479
dataset_size: 1648482401.398
---
|
simonveitner/MetaMathQA-40K-Filtered | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 10337682
num_examples: 15259
download_size: 4926366
dataset_size: 10337682
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815011 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: dfsj/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: dfsj/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
rishabhjain16/myst_pf_ot50 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 8509570768.06
num_examples: 19332
- name: test
num_bytes: 1447570290.631
num_examples: 3317
download_size: 8974808612
dataset_size: 9957141058.691
---
# Dataset Card for "myst_pf_ot50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anzorq/hf-spaces-descriptions-embeddings | ---
license: mit
dataset_info:
features:
- name: id
dtype: string
- name: description
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 94758018
num_examples: 29718
download_size: 78891306
dataset_size: 94758018
---
# Hugging Face Spaces Descriptions and Embeddings Dataset
I parsed all the available public 🤗 spaces as of May 22, 2023, generated concise descriptions of their functionality, and created embeddings for them.
The descriptions were generated by various LLMs from each space's app file (README.md -> app_file). The embeddings were created using the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) SentenceTransformer model.
The dataset comprises approximately 30,000 spaces that meet specific criteria: having more than 40 lines of code and over 1000 characters in the app file.
The descriptions provide an overview of the spaces and their features.
## Dataset Details
- **Name**: HF Spaces Descriptions and Embeddings
- **Creator**: [anzorq](https://huggingface.co/anzorq)
- **License**: MIT
## Dataset Usage
You can use this dataset for various natural language processing (NLP) tasks such as semantic search, clustering, etc.
## Loading the Dataset
You can load the dataset using the datasets library:
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("anzorq/hf-spaces-descriptions-embeddings")
# Access the train split (the only split in this dataset)
train_split = dataset['train']
```
## Semantic Search Example
Performing a semantic search using the dataset's embeddings:
```python
import torch
from sentence_transformers import SentenceTransformer
from datasets import load_dataset
import numpy as np
# Load the dataset
dataset = load_dataset("anzorq/hf-spaces-descriptions-embeddings")
# Load the SentenceTransformer model
model = SentenceTransformer('all-MiniLM-L6-v2')
# Example query
query = "Removing background from images"
# Encode the query
query_embedding = model.encode([query], convert_to_tensor=True)
# Get the space descriptions and embeddings
descriptions = dataset['train']['description']
embeddings = np.array(dataset['train']['embedding'])
# Calculate cosine similarity (cast the stored float64 embeddings to float32
# to match the query embedding's dtype)
cosine_scores = torch.nn.functional.cosine_similarity(query_embedding, torch.tensor(embeddings, dtype=torch.float32))
# Select the top-k matches
top_k = torch.topk(cosine_scores, k=5)
# Print the top-k results
print("Query:", query)
for idx in top_k.indices:
    print("Space ID:", dataset['train']['id'][idx])
    print("Description:", descriptions[idx])
    print("Score:", cosine_scores[idx].item())
```
## License
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT).
|
sh110495/compressed_arc | ---
dataset_info:
features:
- name: id
sequence: string
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
dtype: int64
- name: candidate_length
sequence: int64
splits:
- name: test
num_bytes: 8012734
num_examples: 1172
download_size: 1491232
dataset_size: 8012734
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
msj19/static_10000_2 | ---
license: apache-2.0
---
|
mask-distilled-one-sec-cv12/chunk_238 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1339053424
num_examples: 262972
download_size: 1363072631
dataset_size: 1339053424
---
# Dataset Card for "chunk_238"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aniketr/hpdv2-test-no-embeds | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: prompt
dtype: string
- name: image_left_id
dtype: string
- name: image_right_id
dtype: string
- name: pref
dtype: int64
- name: ranking_id
dtype: string
splits:
- name: train
num_bytes: 20770830
num_examples: 153000
download_size: 1465804
dataset_size: 20770830
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-to-image
language:
- en
size_categories:
- 100K<n<1M
---
This is a preference dataset constructed from the test split of [HPDv2](https://huggingface.co/datasets/ymhao/HPDv2).
Each row is a unique preference ranking over a single image pair and contains:
1. `user_id`: unique identifier of the annotator
2. `prompt`: text prompt used to generate image candidates
3. `image_left_id`: id of left image in preference pair
4. `image_right_id`: id of right image in preference pair
5. `pref`: image preferred by annotator. 1 for left image, -1 for right image
6. `ranking_id`: unique identifier for the ranking pair |
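The rows described above can be aggregated into per-pair preference statistics. A minimal sketch (column names follow the card; the helper itself is illustrative, not part of the dataset):

```python
from collections import defaultdict

def left_win_rate(rows):
    """Fraction of annotators preferring the left image for each pair.

    Each row mirrors the card's schema: `pref` is 1 (left preferred)
    or -1 (right preferred).
    """
    wins, totals = defaultdict(int), defaultdict(int)
    for r in rows:
        key = (r["image_left_id"], r["image_right_id"])
        totals[key] += 1
        if r["pref"] == 1:
            wins[key] += 1
    return {k: wins[k] / totals[k] for k in totals}

rows = [
    {"image_left_id": "a", "image_right_id": "b", "pref": 1},
    {"image_left_id": "a", "image_right_id": "b", "pref": -1},
]
print(left_win_rate(rows))  # {('a', 'b'): 0.5}
```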
bigbio/minimayosrs |
---
language:
- en
bigbio_language:
- English
license: cc0-1.0
multilinguality: monolingual
bigbio_license_shortname: CC0_1p0
pretty_name: MiniMayoSRS
homepage: https://conservancy.umn.edu/handle/11299/196265
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for MiniMayoSRS
## Dataset Description
- **Homepage:** https://conservancy.umn.edu/handle/11299/196265
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter-annotator agreement was
achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78.
## Citation Information
```
@article{pedersen2007measures,
title={Measures of semantic similarity and relatedness in the biomedical domain},
author={Pedersen, Ted and Pakhomov, Serguei VS and Patwardhan, Siddharth and Chute, Christopher G},
journal={Journal of biomedical informatics},
volume={40},
number={3},
pages={288--299},
year={2007},
publisher={Elsevier}
}
```
|
ShenaoZ/0.0001_idpo_same_3itersn_dataset | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: is_better
dtype: bool
splits:
- name: train_prefs_1
num_bytes: 168081294
num_examples: 20378
- name: test_prefs_1
num_bytes: 16410846
num_examples: 2000
- name: train_prefs_2
num_bytes: 172983228
num_examples: 20378
- name: test_prefs_2
num_bytes: 16854087
num_examples: 2000
download_size: 206699429
dataset_size: 374329455
configs:
- config_name: default
data_files:
- split: train_prefs_1
path: data/train_prefs_1-*
- split: test_prefs_1
path: data/test_prefs_1-*
- split: train_prefs_2
path: data/train_prefs_2-*
- split: test_prefs_2
path: data/test_prefs_2-*
---
# Dataset Card for "0.0001_idpo_same_3itersn_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mber/subset_squadv2_format_date_only_train | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 2651123.3716349658
num_examples: 2852
download_size: 4253248
dataset_size: 2651123.3716349658
---
# Dataset Card for "subset_squadv2_format_date_only_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HEMASENTHIL/NEWDEMO | ---
dataset_info:
features:
- name: English Sentences
dtype: string
- name: Thanglish sentences
dtype: string
- name: Text
dtype: string
splits:
- name: train
num_bytes: 4761.428571428572
num_examples: 5
- name: test
num_bytes: 1904.5714285714287
num_examples: 2
download_size: 19197
dataset_size: 6666.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
yardeny/mlm_test_set_context_len_64 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 506880
num_examples: 1280
download_size: 0
dataset_size: 506880
---
# Dataset Card for "loss_landscape_test_set_context_len_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
InfoCert/faces | ---
license: apache-2.0
---
|
gguichard/wsd_fr_wngt_semcor_translated_aligned_all_v1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: wn_sens
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 919383947.317851
num_examples: 925349
- name: test
num_bytes: 9286746.682149062
num_examples: 9347
download_size: 218724215
dataset_size: 928670694.0
---
# Dataset Card for "wsd_fr_wngt_semcor_translated_aligned_all_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TenzinGayche/t5_predata_nocs | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: file_name
dtype:
audio:
sampling_rate: 16000
- name: uni
dtype: string
- name: wylie
dtype: string
- name: url
dtype: string
- name: dept
dtype: string
- name: timestamp
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 16669393849.934
num_examples: 276501
- name: test
num_bytes: 836705600.272
num_examples: 14564
download_size: 42185941593
dataset_size: 17506099450.206
---
# Dataset Card for "t5_predata_nocs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_sst2_his_he | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 6515
num_examples: 42
- name: test
num_bytes: 17181
num_examples: 102
- name: train
num_bytes: 202559
num_examples: 1660
download_size: 111675
dataset_size: 226255
---
# Dataset Card for "MULTI_VALUE_sst2_his_he"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Locutusque/InstructMix | ---
dataset:
name: InstructiveMix
tagline: A Combined Dataset of Diverse Instructional Content
description: >
InstructiveMix is a comprehensive dataset that brings together various
instructional content from different domains. It combines instructions for
tasks, code, poems, math, essays, medical texts, and more. With a diverse range of
instructional data, this dataset is suitable for a wide range of natural
language processing (NLP) tasks and research.
license: CC-BY-SA-4.0
dataset_creation: '2023-08-02T00:00:00.000Z'
dataset_version: 1.0.0
authors:
- name: Locutusque
email: locutusque.airshipcraft@gmail.com
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
---
**Dataset Summary:**
InstructMix is a comprehensive combined dataset that offers diverse instructional content for a range of tasks. It includes data from various sources, such as code instructions, poems, essays, medical texts, and more. This dataset is designed to support natural language processing (NLP) research, model training, and evaluation across different domains.
**Dataset Contents:**
The dataset contains a collection of instructional data with corresponding inputs and outputs. Each entry has an "Input" field that contains the instructional content, and an "Output" field that represents the corresponding response or completion. Here is a list of the datasets used:
- Locutusque/ColumnedChatCombined
- TokenBender/code_instructions_120k_alpaca_style
- Open-Orca/OpenOrca
- vicgalle/alpaca-gpt4
- ChristophSchuhmann/essays-with-instructions
- checkai/instruction-poems
- pubmed_qa
- BI55/MedText
- nampdn-ai/tiny-codes
- TIGER-Lab/MathInstruct
- garage-bAInd/Open-Platypus
It contains the following two columns:
- Input (string)
- Output (string)
These should hopefully be self-explanatory.
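For illustration, each pair can be flattened into a single training string; the template below is an assumption, not a format prescribed by the dataset:

```python
def to_prompt(example):
    # Join the dataset's two columns with a simple instruction template
    # (the "### ..." markers are an illustrative choice, not part of the data).
    return (
        f"### Instruction:\n{example['Input']}\n\n"
        f"### Response:\n{example['Output']}"
    )

sample = {"Input": "Write a haiku about rain.", "Output": "Soft drops tap the roof."}
print(to_prompt(sample))
```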
**Dataset Composition:**
- Number of samples: 7570315
- Languages: English
**Use Cases:**
The InstructiveMix dataset is suitable for various NLP tasks, including text generation, text completion, translation, summarization, and more. It can be used to train and evaluate language models, code generation models, and other NLP-based applications.
**Dataset Creation:**
The InstructiveMix dataset was created by combining multiple existing datasets with instructional content and adding metadata to facilitate seamless integration. The content spans a diverse set of domains and was sourced from reputable datasets and public sources.
**Acknowledgements:**
I would like to acknowledge the original creators of the datasets used to construct InstructiveMix. Their contributions have enabled the creation of this valuable resource for the NLP community.
**Contact:**
For any questions or inquiries related to the InstructiveMix dataset, please contact me at [locutusque.airshipcraft@gmail.com].
--- |
folkopinion/government-interpellation-qa-swedish | ---
task_categories:
- table-question-answering
language:
- sv
tags:
- government
- politics
- interpellation
pretty_name: Riksdagen Interpellation Debate QA
size_categories:
- 1K<n<10K
--- |
CyberHarem/frost_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of frost/Frost/霜华 (Arknights)
This is the dataset of frost/Frost/霜华 (Arknights), containing 32 images and their tags.
The core tags of this character are `black_hair, hat, short_hair, breasts, large_breasts, black_headwear, blue_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 32 | 37.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/frost_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 32 | 32.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/frost_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 72 | 58.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/frost_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/frost_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, beanie, jacket, solo, looking_at_viewer, black_gloves, long_sleeves, fur_trim, pants, pouch, closed_mouth, knee_pads, outdoors, snow, tactical_clothes, holding_gun, holster, military, mole, rifle |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | beanie | jacket | solo | looking_at_viewer | black_gloves | long_sleeves | fur_trim | pants | pouch | closed_mouth | knee_pads | outdoors | snow | tactical_clothes | holding_gun | holster | military | mole | rifle |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:---------|:-------|:--------------------|:---------------|:---------------|:-----------|:--------|:--------|:---------------|:------------|:-----------|:-------|:-------------------|:--------------|:----------|:-----------|:-------|:--------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
sh0416/humanextension | ---
license: mit
---
Related GitHub repository: https://github.com/sh0416/humanextension |
christinacdl/binary_hate_speech | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
--- |
heliosprime/twitter_dataset_1712932789 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 14444
num_examples: 35
download_size: 10368
dataset_size: 14444
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1712932789"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
frluquba/adv-ele | ---
dataset_info:
features:
- name: ADV
dtype: string
- name: ELE
dtype: string
splits:
- name: train
num_bytes: 430918.56140350876
num_examples: 1732
- name: test
num_bytes: 107978.43859649122
num_examples: 434
download_size: 293295
dataset_size: 538897.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
moritzrippe/231130_submit_button | ---
license: other
license_name: moritzrippe
license_link: LICENSE
---
|
AlanYky/flan-phi-2-800k | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 1228291142
num_examples: 720000
- name: validation
num_bytes: 137023175
num_examples: 80000
download_size: 765837799
dataset_size: 1365314317
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
philschmid/tgi_benchmark_sharegpt | ---
license: apache-2.0
---
|
Vinnyyw/Maitesongs | ---
license: openrail
---
|
Seanxh/twitter_dataset_1713201760 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 115388
num_examples: 270
download_size: 44665
dataset_size: 115388
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/kawasaki_saki_yahariorenoseishunlovecomewamachigatteiru | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Kawasaki Saki (Yahari Ore no Seishun LoveCome wa Machigatte Iru)
This is the dataset of Kawasaki Saki (Yahari Ore no Seishun LoveCome wa Machigatte Iru), containing 128 images and their tags.
The core tags of this character are `ponytail, long_hair, blue_hair, mole, mole_under_eye, purple_eyes, scrunchie`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 128 | 69.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawasaki_saki_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 128 | 58.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawasaki_saki_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 265 | 117.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawasaki_saki_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 128 | 69.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawasaki_saki_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 265 | 135.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawasaki_saki_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kawasaki_saki_yahariorenoseishunlovecomewamachigatteiru',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, bartender, solo, closed_eyes, grey_hair, vest, formal |
| 1 | 20 |  |  |  |  |  | 1girl, solo, anime_coloring, white_shirt, open_mouth, looking_at_viewer, smile |
| 2 | 30 |  |  |  |  |  | 1girl, sobu_high_school_uniform, solo, blazer, shirt, black_jacket |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bartender | solo | closed_eyes | grey_hair | vest | formal | anime_coloring | white_shirt | open_mouth | looking_at_viewer | smile | sobu_high_school_uniform | blazer | shirt | black_jacket |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:-------|:--------------|:------------|:-------|:---------|:-----------------|:--------------|:-------------|:--------------------|:--------|:---------------------------|:---------|:--------|:---------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | |
| 1 | 20 |  |  |  |  |  | X | | X | | | | | X | X | X | X | X | | | | |
| 2 | 30 |  |  |  |  |  | X | | X | | | | | | | | | | X | X | X | X |
|
roa7n/patched_test_p_40_f_UCH_m1_predictions | ---
dataset_info:
features:
- name: id
dtype: string
- name: sequence_str
dtype: string
- name: label
dtype: int64
- name: m1_preds
dtype: float32
splits:
- name: train
num_bytes: 50671867
num_examples: 114052
download_size: 4695593
dataset_size: 50671867
---
# Dataset Card for "patched_test_p_40_f_UCH_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_sst2_invariant_tag_fronted_isnt | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 963
num_examples: 9
download_size: 2847
dataset_size: 963
---
# Dataset Card for "MULTI_VALUE_sst2_invariant_tag_fronted_isnt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Codec-SUPERB/SNIPS | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: audio
dtype: audio
- name: id
dtype: string
splits:
- name: train
num_bytes: 19197922646.0
num_examples: 209344
- name: test
num_bytes: 1035368762.0
num_examples: 11200
- name: valid
num_bytes: 1047359800.0
num_examples: 11200
download_size: 21173943484
dataset_size: 21280651208.0
---
# Dataset Card for "SNIPS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-futin__feed-sen_en_-7dbe88-2245971653 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: facebook/opt-66b
metrics: []
dataset_name: futin/feed
dataset_config: sen_en_
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/feed
* Config: sen_en_
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
Vctr-dev/tagesspiegel-article-metadata | ---
license: mit
---
|
zhengxuanzenwu/fair_glue_stsb | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: float32
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 754791
num_examples: 5749
- name: validation
num_bytes: 108032.0
num_examples: 750
- name: test
num_bytes: 216064
num_examples: 1500
download_size: 735298
dataset_size: 1078887.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
otalorajuand/data_house_museum | ---
configs:
- config_name: data_museum
data_files: "data_museum.csv"
- config_name: data_santuario
data_files: "data_santuario.csv"
--- |
heloisy/cosmos_qa_ptbr | ---
pretty_name: CosmosQA Português
language:
- pt
license:
- cc-by-4.0
source_datasets:
- cosmos_qa
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: contexto
dtype: string
- name: pergunta
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: Label
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 25262
- name: validation
num_examples: 2985
- name: test
num_examples: 6963
task_categories:
- multiple-choice
---
# Cosmos QA Português
This dataset is a Portuguese translation of Cosmos QA, which is originally in English.
The translation was produced automatically with GPT-3.5-turbo, so it may contain errors that went unnoticed in a cursory review.
Use it with care.
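As a minimal sketch, the card's columns (contexto, pergunta, A-D, Label) can be used to recover the correct alternative for an example. This assumes `Label` stores the letter of the correct option ("A"-"D"), which should be verified against the data:

```python
def correct_option(example):
    # Assumes `Label` holds the letter ("A"-"D") of the correct alternative;
    # if it stores a numeric index instead, map it to a letter first.
    return example[example["Label"]]

exemplo = {
    "contexto": "...",
    "pergunta": "...",
    "A": "option 1", "B": "option 2", "C": "option 3", "D": "option 4",
    "Label": "C",
}
print(correct_option(exemplo))  # option 3
```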
# Dataset Card for cosmos_qa
## Licensing Information
The data is distributed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
## Source Data Citation Information
```
@inproceedings{huang-etal-2019-cosmos,
title = "Cosmos {QA}: Machine Reading Comprehension with Contextual Commonsense Reasoning",
author = "Huang, Lifu and
Le Bras, Ronan and
Bhagavatula, Chandra and
Choi, Yejin",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1243",
doi = "10.18653/v1/D19-1243",
pages = "2391--2401",
}
``` |
Circularmachines/batch_indexing_machine_230529_001 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 157818200.0
num_examples: 720
download_size: 157829992
dataset_size: 157818200.0
---
# Dataset Card for "batch_indexing_machine_230529_001"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rishiraj/samvaad-hi-v1 | ---
language:
- en
- hi
license: apache-2.0
task_categories:
- text-generation
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 421602696
num_examples: 91281
- name: test
num_bytes: 46377710
num_examples: 10143
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
100k high-quality conversations in English, Hindi, and Hinglish, curated exclusively with an Indic context. |
vwxyzjn/summarize_from_feedback_oai_preprocessing_pythia-160m_169 | ---
dataset_info:
features:
- name: info
struct:
- name: id
dtype: string
- name: post
dtype: string
- name: title
dtype: string
- name: subreddit
dtype: string
- name: site
dtype: string
- name: article
dtype: string
- name: summaries
list:
- name: text
dtype: string
- name: policy
dtype: string
- name: note
dtype: string
- name: choice
dtype: int32
- name: worker
dtype: string
- name: batch
dtype: string
- name: split
dtype: string
- name: extra
struct:
- name: confidence
dtype: int32
- name: query_token
sequence: int64
- name: query
dtype: string
- name: response0
dtype: string
- name: response0_token
sequence: int64
- name: response0_token_len
dtype: int64
- name: response1
dtype: string
- name: response1_token
sequence: int64
- name: response1_token_len
dtype: int64
- name: response0_policy
dtype: string
- name: response1_policy
dtype: string
- name: policies
dtype: string
- name: query_response0
dtype: string
- name: query_response0_token
sequence: int64
- name: query_response0_token_len
dtype: int64
- name: query_response1
dtype: string
- name: query_response1_token
sequence: int64
- name: query_response1_token_len
dtype: int64
splits:
- name: train
num_bytes: 2210564467
num_examples: 92858
- name: validation
num_bytes: 2054238499
num_examples: 86086
download_size: 271347530
dataset_size: 4264802966
---
# Dataset Card for "summarize_from_feedback_oai_preprocessing_pythia-160m_169"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vtuber-plan/quantdata-10k | ---
license: openrail
---
|