| datasetId | card |
|---|---|
lukebarousse/data_jobs | ---
license: apache-2.0
---
|
Nooha/job_categorization_dataset | ---
dataset_info:
features:
- name: title
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 435029
num_examples: 6539
download_size: 164148
dataset_size: 435029
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713032076 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 12085
num_examples: 28
download_size: 9998
dataset_size: 12085
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713032076"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kilt_tasks | ---
annotations_creators:
- crowdsourced
- found
- machine-generated
language_creators:
- crowdsourced
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
source_datasets:
- extended|natural_questions
- extended|other-aidayago
- extended|other-fever
- extended|other-hotpotqa
- extended|other-trex
- extended|other-triviaqa
- extended|other-wizardsofwikipedia
- extended|other-wned-cweb
- extended|other-wned-wiki
- extended|other-zero-shot-re
- original
task_categories:
- fill-mask
- question-answering
- text-classification
- text-generation
- text-retrieval
- text2text-generation
task_ids:
- abstractive-qa
- dialogue-modeling
- document-retrieval
- entity-linking-retrieval
- extractive-qa
- fact-checking
- fact-checking-retrieval
- open-domain-abstractive-qa
- open-domain-qa
- slot-filling
paperswithcode_id: kilt
pretty_name: KILT
config_names:
- aidayago2
- cweb
- eli5
- fever
- hotpotqa
- nq
- structured_zeroshot
- trex
- triviaqa_support_only
- wned
- wow
dataset_info:
- config_name: aidayago2
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 68943890
num_examples: 18395
- name: validation
num_bytes: 20743172
num_examples: 4784
- name: test
num_bytes: 14210587
num_examples: 4463
download_size: 13419920
dataset_size: 103897649
- config_name: cweb
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: validation
num_bytes: 89819252
num_examples: 5599
- name: test
num_bytes: 99208393
num_examples: 5543
download_size: 32809813
dataset_size: 189027645
- config_name: eli5
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 525554458
num_examples: 272634
- name: validation
num_bytes: 13860033
num_examples: 1507
- name: test
num_bytes: 107092
num_examples: 600
download_size: 329302944
dataset_size: 539521583
- config_name: fever
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 23937486
num_examples: 104966
- name: validation
num_bytes: 3167751
num_examples: 10444
- name: test
num_bytes: 1040116
num_examples: 10100
download_size: 11571038
dataset_size: 28145353
- config_name: hotpotqa
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 33595295
num_examples: 88869
- name: validation
num_bytes: 2371262
num_examples: 5600
- name: test
num_bytes: 887204
num_examples: 5569
download_size: 17914796
dataset_size: 36853761
- config_name: nq
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 30385368
num_examples: 87372
- name: validation
num_bytes: 6190373
num_examples: 2837
- name: test
num_bytes: 333162
num_examples: 1444
download_size: 16535475
dataset_size: 36908903
- config_name: structured_zeroshot
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 47165561
num_examples: 147909
- name: validation
num_bytes: 1612123
num_examples: 3724
- name: test
num_bytes: 1140265
num_examples: 4966
download_size: 21038900
dataset_size: 49917949
- config_name: trex
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 1190183022
num_examples: 2284168
- name: validation
num_bytes: 2573444
num_examples: 5000
- name: test
num_bytes: 757470
num_examples: 5000
download_size: 546671157
dataset_size: 1193513936
- config_name: triviaqa_support_only
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 72021515
num_examples: 61844
- name: validation
num_bytes: 6824398
num_examples: 5359
- name: test
num_bytes: 340692
num_examples: 6586
download_size: 31946196
dataset_size: 79186605
- config_name: wned
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: validation
num_bytes: 12659518
num_examples: 3396
- name: test
num_bytes: 13080824
num_examples: 3376
download_size: 3608615
dataset_size: 25740342
- config_name: wow
features:
- name: id
dtype: string
- name: input
dtype: string
- name: meta
struct:
- name: left_context
dtype: string
- name: mention
dtype: string
- name: right_context
dtype: string
- name: partial_evidence
list:
- name: start_paragraph_id
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: title
dtype: string
- name: section
dtype: string
- name: wikipedia_id
dtype: string
- name: meta
struct:
- name: evidence_span
list: string
- name: obj_surface
list: string
- name: sub_surface
list: string
- name: subj_aliases
list: string
- name: template_questions
list: string
- name: output
list:
- name: answer
dtype: string
- name: meta
struct:
- name: score
dtype: int32
- name: provenance
list:
- name: bleu_score
dtype: float32
- name: start_character
dtype: int32
- name: start_paragraph_id
dtype: int32
- name: end_character
dtype: int32
- name: end_paragraph_id
dtype: int32
- name: meta
struct:
- name: fever_page_id
dtype: string
- name: fever_sentence_id
dtype: int32
- name: annotation_id
dtype: string
- name: yes_no_answer
dtype: string
- name: evidence_span
list: string
- name: section
dtype: string
- name: title
dtype: string
- name: wikipedia_id
dtype: string
splits:
- name: train
num_bytes: 41870938
num_examples: 63734
- name: validation
num_bytes: 2021752
num_examples: 3054
- name: test
num_bytes: 1339546
num_examples: 2944
download_size: 25441975
dataset_size: 45232236
configs:
- config_name: aidayago2
data_files:
- split: train
path: aidayago2/train-*
- split: validation
path: aidayago2/validation-*
- split: test
path: aidayago2/test-*
- config_name: cweb
data_files:
- split: validation
path: cweb/validation-*
- split: test
path: cweb/test-*
- config_name: eli5
data_files:
- split: train
path: eli5/train-*
- split: validation
path: eli5/validation-*
- split: test
path: eli5/test-*
- config_name: fever
data_files:
- split: train
path: fever/train-*
- split: validation
path: fever/validation-*
- split: test
path: fever/test-*
- config_name: hotpotqa
data_files:
- split: train
path: hotpotqa/train-*
- split: validation
path: hotpotqa/validation-*
- split: test
path: hotpotqa/test-*
- config_name: nq
data_files:
- split: train
path: nq/train-*
- split: validation
path: nq/validation-*
- split: test
path: nq/test-*
default: true
- config_name: structured_zeroshot
data_files:
- split: train
path: structured_zeroshot/train-*
- split: validation
path: structured_zeroshot/validation-*
- split: test
path: structured_zeroshot/test-*
- config_name: trex
data_files:
- split: train
path: trex/train-*
- split: validation
path: trex/validation-*
- split: test
path: trex/test-*
- config_name: triviaqa_support_only
data_files:
- split: train
path: triviaqa_support_only/train-*
- split: validation
path: triviaqa_support_only/validation-*
- split: test
path: triviaqa_support_only/test-*
- config_name: wned
data_files:
- split: validation
path: wned/validation-*
- split: test
path: wned/test-*
- config_name: wow
data_files:
- split: train
path: wow/train-*
- split: validation
path: wow/validation-*
- split: test
path: wow/test-*
---
# Dataset Card for KILT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai.facebook.com/tools/kilt/
- **Repository:** https://github.com/facebookresearch/KILT
- **Paper:** https://arxiv.org/abs/2009.02252
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/689/leaderboard/
- **Point of Contact:** [Needs More Information]
### Dataset Summary
KILT has been built from 11 datasets representing 5 types of tasks:
- Fact-checking
- Entity linking
- Slot filling
- Open domain QA
- Dialog generation
All these datasets have been grounded in a single pre-processed Wikipedia dump, allowing for fairer and more consistent evaluation as well as enabling new task setups such as multitask and transfer learning with minimal effort. KILT also provides tools to analyze and understand the predictions made by models, as well as the evidence they provide for their predictions.
#### Loading the KILT knowledge source and task data
The original KILT [release](https://github.com/facebookresearch/KILT) only provides question IDs for the TriviaQA task. Using the full dataset requires mapping those back to the TriviaQA questions, which can be done as follows:
```python
from datasets import load_dataset

# Get the pre-processed Wikipedia knowledge source for KILT
kilt_wiki = load_dataset("kilt_wikipedia")

# Get the KILT task datasets
kilt_triviaqa = load_dataset("kilt_tasks", name="triviaqa_support_only")

# Most tasks in KILT already have all required data, but KILT-TriviaQA
# only provides the question IDs, not the questions themselves.
# Thankfully, we can get the original TriviaQA data with:
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')

# The KILT IDs can then be mapped to the TriviaQA questions with:
def add_missing_data(x, trivia_qa_subset, triviaqa_map):
    i = triviaqa_map[x['id']]
    x['input'] = trivia_qa_subset[i]['question']
    x['output']['original_answer'] = trivia_qa_subset[i]['answer']['value']
    return x

for k in ['train', 'validation', 'test']:
    triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(trivia_qa[k]['question_id'])])
    kilt_triviaqa[k] = kilt_triviaqa[k].filter(lambda x: x['id'] in triviaqa_map)
    kilt_triviaqa[k] = kilt_triviaqa[k].map(add_missing_data, fn_kwargs=dict(trivia_qa_subset=trivia_qa[k], triviaqa_map=triviaqa_map))
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
### Data Instances
An example of open-domain QA from the Natural Questions `nq` configuration looks as follows:
```
{'id': '-5004457603684974952',
'input': 'who is playing the halftime show at super bowl 2016',
'meta': {'left_context': '',
'mention': '',
'obj_surface': [],
'partial_evidence': [],
'right_context': '',
'sub_surface': [],
'subj_aliases': [],
'template_questions': []},
'output': [{'answer': 'Coldplay',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 186,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 178,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Beyoncé',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 224,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 217,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Bruno Mars',
'meta': {'score': 0},
'provenance': [{'bleu_score': 1.0,
'end_character': 239,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': 229,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': 'Coldplay with special guest performers Beyoncé and Bruno Mars',
'meta': {'score': 0},
'provenance': []},
{'answer': 'British rock group Coldplay with special guest performers Beyoncé and Bruno Mars',
'meta': {'score': 0},
'provenance': []},
{'answer': '',
'meta': {'score': 0},
'provenance': [{'bleu_score': 0.9657992720603943,
'end_character': 341,
'end_paragraph_id': 1,
'meta': {'annotation_id': '2430977867500315580',
'evidence_span': [],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': 'NONE'},
'section': 'Section::::Abstract.',
'start_character': 0,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]},
{'answer': '',
'meta': {'score': 0},
'provenance': [{'bleu_score': -1.0,
'end_character': -1,
'end_paragraph_id': 1,
'meta': {'annotation_id': '-1',
'evidence_span': ['It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars',
'It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who previously had headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively.',
"The Super Bowl 50 Halftime Show took place on February 7, 2016, at Levi's Stadium in Santa Clara, California as part of Super Bowl 50. It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars",
"The Super Bowl 50 Halftime Show took place on February 7, 2016, at Levi's Stadium in Santa Clara, California as part of Super Bowl 50. It was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars,"],
'fever_page_id': '',
'fever_sentence_id': -1,
'yes_no_answer': ''},
'section': 'Section::::Abstract.',
'start_character': -1,
'start_paragraph_id': 1,
'title': 'Super Bowl 50 halftime show',
'wikipedia_id': '45267196'}]}]}
```
### Data Fields
Examples from all configurations have the following features:
- `input`: a `string` feature representing the query.
- `output`: a `list` of features each containing information for an answer, made up of:
- `answer`: a `string` feature representing a possible answer.
- `provenance`: a `list` of features representing Wikipedia passages that support the `answer`, denoted by:
- `title`: a `string` feature, the title of the Wikipedia article the passage was retrieved from.
- `section`: a `string` feature, the title of the section in the Wikipedia article.
- `wikipedia_id`: a `string` feature, a unique identifier for the Wikipedia article.
- `start_character`: an `int32` feature.
- `start_paragraph_id`: an `int32` feature.
- `end_character`: an `int32` feature.
- `end_paragraph_id`: an `int32` feature.
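Once loaded, each example is a plain Python dict following the fields above; the sketch below shows how the nested `output`/`provenance` structure can be navigated. It uses a hand-built example with illustrative values (not a real download), with field names taken from this card:

```python
# Hand-built example mirroring the schema above; values are illustrative.
example = {
    "id": "-5004457603684974952",
    "input": "who is playing the halftime show at super bowl 2016",
    "output": [
        {
            "answer": "Coldplay",
            "provenance": [
                {
                    "title": "Super Bowl 50 halftime show",
                    "section": "Section::::Abstract.",
                    "wikipedia_id": "45267196",
                    "start_character": 178,
                    "start_paragraph_id": 1,
                    "end_character": 186,
                    "end_paragraph_id": 1,
                }
            ],
        }
    ],
}

# Collect every (answer, supporting article title) pair for the query.
pairs = [
    (out["answer"], prov["title"])
    for out in example["output"]
    for prov in out["provenance"]
]
```

Note that answers with empty `provenance` lists (as in the `nq` instance shown above) simply contribute no pairs here.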
### Data Splits
The configurations have the following splits:
| | Train | Validation | Test |
| ----------- | ----------- | ----------- | ----------- |
| triviaqa | 61844 | 5359 | 6586 |
| fever | 104966 | 10444 | 10100 |
| aidayago2 | 18395 | 4784 | 4463 |
| wned | | 3396 | 3376 |
| cweb | | 5599 | 5543 |
| trex | 2284168 | 5000 | 5000 |
| structured_zeroshot | 147909 | 3724 | 4966 |
| nq | 87372 | 2837 | 1444 |
| hotpotqa | 88869 | 5600 | 5569 |
| eli5 | 272634 | 1507 | 600 |
| wow | 94577 | 3058 | 2944 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{kilt_tasks,
author = {Fabio Petroni and
Aleksandra Piktus and
Angela Fan and
Patrick S. H. Lewis and
Majid Yazdani and
Nicola De Cao and
James Thorne and
Yacine Jernite and
Vladimir Karpukhin and
Jean Maillard and
Vassilis Plachouras and
Tim Rockt{\"{a}}schel and
Sebastian Riedel},
editor = {Kristina Toutanova and
Anna Rumshisky and
Luke Zettlemoyer and
Dilek Hakkani{-}T{\"{u}}r and
Iz Beltagy and
Steven Bethard and
Ryan Cotterell and
Tanmoy Chakraborty and
Yichao Zhou},
title = {{KILT:} a Benchmark for Knowledge Intensive Language Tasks},
booktitle = {Proceedings of the 2021 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language Technologies,
{NAACL-HLT} 2021, Online, June 6-11, 2021},
pages = {2523--2544},
publisher = {Association for Computational Linguistics},
year = {2021},
url = {https://www.aclweb.org/anthology/2021.naacl-main.200/}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset. |
hpprc/jawiki-books | ---
dataset_info:
config_name: '2024-04-01'
features:
- name: id
dtype: int64
- name: title
dtype: string
- name: text
dtype: string
- name: paragraphs
list:
- name: paragraph_id
dtype: int64
- name: tag
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: wikitext
dtype: string
- name: date_created
dtype: string
- name: date_modified
dtype: string
- name: templates
sequence: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 172222533.07899398
num_examples: 13490
download_size: 74095911
dataset_size: 172222533.07899398
configs:
- config_name: '2024-04-01'
data_files:
- split: train
path: 2024-04-01/train-*
license: cc-by-sa-3.0
language:
- ja
--- |
TuringsSolutions/FoolTheGANDataset | ---
license: other
license_name: onlyuseifyoucontactmefirstlicense
license_link: LICENSE
---
|
najju/sign-psl-13b-new | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Gloss
dtype: string
splits:
- name: train
num_bytes: 347110
num_examples: 5446
download_size: 215108
dataset_size: 347110
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jingmiao/PUZZLEQA | ---
language:
- en
license: apache-2.0
---
### Acknowledgements
PUZZLEQA is scraped from the [NPR Sunday Puzzle Official Website](https://www.npr.org/series/4473090/sunday-puzzle) and the [NPR Puzzle Synopsis](https://groups.google.com/g/nprpuzzle),
which is maintained by a group of fans who run a mailing list distributing the questions and answers for each week's puzzle.
The authors of the dataset cleaned the data and created multiple-choice questions based on the questions and answers.
### Creation
The Multiple Choice Dataset is generated from the PUZZLEQA dataset using the following algorithm:
1. Read the fr_big_exp.tsv.tsv file.
2. Group rule-question-answer triples from the same Sunday together (so the rules of each question are the same).
3. For each question, randomly select three other answers from the answers given on the same Sunday, then shuffle the three selected answers together with the correct answer to obtain four choices for the question.
4. Identify the correct answer to the given question as the "gold" answer.

Recent.tsv is the dataset based on the NPR Sunday Puzzle in 2023.
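The grouping-and-shuffling steps above can be sketched as a small Python function. Note that the record field names `sunday`, `question`, and `answer` are hypothetical placeholders for illustration, not the actual column names of fr_big_exp.tsv.tsv:

```python
import random

def make_multiple_choice(records, rng=random.Random(0)):
    """Build 4-way multiple-choice items from rule-question-answer records.

    Each record is assumed to be a dict with 'sunday', 'question', and
    'answer' keys (hypothetical field names). Distractors are drawn from
    other answers given on the same Sunday.
    """
    # Step 2: group records by Sunday.
    by_sunday = {}
    for r in records:
        by_sunday.setdefault(r["sunday"], []).append(r)

    items = []
    for group in by_sunday.values():
        answers = [r["answer"] for r in group]
        for r in group:
            # Step 3: pick three other answers from the same Sunday
            # and shuffle them together with the correct answer.
            distractors = rng.sample(
                [a for a in answers if a != r["answer"]], 3)
            choices = distractors + [r["answer"]]
            rng.shuffle(choices)
            # Step 4: record the correct answer as the "gold" answer.
            items.append({"question": r["question"],
                          "choices": choices,
                          "gold": r["answer"]})
    return items
```

This only works for Sundays with at least four distinct answers, which matches the construction described above.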
### Citation
@inproceedings{zhao2023solving,
title={Solving and Generating NPR Sunday Puzzles with Large Language Models},
author={Jingmiao Zhao and Carolyn Jane Anderson},
year={2023},
eprint={2306.12255},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
awettig/Pile-Gutenberg-0.5B-6K-opt | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6500959920
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1706776857
dataset_size: 6565905612
---
# Dataset Card for "Pile-Gutenberg-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/DTD_parition1_test | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': banded
'1': blotchy
'2': braided
'3': bubbly
'4': bumpy
'5': chequered
'6': cobwebbed
'7': cracked
'8': crosshatched
'9': crystalline
'10': dotted
'11': fibrous
'12': flecked
'13': freckled
'14': frilly
'15': gauzy
'16': grid
'17': grooved
'18': honeycombed
'19': interlaced
'20': knitted
'21': lacelike
'22': lined
'23': marbled
'24': matted
'25': meshed
'26': paisley
'27': perforated
'28': pitted
'29': pleated
'30': polka-dotted
'31': porous
'32': potholed
'33': scaly
'34': smeared
'35': spiralled
'36': sprinkled
'37': stained
'38': stratified
'39': striped
'40': studded
'41': swirly
'42': veined
'43': waffled
'44': woven
'45': wrinkled
'46': zigzagged
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_opt175b_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: clip_tag_ViT_L_14_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_dtd
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
splits:
- name: test
num_bytes: 184279525.4
num_examples: 1880
download_size: 180002375
dataset_size: 184279525.4
---
# Dataset Card for "DTD_parition1_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
erichilarysmithsr/dovichousesimulation | ---
license: ncsa
---
|
CyberHarem/akari_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of akari/鰐渕アカリ/明里 (Blue Archive)
This is the dataset of akari/鰐渕アカリ/明里 (Blue Archive), containing 152 images and their tags.
The core tags of this character are `blonde_hair, long_hair, horns, halo, breasts, ahoge, large_breasts, blue_eyes, braid, hair_between_eyes, demon_horns, symbol-shaped_pupils`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 152 | 208.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akari_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 152 | 180.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akari_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 376 | 369.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akari_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/akari_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 26 |  |  |  |  |  | 1girl, black_gloves, solo, long_sleeves, smile, garter_straps, looking_at_viewer, simple_background, blush, open_mouth, black_thighhighs, white_background, neck_ribbon, red_ribbon |
| 1 | 34 |  |  |  |  |  | 1girl, blush, looking_at_viewer, black_kimono, smile, black_flower, hair_flower, solo, black_gloves, obi, long_sleeves, simple_background, wide_sleeves, collarbone, open_mouth, white_background, black_horns, cleavage, heart-shaped_pupils, floral_print, official_alternate_costume, upper_body |
| 2 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, bare_shoulders, navel, black_bikini, cleavage, smile, solo, very_long_hair, blush, simple_background, closed_mouth, collarbone, heart-shaped_pupils, stomach, string_bikini, white_background, cowboy_shot, holding_food |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | solo | long_sleeves | smile | garter_straps | looking_at_viewer | simple_background | blush | open_mouth | black_thighhighs | white_background | neck_ribbon | red_ribbon | black_kimono | black_flower | hair_flower | obi | wide_sleeves | collarbone | black_horns | cleavage | heart-shaped_pupils | floral_print | official_alternate_costume | upper_body | bare_shoulders | navel | black_bikini | very_long_hair | closed_mouth | stomach | string_bikini | cowboy_shot | holding_food |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:---------------|:--------|:----------------|:--------------------|:--------------------|:--------|:-------------|:-------------------|:-------------------|:--------------|:-------------|:---------------|:---------------|:--------------|:------|:---------------|:-------------|:--------------|:-----------|:----------------------|:---------------|:-----------------------------|:-------------|:-----------------|:--------|:---------------|:-----------------|:---------------|:----------|:----------------|:--------------|:---------------|
| 0 | 26 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 34 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | | X | | X | | X | X | X | | | X | | | | | | | | X | | X | X | | | | X | X | X | X | X | X | X | X | X |
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/a1a25b97 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1337
dataset_size: 182
---
# Dataset Card for "a1a25b97"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shuwang127/patch_sum_db | ---
license: cc-by-4.0
task_categories:
- summarization
language:
- en
tags:
- code
pretty_name: pard_4k
size_categories:
- 1K<n<10K
---
# A dataset for the security patch summarization task.
Currently, the dataset is in the data curation phase; thus, `patch_sum_db` is not yet ready for public release. |
niv-al/sq-babi_nli_basic-coreference | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype:
class_label:
names:
'0': not-entailed
'1': entailed
splits:
- name: train
num_bytes: 209225
num_examples: 1000
- name: validation
num_bytes: 29532
num_examples: 144
- name: test
num_bytes: 30008
num_examples: 144
download_size: 48437
dataset_size: 268765
language:
- sq
---
# Dataset Card for "sq-babi_nli_basic-coreference"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
deetsadi/musiccaps_mel_spectrograms | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 117739026.0
num_examples: 999
download_size: 115824318
dataset_size: 117739026.0
---
# Dataset Card for "musiccaps_mel_spectrograms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bloyal/small-uniref30 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: int64
- name: num
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1067207.070393368
num_examples: 4096
- name: test
num_bytes: 167427.70557437633
num_examples: 640
- name: validation
num_bytes: 169382.9274292743
num_examples: 640
download_size: 1368501
dataset_size: 1404017.7033970184
task_categories:
- fill-mask
size_categories:
- 1K<n<10K
--- |
adamjweintraut/eli5_precomputed_top_slice | ---
dataset_info:
features:
- name: index
dtype: int64
- name: q_id
dtype: string
- name: question
dtype: string
- name: best_answer
dtype: string
- name: all_answers
sequence: string
- name: num_answers
dtype: int64
- name: top_answers
sequence: string
- name: num_top_answers
dtype: int64
- name: context
dtype: string
- name: orig
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 277523684
num_examples: 10000
- name: test
num_bytes: 34508046
num_examples: 1250
- name: validation
num_bytes: 34663133
num_examples: 1250
download_size: 214939984
dataset_size: 346694863
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
jahb57/gpt2_embeddings_BATCH_3 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: last_hidden_state
sequence:
sequence: float32
splits:
- name: train
num_bytes: 18590683446
num_examples: 100000
download_size: 18639362250
dataset_size: 18590683446
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bahidalgo/Me | ---
license: afl-3.0
---
|
AdapterOcean/med_alpaca_standardized_cluster_9_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2443204
num_examples: 8301
download_size: 838833
dataset_size: 2443204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_9_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Pankaj202133/Pankaj | ---
license: cc-by-nc-4.0
---
|
jondurbin/airoboros-3.0 | ---
license: apache-2.0
---
## Overview
This dataset is a continuation of the airoboros datasets, with two main new contributions:
* MathJSON - math questions, prefixed with __"Create a MathJSON solution to the following:"__, for which the model outputs a JSON expression between __`<mathjson>`__ and __`</mathjson>`__ tags that can be parsed and passed to a deterministic library to perform the calculations.
* Anon-contributed RP dataset to enhance multi-turn coherency.
Some of the MathJSON data was adapted from https://huggingface.co/datasets/meta-math/MetaMathQA
## Format
The dataset is now in ShareGPT format, to better accommodate the open-source ecosystem's fine-tuning tooling.
## MathJSON
As we all know, math is really difficult for LLMs, especially floating point operations, trig functions, factorials, and several-digit numbers.
Instead of sending millions and millions of examples to the LLM for fine-tuning, I've created a new category `mathjson` which simply presents the solution to the problem as a JSON string that can be evaluated by traditional computation.
The dataset has roughly 4k samples, as a proof of concept. This can be expanded fairly easily, now that the fine-tuned models have a basic understanding of how to represent the problems.
For example:
```
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
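As an illustration of the parse-and-evaluate step, here is a minimal, hypothetical Python evaluator covering only the handful of operators that appear in examples like the one above; the real compute-engine supports a much richer operator set:

```python
import json
import math
import re

# Hypothetical minimal operator table for illustration only; this is not
# the full MathJSON specification.
OPS = {
    "Add": lambda *args: sum(args),
    "Multiply": lambda *args: math.prod(args),
    "Power": lambda base, exp: base ** exp,
}
CONSTANTS = {"Pi": math.pi}

def evaluate(expr):
    """Recursively evaluate a MathJSON expression."""
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*(evaluate(a) for a in args))
    if isinstance(expr, str):
        return CONSTANTS[expr]
    return expr  # a plain number

def solve(model_output: str) -> float:
    """Extract the JSON between the mathjson tags and evaluate it."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", model_output, re.DOTALL)
    return evaluate(json.loads(match.group(1)))
```

For the example above, this evaluates π · 17.2456², about 934.3.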
Simply extract the JSON string between the mathjson tags, and evaluate it with https://cortexjs.io/compute-engine/ or my ultra-hacky limited-functionality python version of it https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py |
open-llm-leaderboard/details_nlpguy__AlloyIngotNeoX | ---
pretty_name: Evaluation run of nlpguy/AlloyIngotNeoX
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [nlpguy/AlloyIngotNeoX](https://huggingface.co/nlpguy/AlloyIngotNeoX) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_nlpguy__AlloyIngotNeoX\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-15T11:28:14.890311](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__AlloyIngotNeoX/blob/main/results_2024-02-15T11-28-14.890311.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6559618494796027,\n\
\ \"acc_stderr\": 0.03203002675451656,\n \"acc_norm\": 0.6554016452842437,\n\
\ \"acc_norm_stderr\": 0.03269903110679164,\n \"mc1\": 0.6034271725826194,\n\
\ \"mc1_stderr\": 0.017124930942023515,\n \"mc2\": 0.7456883658583785,\n\
\ \"mc2_stderr\": 0.014353519946726465\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7158703071672355,\n \"acc_stderr\": 0.013179442447653886,\n\
\ \"acc_norm\": 0.7431740614334471,\n \"acc_norm_stderr\": 0.0127669237941168\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7193786098386775,\n\
\ \"acc_stderr\": 0.004483845735187827,\n \"acc_norm\": 0.8906592312288388,\n\
\ \"acc_norm_stderr\": 0.0031142850772280335\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6666666666666666,\n\
\ \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.6666666666666666,\n\
\ \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249386,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249386\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"\
acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663454,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663454\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108101,\n\
\ \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108101\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.04697085136647863,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.04697085136647863\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404907,\n \"\
acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404907\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7935483870967742,\n\
\ \"acc_stderr\": 0.023025899617188712,\n \"acc_norm\": 0.7935483870967742,\n\
\ \"acc_norm_stderr\": 0.023025899617188712\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n\
\ \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.03374402644139403,\n\
\ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.03374402644139403\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9119170984455959,\n \"acc_stderr\": 0.02045374660160103,\n\
\ \"acc_norm\": 0.9119170984455959,\n \"acc_norm_stderr\": 0.02045374660160103\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.676923076923077,\n \"acc_stderr\": 0.02371088850197057,\n \
\ \"acc_norm\": 0.676923076923077,\n \"acc_norm_stderr\": 0.02371088850197057\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524565,\n \
\ \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524565\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886793,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886793\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8422018348623853,\n \"acc_stderr\": 0.01563002297009244,\n \"\
acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.01563002297009244\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n \"\
acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"\
acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601443,\n \
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601443\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8263090676883781,\n\
\ \"acc_stderr\": 0.01354741565866226,\n \"acc_norm\": 0.8263090676883781,\n\
\ \"acc_norm_stderr\": 0.01354741565866226\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.023703099525258176,\n\
\ \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.023703099525258176\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4480446927374302,\n\
\ \"acc_stderr\": 0.016631976628930595,\n \"acc_norm\": 0.4480446927374302,\n\
\ \"acc_norm_stderr\": 0.016631976628930595\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242557,\n\
\ \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242557\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\
\ \"acc_stderr\": 0.025670259242188936,\n \"acc_norm\": 0.7138263665594855,\n\
\ \"acc_norm_stderr\": 0.025670259242188936\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n \
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\"\
: 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"\
acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47392438070404175,\n\
\ \"acc_stderr\": 0.012752858346533127,\n \"acc_norm\": 0.47392438070404175,\n\
\ \"acc_norm_stderr\": 0.012752858346533127\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.02806499816704009,\n\
\ \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.02806499816704009\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6748366013071896,\n \"acc_stderr\": 0.018950886770806315,\n \
\ \"acc_norm\": 0.6748366013071896,\n \"acc_norm_stderr\": 0.018950886770806315\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.025870646766169136,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.025870646766169136\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6034271725826194,\n\
\ \"mc1_stderr\": 0.017124930942023515,\n \"mc2\": 0.7456883658583785,\n\
\ \"mc2_stderr\": 0.014353519946726465\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8453038674033149,\n \"acc_stderr\": 0.010163172650433533\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6982562547384382,\n \
\ \"acc_stderr\": 0.012643544762873358\n }\n}\n```"
repo_url: https://huggingface.co/nlpguy/AlloyIngotNeoX
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|arc:challenge|25_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|gsm8k|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hellaswag|10_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-15T11-28-14.890311.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-15T11-28-14.890311.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- '**/details_harness|winogrande|5_2024-02-15T11-28-14.890311.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-15T11-28-14.890311.parquet'
- config_name: results
data_files:
- split: 2024_02_15T11_28_14.890311
path:
- results_2024-02-15T11-28-14.890311.parquet
- split: latest
path:
- results_2024-02-15T11-28-14.890311.parquet
---
# Dataset Card for Evaluation run of nlpguy/AlloyIngotNeoX
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [nlpguy/AlloyIngotNeoX](https://huggingface.co/nlpguy/AlloyIngotNeoX) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_nlpguy__AlloyIngotNeoX",
	"harness_winogrande_5",
	split="latest")
```
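Once loaded, the per-task metrics (shown under "Latest results" below) form a nested dict keyed by task name. A minimal sketch of aggregating the MMLU (hendrycksTest) accuracies from such a dict — the helper `mmlu_average` and the `sample` dict are illustrative, not part of the dataset tooling:

```python
def mmlu_average(results: dict) -> float:
    """Average the `acc` metric over all hendrycksTest-* tasks in a results dict."""
    accs = [v["acc"] for k, v in results.items()
            if k.startswith("harness|hendrycksTest-")]
    return sum(accs) / len(accs)

# Two entries copied from the latest results of this run
sample = {
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6666666666666666},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.7039473684210527},
}
print(round(mmlu_average(sample), 4))  # 0.6853
```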
## Latest results
These are the [latest results from run 2024-02-15T11:28:14.890311](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__AlloyIngotNeoX/blob/main/results_2024-02-15T11-28-14.890311.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one under the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6559618494796027,
"acc_stderr": 0.03203002675451656,
"acc_norm": 0.6554016452842437,
"acc_norm_stderr": 0.03269903110679164,
"mc1": 0.6034271725826194,
"mc1_stderr": 0.017124930942023515,
"mc2": 0.7456883658583785,
"mc2_stderr": 0.014353519946726465
},
"harness|arc:challenge|25": {
"acc": 0.7158703071672355,
"acc_stderr": 0.013179442447653886,
"acc_norm": 0.7431740614334471,
"acc_norm_stderr": 0.0127669237941168
},
"harness|hellaswag|10": {
"acc": 0.7193786098386775,
"acc_stderr": 0.004483845735187827,
"acc_norm": 0.8906592312288388,
"acc_norm_stderr": 0.0031142850772280335
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249386,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249386
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663454,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663454
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108101,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108101
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.04697085136647863,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.04697085136647863
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.025279850397404907,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.025279850397404907
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5,
"acc_stderr": 0.04472135954999579,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04472135954999579
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.023025899617188712,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.023025899617188712
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.03374402644139403,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.03374402644139403
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9119170984455959,
"acc_stderr": 0.02045374660160103,
"acc_norm": 0.9119170984455959,
"acc_norm_stderr": 0.02045374660160103
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.676923076923077,
"acc_stderr": 0.02371088850197057,
"acc_norm": 0.676923076923077,
"acc_norm_stderr": 0.02371088850197057
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524565,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524565
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886793,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886793
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8422018348623853,
"acc_stderr": 0.01563002297009244,
"acc_norm": 0.8422018348623853,
"acc_norm_stderr": 0.01563002297009244
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931792,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931792
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601443,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601443
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8263090676883781,
"acc_stderr": 0.01354741565866226,
"acc_norm": 0.8263090676883781,
"acc_norm_stderr": 0.01354741565866226
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.023703099525258176,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.023703099525258176
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4480446927374302,
"acc_stderr": 0.016631976628930595,
"acc_norm": 0.4480446927374302,
"acc_norm_stderr": 0.016631976628930595
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242557,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242557
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.025670259242188936,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.025670259242188936
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47392438070404175,
"acc_stderr": 0.012752858346533127,
"acc_norm": 0.47392438070404175,
"acc_norm_stderr": 0.012752858346533127
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.02806499816704009,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.02806499816704009
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.018950886770806315,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.018950886770806315
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169136,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169136
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6034271725826194,
"mc1_stderr": 0.017124930942023515,
"mc2": 0.7456883658583785,
"mc2_stderr": 0.014353519946726465
},
"harness|winogrande|5": {
"acc": 0.8453038674033149,
"acc_stderr": 0.010163172650433533
},
"harness|gsm8k|5": {
"acc": 0.6982562547384382,
"acc_stderr": 0.012643544762873358
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
sKushagra/hmm | ---
license: openrail
---
|
yzhuang/autotree_automl_electricity_gosdt_l512_d3_sd2 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float64
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float64
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 5538400000
num_examples: 100000
- name: validation
num_bytes: 553840000
num_examples: 10000
download_size: 1560789998
dataset_size: 6092240000
---
# Dataset Card for "autotree_automl_electricity_gosdt_l512_d3_sd2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alvations/c4p0-x1-ko-en | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: timestamp[us]
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 20460
num_examples: 18
download_size: 20224
dataset_size: 20460
configs:
- config_name: default
data_files:
- split: train
path: e8af6469d8d0c520/train-*
---
|
LasseRogers2111/test | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: start_time
dtype: string
- name: end_time
dtype: string
splits:
- name: train
num_bytes: 1273208.0
num_examples: 6
- name: validation
num_bytes: 1229380.0
num_examples: 5
download_size: 2508853
dataset_size: 2502588.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Sasini/Audio_NER_dataset_2 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: text
sequence: string
- name: ner_tags
sequence: int64
splits:
- name: testtrue
num_bytes: 106107756.6
num_examples: 3100
download_size: 104273091
dataset_size: 106107756.6
configs:
- config_name: default
data_files:
- split: testtrue
path: data/testtrue-*
---
|
GeneralAwareness/Various | ---
license: cc-by-nc-sa-4.0
---
|
sonny-dev/databricks-mini | ---
license: apache-2.0
task_categories:
- question-answering
- text-classification
- summarization
language:
- en
--- |
wndknd/german-law-bgb | ---
license: mit
task_categories:
- text-generation
language:
- de
---
The Bürgerliches Gesetzbuch (German Civil Code), split into its individual paragraphs (§§) for text generation. |
akash140500/failure13 | ---
license: apache-2.0
---
|
senhorsapo/cream | ---
license: openrail
---
|
SINAI/COAR | ---
language:
- es
license: cc-by-nc-sa-4.0
---
# COAR
## Description
The COAR (Corpus of Restaurant Opinions) dataset is designed for research on document-level polarity classification and is focused on the hospitality domain (tourism and hospitality). The corpus consists of 2202 opinions extracted from TripAdvisor, each rated on a five-level opinion-intensity scale (1 = negative to 5 = positive). The number of opinions per class is as follows:
| Rating | 1 | 2 | 3 | 4 | 5 | Total |
| --- |:---:|:---:|:---:|:---:|:---:|:---: |
| #Opinions | 565 | 246 | 188 | 333 | 870 | 2202 |
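Given the class counts above, the corpus is noticeably imbalanced; a quick sketch (computed directly from the table, no dataset download needed) of the majority-class baseline accuracy:

```python
# Class counts from the table above (rating -> number of opinions)
counts = {1: 565, 2: 246, 3: 188, 4: 333, 5: 870}
total = sum(counts.values())                  # 2202 opinions in total
majority_acc = max(counts.values()) / total   # "always predict 5" baseline
```

Any polarity classifier evaluated on this corpus should clearly beat this roughly 40% baseline to be informative.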
## Citation
If you use the corpus in your research, please cite: [Cross-Domain Sentiment Analysis Using Spanish Opinionated Words](https://link.springer.com/chapter/10.1007/978-3-319-07983-7_28).
```
@inproceedings{molina2014cross,
title={Cross-domain sentiment analysis using Spanish opinionated words},
author={Molina-Gonz{\'a}lez, M Dolores and Mart{\'\i}nez-C{\'a}mara, Eugenio and Mart{\'\i}n-Valdivia, M Teresa and Urena-L{\'o}pez, L Alfonso},
booktitle={Natural Language Processing and Information Systems: 19th International Conference on Applications of Natural Language to Information Systems, NLDB 2014, Montpellier, France, June 18-20, 2014. Proceedings 19},
pages={214--219},
year={2014},
organization={Springer}
}
```
# COAR
## Descripción
Corpus de opiniones de restaurantes destinado a la investigación en el ámbito de la clasificación de la polaridad a nivel de documento, y se circunscribe en el dominio de alojamiento hostelero (turismo-hostelera). El corpus está formado por 2202 opiniones extraídas de TripAdvisor, las cuales están catalogadas en una escala de cinco niveles de intensidad de opinión (1 (negativo) - 5 (positivo)). El número de opiniones por clase es:
| Puntuación | 1 | 2 | 3 | 4 | 5 | Total |
| --- |:---:|:---:|:---:|:---:|:---:|:---: |
| #Opiniones | 565 | 246 | 188 | 333 | 870 | 2202 |
# Cita
Si utiliza el corpus en su investigación, por favor cite: [Cross-Domain Sentiment Analysis Using Spanish Opinionated Words](https://link.springer.com/chapter/10.1007/978-3-319-07983-7_28).
```
@inproceedings{molina2014cross,
title={Cross-domain sentiment analysis using Spanish opinionated words},
author={Molina-Gonz{\'a}lez, M Dolores and Mart{\'\i}nez-C{\'a}mara, Eugenio and Mart{\'\i}n-Valdivia, M Teresa and Urena-L{\'o}pez, L Alfonso},
booktitle={Natural Language Processing and Information Systems: 19th International Conference on Applications of Natural Language to Information Systems, NLDB 2014, Montpellier, France, June 18-20, 2014. Proceedings 19},
pages={214--219},
year={2014},
organization={Springer}
}
``` |
yoshitomo-matsubara/srsd-feynman_easy_dummy | ---
pretty_name: SRSD-Feynman (Easy w/ Dummy Variables)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Easy set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to evaluate the performance of Symbolic Regression for Scientific Discovery.
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Easy set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 30 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain:
- **1 dummy variable**: I.12.1, I.12.4, I.12.5, I.18.12, I.25.13, I.47.23
- **2 dummy variables**: I.14.3, I.18.16, I.43.16, II.3.24, II.8.31, II.10.9, II.13.17, II.15.5, II.27.18, III.7.38, III.12.43
- **3 dummy variables**: I.14.4, I.26.2, I.27.6, I.30.5, II.2.42, II.4.23, II.15.4, II.27.16, II.34.11, II.34.29b, II.38.3, II.38.14, III.15.27
More details of these datasets are provided in [the paper and its supplementary material](https://openreview.net/forum?id=qrUdrXsiXX).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: shape (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
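Since each split is plain whitespace-delimited text with the target output in the rightmost column, loading a split needs no special tooling; a minimal stdlib sketch on a hypothetical two-variable miniature (the real files follow the same layout; the ground-truth equation is unpickled separately, e.g. with `pickle.load`, which requires sympy to be installed):

```python
import io

# Hypothetical miniature of one split file: whitespace-delimited text,
# one sample per row, last column = target function output (here y = x1 * x2)
raw = io.StringIO(
    "1.0 2.0 2.0\n"
    "3.0 4.0 12.0\n"
)
rows = [[float(v) for v in line.split()] for line in raw if line.strip()]
X = [r[:-1] for r in rows]  # input variables, shape (num_samples, num_variables)
y = [r[-1] for r in rows]   # rightmost column: ground-truth function output
```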
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., the speed of light, the gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that would confirm the physical phenomenon behind each equation.
In cases where a specific experiment is difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, the ranges are set to be sampled on a log scale spanning about two orders of magnitude (10^2), so that both large and small changes in value are captured as the order of magnitude changes.
Variables such as angles, for which a linear distribution is expected, are set to be sampled uniformly.
In addition, variables that take a specific sign were set to be sampled within that range.
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset assuming typical physics experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods for data-driven scientific discovery.
### Discussion of Biases
Our choice of target equations is based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which is focused on the field of physics.
### Other Known Limitations
Some variables used in our datasets represent counts and should, in principle, be treated as integers.
Due to the limited capacity of 32-bit integers, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[OpenReview](https://openreview.net/forum?id=qrUdrXsiXX)] [[Video](https://www.youtube.com/watch?v=MmeOXuUUAW0)] [[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2024rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={Journal of Data-centric Machine Learning Research},
year={2024},
url={https://openreview.net/forum?id=qrUdrXsiXX}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
|
snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: all_generated_responses
sequence: string
- name: all_rm_scores
sequence: float64
splits:
- name: train_iteration_1
num_bytes: 276734865
num_examples: 19766
- name: test_iteration_1
num_bytes: 13616124
num_examples: 994
- name: train_iteration_2
num_bytes: 313248218
num_examples: 19958
- name: test_iteration_2
num_bytes: 15553468
num_examples: 1000
- name: train_iteration_3
num_bytes: 379805458
num_examples: 19996
- name: test_iteration_3
num_bytes: 19111694
num_examples: 1000
download_size: 485703305
dataset_size: 1018069827
configs:
- config_name: default
data_files:
- split: train_iteration_1
path: data/train_iteration_1-*
- split: test_iteration_1
path: data/test_iteration_1-*
- split: train_iteration_2
path: data/train_iteration_2-*
- split: test_iteration_2
path: data/test_iteration_2-*
- split: train_iteration_3
path: data/train_iteration_3-*
- split: test_iteration_3
path: data/test_iteration_3-*
license: apache-2.0
task_categories:
- text-generation
---
#### Dataset:
This is the data used for training [Snorkel model](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO)
We use ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized); **no external LLM responses used**.
#### Methodology:
1. Generate 5 response variations for each prompt from a subset of 20,000 using the LLM - to start, we used [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
2. Apply [PairRM](https://huggingface.co/llm-blender/PairRM) for response reranking.
3. Update the LLM by applying Direct Preference Optimization (DPO) on the top (chosen) and bottom (rejected) responses.
4. Use this LLM as the base model for the next iteration and use a different set of 20,000 prompts, repeating three times in total.
Please see the model page for more details on the methodology.
Columns:
- prompt: the current prompt
- chosen: the list of messages for the chosen response
- rejected: the list of messages for the rejected response
- all_generated_responses: The 5 generated responses
- all_rm_scores: The 5 corresponding reward model scores
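Given these columns, the chosen and rejected responses correspond to the highest- and lowest-scored generations; a minimal sketch with hypothetical row values (not actual dataset content):

```python
# Hypothetical values for one row of the dataset
all_generated_responses = ["r0", "r1", "r2", "r3", "r4"]
all_rm_scores = [0.12, 0.68, 0.31, 0.91, 0.25]

best = max(range(5), key=all_rm_scores.__getitem__)   # index of top PairRM score
worst = min(range(5), key=all_rm_scores.__getitem__)  # index of bottom score
chosen, rejected = all_generated_responses[best], all_generated_responses[worst]
```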
Splits:
- train/test_iteration_{n}: The dataset used at the n_th iteration. We did 3 iterations in total.
**Training recipe**: This data is formatted to be compatible with the Hugging Face's [Zephyr recipe](https://github.com/huggingface/alignment-handbook/tree/main/recipes/zephyr-7b-beta).
We executed the n-th DPO iteration using the `train/test_iteration_{n}` splits. |
RCC-MSU/collection3 | ---
annotations_creators:
- other
language:
- ru
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: Collection3
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: test
num_bytes: 935298
num_examples: 1922
- name: train
num_bytes: 4380588
num_examples: 9301
- name: validation
num_bytes: 1020711
num_examples: 2153
download_size: 878777
dataset_size: 6336597
---
# Dataset Card for Collection3
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Collection3 homepage](http://labinform.ru/pub/named_entities/index.htm)
- **Repository:** [Needs More Information]
- **Paper:** [Two-stage approach in Russian named entity recognition](https://ieeexplore.ieee.org/document/7584769)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Collection3 is a Russian dataset for named entity recognition annotated with LOC (location), PER (person), and ORG (organization) tags. The dataset is based on the [Persons-1000](http://ai-center.botik.ru/Airec/index.php/ru/collections/28-persons-1000) collection, which originally contained 1000 news documents labeled only with names of persons.
Additional labels were obtained using guidelines similar to MUC-7 and the web-based collaborative annotation tool [Brat](http://brat.nlplab.org/).
Currently the dataset contains 26K annotated named entities (11K persons, 7K locations, and 8K organizations).
Conversion to the IOB2 format and splitting into train, validation and test sets was done by [DeepPavlov team](http://files.deeppavlov.ai/deeppavlov_data/collection3_v2.tar.gz).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Russian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"id": "851",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 1, 2, 0, 0, 0],
"tokens": ['Главный', 'архитектор', 'программного', 'обеспечения', '(', 'ПО', ')', 'американского', 'высокотехнологичного', 'гиганта', 'Microsoft', 'Рэй', 'Оззи', 'покидает', 'компанию', '.']
}
```
### Data Fields
- id: a string feature.
- tokens: a list of string features.
- ner_tags: a list of classification labels (int). Full tagset with indices:
```
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
```
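Decoding the integer `ner_tags` back to label strings only requires inverting this mapping; a minimal sketch using an excerpt of the train instance above:

```python
# Tagset from above, inverted to decode integer ner_tags into label strings
tag2id = {"O": 0, "B-PER": 1, "I-PER": 2, "B-ORG": 3, "I-ORG": 4, "B-LOC": 5, "I-LOC": 6}
id2tag = {v: k for k, v in tag2id.items()}

ner_tags = [0, 0, 3, 1, 2, 0]  # excerpt of the train instance above (Microsoft / Рэй Оззи)
labels = [id2tag[i] for i in ner_tags]
```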
### Data Splits
|name|train|validation|test|
|---------|----:|---------:|---:|
|Collection3|9301|2153|1922|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{mozharova-loukachevitch-2016-two-stage-russian-ner,
author={Mozharova, Valerie and Loukachevitch, Natalia},
booktitle={2016 International FRUCT Conference on Intelligence, Social Media and Web (ISMW FRUCT)},
title={Two-stage approach in Russian named entity recognition},
year={2016},
pages={1-6},
doi={10.1109/FRUCT.2016.7584769}}
``` |
Falah/family_lifestyle_photography | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1039539
num_examples: 10000
download_size: 22749
dataset_size: 1039539
---
# Dataset Card for "family_lifestyle_photography"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bdebayan/my_dataset | ---
license: bsd
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
CSV
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
dcml0714/Heros | ---
license: apache-2.0
size_categories:
- n<1K
---
HEROS is a dataset for comparing sentence cosine similarity among sentences that have high lexical overlap but differ in semantics.
Please refer to the paper, "Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS", for details on how the dataset is constructed and for a comparison of different sentence encoders.
The dataset `heros.tsv` consists of 6 columns: `Original`, `Synonym`, `Antonym`, `Negation`, `Random`, and `Typo`.
The first column, `Original`, contains sentences from the GoEmotions dataset; sentences in the other columns are constructed by replacing some words in the original sentences according to different rules, making up the different subsets of HEROS.
Different subsets in HEROS capture various aspects of semantics.
Comparing the average cosine similarity between minimal pairs in the Synonym and Antonym subsets shows whether replacing a word with an antonym departs further from the original semantics than replacing it with a synonym.
The average cosine similarity between minimal pairs in Negation can tell us how negation affects sentence embedding similarity.
Typos are realistic and occur in everyday text.
While humans can infer the original word from a typo and recover the original meaning of the sentence, it is interesting to see how typos affect a sentence's similarity with its original.
The Random MLM subset shows how similar sentence embeddings can be when two sentences are semantically different but share high lexical overlap.
By comparing the performance of different sentence encoders (SEs) on the subsets of HEROS, we can further understand the traits of each SE.
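The subset comparisons described above reduce to averaging cosine similarities over aligned minimal pairs. A minimal sketch with toy vectors (an assumption for illustration; real use would substitute embeddings produced by an actual sentence encoder):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_pair_similarity(orig_embs, variant_embs):
    """Average cosine similarity over aligned (original, variant) minimal pairs."""
    sims = [cosine(o, v) for o, v in zip(orig_embs, variant_embs)]
    return sum(sims) / len(sims)

# Toy 3-dimensional "embeddings" standing in for encoder outputs.
rng = np.random.default_rng(0)
originals = rng.normal(size=(4, 3))
synonyms = originals + 0.05 * rng.normal(size=(4, 3))   # small perturbation
antonyms = -originals + 0.05 * rng.normal(size=(4, 3))  # near-opposite vectors

# A sensible encoder should score synonym pairs above antonym pairs.
print(mean_pair_similarity(originals, synonyms))
print(mean_pair_similarity(originals, antonyms))
```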
|
shujatoor/test_dataset-meaningful | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 30177
num_examples: 129
download_size: 16560
dataset_size: 30177
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
emozilla/proofpile-test-tokenized | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: tokenized_len
dtype: int64
splits:
- name: test
num_bytes: 1644067664
num_examples: 46251
download_size: 552973486
dataset_size: 1644067664
---
# Dataset Card for "proofpile-test-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/allen_m_sumner_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of allen_m_sumner/アレン・M・サムナー/艾伦·萨姆纳 (Azur Lane)
This is the dataset of allen_m_sumner/アレン・M・サムナー/艾伦·萨姆纳 (Azur Lane), containing 41 images and their tags.
The core tags of this character are `breasts, long_hair, red_eyes, black_hair, bangs, hair_between_eyes, twintails, hair_ornament, medium_breasts, very_long_hair, bow, large_breasts, animal_ears, blue_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 41 | 67.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/allen_m_sumner_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 41 | 35.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/allen_m_sumner_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 104 | 76.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/allen_m_sumner_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 41 | 57.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/allen_m_sumner_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 104 | 113.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/allen_m_sumner_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/allen_m_sumner_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, bare_shoulders, double_bun, off_shoulder, official_alternate_costume, playboy_bunny, rabbit_ears, solo, black_jacket, black_leotard, long_sleeves, looking_at_viewer, open_jacket, fake_animal_ears, smile, hair_bow, underboob_cutout, braided_bun, brown_pantyhose, sitting, ass, tongue_out, bodystocking, closed_mouth, simple_background, sleeves_past_wrists, black_footwear, blush, shoes, white_background |
| 1 | 18 |  |  |  |  |  | looking_at_viewer, underboob_cutout, 1girl, solo, bare_shoulders, two-tone_leotard, off_shoulder, open_coat, black_leotard, open_mouth, skindentation, black_coat, blush, groin, long_sleeves, thigh_strap, badge, cowboy_shot, frilled_leotard, standing, sidelocks, :d, armpits, ass_visible_through_thighs, white_leotard |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | double_bun | off_shoulder | official_alternate_costume | playboy_bunny | rabbit_ears | solo | black_jacket | black_leotard | long_sleeves | looking_at_viewer | open_jacket | fake_animal_ears | smile | hair_bow | underboob_cutout | braided_bun | brown_pantyhose | sitting | ass | tongue_out | bodystocking | closed_mouth | simple_background | sleeves_past_wrists | black_footwear | blush | shoes | white_background | two-tone_leotard | open_coat | open_mouth | skindentation | black_coat | groin | thigh_strap | badge | cowboy_shot | frilled_leotard | standing | sidelocks | :d | armpits | ass_visible_through_thighs | white_leotard |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-------------|:---------------|:-----------------------------|:----------------|:--------------|:-------|:---------------|:----------------|:---------------|:--------------------|:--------------|:-------------------|:--------|:-----------|:-------------------|:--------------|:------------------|:----------|:------|:-------------|:---------------|:---------------|:--------------------|:----------------------|:-----------------|:--------|:--------|:-------------------|:-------------------|:------------|:-------------|:----------------|:-------------|:--------|:--------------|:--------|:--------------|:------------------|:-----------|:------------|:-----|:----------|:-----------------------------|:----------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 1 | 18 |  |  |  |  |  | X | X | | X | | | | X | | X | X | X | | | | | X | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875716 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: csebuetnlp/mT5_multilingual_XLSum
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: csebuetnlp/mT5_multilingual_XLSum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model. |
AIBrainBox/guanaco-llama2-1k | ---
license: apache-2.0
---
|
iamnguyen/cdnc_law_eval | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: citation
dtype: string
- name: content
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 153441
num_examples: 100
download_size: 71332
dataset_size: 153441
---
# Dataset Card for "cdnc_law_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KshitizPandya/GenzTranscribe-hi | ---
task_categories:
- automatic-speech-recognition
language:
- hi
pretty_name: GenzTranscribe-hi
size_categories:
- 10K<n<100K
viewer: True
---
|
chirunder/text_message_transliteration_1k | ---
dataset_info:
features:
- name: text
dtype: string
- name: translations
struct:
- name: chinese
dtype: string
- name: hindi
dtype: string
- name: russian
dtype: string
- name: transliteration
struct:
- name: chinese
dtype: string
- name: hindi
dtype: string
- name: russian
dtype: string
splits:
- name: train
num_bytes: 609895
num_examples: 1000
download_size: 361488
dataset_size: 609895
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "text_message_transliteration_1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FaalSa/f5 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 79710
num_examples: 1
- name: validation
num_bytes: 80190
num_examples: 1
- name: test
num_bytes: 80670
num_examples: 1
download_size: 69957
dataset_size: 240570
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
presencesw/Llama_data_bad | ---
dataset_info:
features:
- name: topic
dtype: string
- name: Evidence
dtype: string
- name: predict
dtype: string
- name: Label
dtype: string
- name: Claim
dtype: string
- name: eval
dtype: int64
splits:
- name: train
num_bytes: 24278461
num_examples: 5781
download_size: 5443107
dataset_size: 24278461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mpasila/DarkViperAU-QA-ChatML | ---
language:
- en
---
This is unfinished, only meant for testing. |
kaleemWaheed/twitter_dataset_1712982744 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 153449
num_examples: 409
download_size: 47466
dataset_size: 153449
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
communityai/ise-uiuc___Magicoder-Evol-Instruct-110K | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 485762091.0
num_examples: 106878
download_size: 222135894
dataset_size: 485762091.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
robji/MYSD | ---
license: unknown
---
|
dmrau/cqadupstack-unix | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 72357
num_examples: 1072
- name: corpus
num_bytes: 46102756
num_examples: 47382
download_size: 24571026
dataset_size: 46175113
---
# Dataset Card for "cqadupstack-unix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Lazycuber__L2-7b-Orca-WVG-Test | ---
pretty_name: Evaluation run of Lazycuber/L2-7b-Orca-WVG-Test
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Lazycuber/L2-7b-Orca-WVG-Test](https://huggingface.co/Lazycuber/L2-7b-Orca-WVG-Test)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Lazycuber__L2-7b-Orca-WVG-Test\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-26T20:44:34.027885](https://huggingface.co/datasets/open-llm-leaderboard/details_Lazycuber__L2-7b-Orca-WVG-Test/blob/main/results_2023-10-26T20-44-34.027885.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002202181208053691,\n\
\ \"em_stderr\": 0.00048005108166191996,\n \"f1\": 0.07443687080536941,\n\
\ \"f1_stderr\": 0.0016342523738966323,\n \"acc\": 0.4119262338489193,\n\
\ \"acc_stderr\": 0.009880953290999535\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.002202181208053691,\n \"em_stderr\": 0.00048005108166191996,\n\
\ \"f1\": 0.07443687080536941,\n \"f1_stderr\": 0.0016342523738966323\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0803639120545868,\n \
\ \"acc_stderr\": 0.007488258573239077\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7434885556432518,\n \"acc_stderr\": 0.01227364800875999\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Lazycuber/L2-7b-Orca-WVG-Test
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|arc:challenge|25_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_26T20_44_34.027885
path:
- '**/details_harness|drop|3_2023-10-26T20-44-34.027885.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-26T20-44-34.027885.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_26T20_44_34.027885
path:
- '**/details_harness|gsm8k|5_2023-10-26T20-44-34.027885.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-26T20-44-34.027885.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hellaswag|10_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T15-39-37.735727.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T15-39-37.735727.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T15-39-37.735727.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_26T20_44_34.027885
path:
- '**/details_harness|winogrande|5_2023-10-26T20-44-34.027885.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-26T20-44-34.027885.parquet'
- config_name: results
data_files:
- split: 2023_10_10T15_39_37.735727
path:
- results_2023-10-10T15-39-37.735727.parquet
- split: 2023_10_26T20_44_34.027885
path:
- results_2023-10-26T20-44-34.027885.parquet
- split: latest
path:
- results_2023-10-26T20-44-34.027885.parquet
---
# Dataset Card for Evaluation run of Lazycuber/L2-7b-Orca-WVG-Test
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Lazycuber/L2-7b-Orca-WVG-Test
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Lazycuber/L2-7b-Orca-WVG-Test](https://huggingface.co/Lazycuber/L2-7b-Orca-WVG-Test) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Lazycuber__L2-7b-Orca-WVG-Test",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-26T20:44:34.027885](https://huggingface.co/datasets/open-llm-leaderboard/details_Lazycuber__L2-7b-Orca-WVG-Test/blob/main/results_2023-10-26T20-44-34.027885.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the "results" and "latest" splits for each eval):
```python
{
"all": {
"em": 0.002202181208053691,
"em_stderr": 0.00048005108166191996,
"f1": 0.07443687080536941,
"f1_stderr": 0.0016342523738966323,
"acc": 0.4119262338489193,
"acc_stderr": 0.009880953290999535
},
"harness|drop|3": {
"em": 0.002202181208053691,
"em_stderr": 0.00048005108166191996,
"f1": 0.07443687080536941,
"f1_stderr": 0.0016342523738966323
},
"harness|gsm8k|5": {
"acc": 0.0803639120545868,
"acc_stderr": 0.007488258573239077
},
"harness|winogrande|5": {
"acc": 0.7434885556432518,
"acc_stderr": 0.01227364800875999
}
}
```
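Once loaded, the per-task metrics can be traversed programmatically. A minimal sketch: the dict literal below simply copies (a truncated subset of) the results shown above rather than fetching anything from the Hub, so it only illustrates the shape of the JSON:

```python
# Mirror of the latest-results JSON shown above (values copied from the card,
# truncated to the per-task entries; nothing is fetched from the Hub).
results = {
    "harness|drop|3": {"em": 0.002202181208053691, "f1": 0.07443687080536941},
    "harness|gsm8k|5": {"acc": 0.0803639120545868},
    "harness|winogrande|5": {"acc": 0.7434885556432518},
}

# Collect the tasks that report an accuracy, then pick the best one.
acc_tasks = {task: m["acc"] for task, m in results.items() if "acc" in m}
best_task = max(acc_tasks, key=acc_tasks.get)
print(best_task)  # harness|winogrande|5
```

The same pattern works on the full JSON file linked above: task names are the top-level keys, and each value is a flat dict of metric name to score.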
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
open-source-metrics/optimum-dependents | ---
license: apache-2.0
pretty_name: optimum metrics
tags:
- github-stars
dataset_info:
features:
- name: name
dtype: string
- name: stars
dtype: int64
- name: forks
dtype: int64
splits:
- name: package
num_bytes: 555
num_examples: 13
- name: repository
num_bytes: 3790
num_examples: 81
download_size: 6617
dataset_size: 4345
---
# optimum metrics
This dataset contains metrics about the huggingface/optimum package.
Number of repositories in the dataset: 19
Number of packages in the dataset: 6
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/optimum/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 0 packages that have more than 1000 stars.
There are 0 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 288
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 114
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 61
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 34
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 24
[bhavsarpratik/easy-transformers](https://github.com/bhavsarpratik/easy-transformers): 10
*Repository*
[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 288
[marqo-ai/marqo](https://github.com/marqo-ai/marqo): 265
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 114
[graphcore/tutorials](https://github.com/graphcore/tutorials): 65
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 61
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 34
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 24
[philschmid/optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization): 20
[philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 15
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 15
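The threshold counts above can be reproduced from the listed top-10 data. A minimal sketch, with the repository star counts copied from the list above (not fetched from GitHub):

```python
# Star counts for the top-10 repositories, copied from the list above.
repo_stars = {
    "SeldonIO/MLServer": 288,
    "marqo-ai/marqo": 265,
    "AlekseyKorshuk/optimum-transformers": 114,
    "graphcore/tutorials": 65,
    "huggingface/optimum-intel": 61,
    "huggingface/optimum-graphcore": 34,
    "huggingface/optimum-habana": 24,
    "philschmid/optimum-static-quantization": 20,
    "philschmid/optimum-transformers-optimizations": 15,
    "girafe-ai/msai-python": 15,
}

# Since the list is sorted descending, checking the top 10 suffices
# to confirm the ">1000 stars" count.
over_1000 = sum(1 for stars in repo_stars.values() if stars > 1000)
print(over_1000)  # 0
```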
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 0 packages that have more than 200 forks.
There are 0 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 82
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 18
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 10
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 6
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 3
[bhavsarpratik/easy-transformers](https://github.com/bhavsarpratik/easy-transformers): 2
*Repository*
[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 82
[graphcore/tutorials](https://github.com/graphcore/tutorials): 33
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 18
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 14
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 10
[marqo-ai/marqo](https://github.com/marqo-ai/marqo): 6
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 6
[whatofit/LevelWordWithFreq](https://github.com/whatofit/LevelWordWithFreq): 5
[philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 3
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 3
|
merve/my_notes | ---
license: apache-2.0
---
## My Notes 📓
This repository contains my lecture notes from graduate school on the following topics 👇🏼
- Data Science: 8 cheatsheets
- Machine Learning (follows [Tom Mitchell's book](http://www.cs.cmu.edu/~tom/mlbook.html)): 25 pages of notes
- Statistics: 9 cheatsheets
- Deep Learning: 12 cheatsheets, will upload more
- Image Processing (follows [digital image processing book](https://www.amazon.fr/Digital-Image-Processing-Rafael-Gonzalez/dp/013168728X)): 21 cheatsheets
- Data Structures and Algorithms (follows [this book by Goodrich](https://www.wiley.com/en-us/Data+Structures+and+Algorithms+in+Python-p-9781118549582)): 26 cheatsheets
✨ *Some notes* ✨
- Most of these notes aren't intended to teach a topic from scratch; they are notes I took and compiled during my midterms & finals, and they might help you remember things, study for exams, and prepare for job interviews.
- There might be very small Turkish notes on a few of the pages; you can ignore them.
- I will upload more notes as I find or create them. I will soon compile my Hugging Face cheatsheets, so stay tuned!
- If you can improve the quality of the handwritten PDF scans or convert them to JPEG, feel free to open a PR to this repository.
*Updates* 🎉
- I uploaded hierarchical clustering and an improved version of K-means.
- I compiled every lecture into a separate PDF, and also compiled those into a single PDF, found under `Compiled PDFs`.
- I uploaded Hugging Face cheatsheets. |
open-llm-leaderboard/details_chasedreaminf__Dream-7B-slerp | ---
pretty_name: Evaluation run of chasedreaminf/Dream-7B-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [chasedreaminf/Dream-7B-slerp](https://huggingface.co/chasedreaminf/Dream-7B-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chasedreaminf__Dream-7B-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-16T20:05:27.586603](https://huggingface.co/datasets/open-llm-leaderboard/details_chasedreaminf__Dream-7B-slerp/blob/main/results_2024-02-16T20-05-27.586603.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6507213038118388,\n\
\ \"acc_stderr\": 0.03214268781825063,\n \"acc_norm\": 0.6503644948036131,\n\
\ \"acc_norm_stderr\": 0.03280953874341137,\n \"mc1\": 0.4724602203182375,\n\
\ \"mc1_stderr\": 0.017476930190712187,\n \"mc2\": 0.6184735043628918,\n\
\ \"mc2_stderr\": 0.015107906651203224\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6663822525597269,\n \"acc_stderr\": 0.013778687054176538,\n\
\ \"acc_norm\": 0.6851535836177475,\n \"acc_norm_stderr\": 0.013572657703084948\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6787492531368253,\n\
\ \"acc_stderr\": 0.004660025270817022,\n \"acc_norm\": 0.8634734116709819,\n\
\ \"acc_norm_stderr\": 0.0034264517445078474\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\
\ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\
\ \"acc_stderr\": 0.03586879280080341,\n \"acc_norm\": 0.7569444444444444,\n\
\ \"acc_norm_stderr\": 0.03586879280080341\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n\
\ \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5957446808510638,\n \"acc_stderr\": 0.032081157507886836,\n\
\ \"acc_norm\": 0.5957446808510638,\n \"acc_norm_stderr\": 0.032081157507886836\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42328042328042326,\n \"acc_stderr\": 0.02544636563440679,\n \"\
acc_norm\": 0.42328042328042326,\n \"acc_norm_stderr\": 0.02544636563440679\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.0442626668137991,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7677419354838709,\n \"acc_stderr\": 0.024022256130308235,\n \"\
acc_norm\": 0.7677419354838709,\n \"acc_norm_stderr\": 0.024022256130308235\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"\
acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n\
\ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\
: 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768766,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768766\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n\
\ \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616255,\n \
\ \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n\
\ \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374308,\n \"\
acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374308\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8088235294117647,\n \"acc_stderr\": 0.027599174300640766,\n \"\
acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.027599174300640766\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7932489451476793,\n \"acc_stderr\": 0.0263616516683891,\n \
\ \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.0263616516683891\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.0364129708131373,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.0364129708131373\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n\
\ \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n\
\ \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.02126271940040697,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.02126271940040697\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.01358661921990334,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.01358661921990334\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7485549132947977,\n \"acc_stderr\": 0.02335736578587403,\n\
\ \"acc_norm\": 0.7485549132947977,\n \"acc_norm_stderr\": 0.02335736578587403\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.42905027932960893,\n\
\ \"acc_stderr\": 0.016553287863116037,\n \"acc_norm\": 0.42905027932960893,\n\
\ \"acc_norm_stderr\": 0.016553287863116037\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242557,\n\
\ \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242557\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\
\ \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n\
\ \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7530864197530864,\n \"acc_stderr\": 0.02399350170904211,\n\
\ \"acc_norm\": 0.7530864197530864,\n \"acc_norm_stderr\": 0.02399350170904211\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.475177304964539,\n \"acc_stderr\": 0.029790719243829727,\n \
\ \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.029790719243829727\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4576271186440678,\n\
\ \"acc_stderr\": 0.012724296550980188,\n \"acc_norm\": 0.4576271186440678,\n\
\ \"acc_norm_stderr\": 0.012724296550980188\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396556,\n\
\ \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396556\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6683006535947712,\n \"acc_stderr\": 0.01904748523936038,\n \
\ \"acc_norm\": 0.6683006535947712,\n \"acc_norm_stderr\": 0.01904748523936038\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n\
\ \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.02619392354445412,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.02619392354445412\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n\
\ \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4724602203182375,\n\
\ \"mc1_stderr\": 0.017476930190712187,\n \"mc2\": 0.6184735043628918,\n\
\ \"mc2_stderr\": 0.015107906651203224\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8034727703235991,\n \"acc_stderr\": 0.011168120593569562\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7225170583775588,\n \
\ \"acc_stderr\": 0.01233344758104754\n }\n}\n```"
repo_url: https://huggingface.co/chasedreaminf/Dream-7B-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|arc:challenge|25_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|gsm8k|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hellaswag|10_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-16T20-05-27.586603.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-16T20-05-27.586603.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- '**/details_harness|winogrande|5_2024-02-16T20-05-27.586603.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-16T20-05-27.586603.parquet'
- config_name: results
data_files:
- split: 2024_02_16T20_05_27.586603
path:
- results_2024-02-16T20-05-27.586603.parquet
- split: latest
path:
- results_2024-02-16T20-05-27.586603.parquet
---
# Dataset Card for Evaluation run of chasedreaminf/Dream-7B-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [chasedreaminf/Dream-7B-slerp](https://huggingface.co/chasedreaminf/Dream-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chasedreaminf__Dream-7B-slerp",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-16T20:05:27.586603](https://huggingface.co/datasets/open-llm-leaderboard/details_chasedreaminf__Dream-7B-slerp/blob/main/results_2024-02-16T20-05-27.586603.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6507213038118388,
"acc_stderr": 0.03214268781825063,
"acc_norm": 0.6503644948036131,
"acc_norm_stderr": 0.03280953874341137,
"mc1": 0.4724602203182375,
"mc1_stderr": 0.017476930190712187,
"mc2": 0.6184735043628918,
"mc2_stderr": 0.015107906651203224
},
"harness|arc:challenge|25": {
"acc": 0.6663822525597269,
"acc_stderr": 0.013778687054176538,
"acc_norm": 0.6851535836177475,
"acc_norm_stderr": 0.013572657703084948
},
"harness|hellaswag|10": {
"acc": 0.6787492531368253,
"acc_stderr": 0.004660025270817022,
"acc_norm": 0.8634734116709819,
"acc_norm_stderr": 0.0034264517445078474
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569525,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569525
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.03586879280080341,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.03586879280080341
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.45098039215686275,
"acc_stderr": 0.049512182523962625,
"acc_norm": 0.45098039215686275,
"acc_norm_stderr": 0.049512182523962625
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5957446808510638,
"acc_stderr": 0.032081157507886836,
"acc_norm": 0.5957446808510638,
"acc_norm_stderr": 0.032081157507886836
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42328042328042326,
"acc_stderr": 0.02544636563440679,
"acc_norm": 0.42328042328042326,
"acc_norm_stderr": 0.02544636563440679
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.0442626668137991,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.0442626668137991
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7677419354838709,
"acc_stderr": 0.024022256130308235,
"acc_norm": 0.7677419354838709,
"acc_norm_stderr": 0.024022256130308235
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768766,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768766
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616255,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374308,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374308
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.027599174300640766,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.027599174300640766
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7932489451476793,
"acc_stderr": 0.0263616516683891,
"acc_norm": 0.7932489451476793,
"acc_norm_stderr": 0.0263616516683891
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.0364129708131373,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.0364129708131373
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.02126271940040697,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.02126271940040697
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.01358661921990334,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.01358661921990334
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7485549132947977,
"acc_stderr": 0.02335736578587403,
"acc_norm": 0.7485549132947977,
"acc_norm_stderr": 0.02335736578587403
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.42905027932960893,
"acc_stderr": 0.016553287863116037,
"acc_norm": 0.42905027932960893,
"acc_norm_stderr": 0.016553287863116037
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242557,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242557
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.02399350170904211,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.02399350170904211
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.029790719243829727,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.029790719243829727
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4576271186440678,
"acc_stderr": 0.012724296550980188,
"acc_norm": 0.4576271186440678,
"acc_norm_stderr": 0.012724296550980188
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6727941176470589,
"acc_stderr": 0.028501452860396556,
"acc_norm": 0.6727941176470589,
"acc_norm_stderr": 0.028501452860396556
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6683006535947712,
"acc_stderr": 0.01904748523936038,
"acc_norm": 0.6683006535947712,
"acc_norm_stderr": 0.01904748523936038
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.02619392354445412,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.02619392354445412
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4724602203182375,
"mc1_stderr": 0.017476930190712187,
"mc2": 0.6184735043628918,
"mc2_stderr": 0.015107906651203224
},
"harness|winogrande|5": {
"acc": 0.8034727703235991,
"acc_stderr": 0.011168120593569562
},
"harness|gsm8k|5": {
"acc": 0.7225170583775588,
"acc_stderr": 0.01233344758104754
}
}
```
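The per-task dictionary above can be aggregated locally once loaded. As a minimal sketch (not part of the generated card), this averages the `acc` fields over a small, hand-copied subset of the tasks shown above:

```python
# Hedged sketch: averaging per-task accuracies from a results dict shaped
# like the one above. Only three of the tasks are reproduced here.
results = {
    "harness|arc:challenge|25": {"acc": 0.6663822525597269},
    "harness|hellaswag|10": {"acc": 0.6787492531368253},
    "harness|winogrande|5": {"acc": 0.8034727703235991},
}

# Mean accuracy across the selected tasks.
mean_acc = sum(v["acc"] for v in results.values()) / len(results)
print(round(mean_acc, 4))  # 0.7162
```

The same pattern applies to `acc_norm` or the TruthfulQA `mc1`/`mc2` fields, skipping tasks that lack the metric in question.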
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
zolak/twitter_dataset_50_1713181945 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 374526
num_examples: 897
download_size: 184190
dataset_size: 374526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zolak/twitter_dataset_80_1713203438 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 4036256
num_examples: 10039
download_size: 2034440
dataset_size: 4036256
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
learnanything/feedback-series | ---
license: other
---
|
HuggingFaceM4/enwiki-v2_valid |
louam123/darja-tounsi1 | ---
task_categories:
- translation
language:
- ar
- en
tags:
- darja
- tunisian
pretty_name: 'darja-tounsi-louam '
size_categories:
- 1K<n<10K
--- |
baptistecolle/sam-controlnet-3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
sequence: int64
- name: filename
dtype: string
- name: imgid
dtype: int64
- name: split
dtype: string
- name: sentences
struct:
- name: imgid
dtype: int64
- name: raw
dtype: string
- name: sentid
dtype: int64
- name: tokens
sequence: string
- name: cocoid
dtype: int64
- name: masks
sequence:
sequence:
sequence: bool
- name: scores
sequence: float32
splits:
- name: train
num_bytes: 2086529.0
num_examples: 1
download_size: 150908
dataset_size: 2086529.0
---
# Dataset Card for "sam-controlnet-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rthatha/GLDv2-All-Caption-Filename | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 109018225.0
num_examples: 203
download_size: 108837389
dataset_size: 109018225.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hlillemark/flores200_devtest_mt5-600m-flores200-baseline | ---
dataset_info:
features:
- name: id
dtype: int32
- name: source_lang
dtype: string
- name: target_lang
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: prediction
dtype: string
- name: chrf_unreduced
dtype: string
splits:
- name: devtest
num_bytes: 734237740
num_examples: 1000000
download_size: 514219403
dataset_size: 734237740
---
# Dataset Card for "flores200_devtest_mt5-600m-flores200-baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marup/kotonoha-sisters-rvc-50epochs | ---
license: openrail
---
|
Nexdata/20_People_Chinese_Mandarin_Multi_emotional_Synthesis_Corpus | ---
license: cc-by-nc-nd-4.0
---
## Description
20 People - Chinese Mandarin Multi-emotional Synthesis Corpus. It is recorded by native Chinese speakers of different ages and genders. The texts, covering seven emotions, are all drawn from novels, and the syllables, phonemes and tones are balanced. Professional phoneticians participated in the annotation. The corpus precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1213?source=Huggingface
## Format
48,000Hz, 24bit, uncompressed wav, mono channel
## Recording environment
professional recording studio
## Recording content
seven emotions (happiness, anger, sadness, surprise, fear, disgust, neutral); texts are all from novels
## Speaker
20 persons, different age groups and genders
## Device
microphone
## Language
Mandarin
## Annotation
word and pinyin transcription, prosodic boundary annotation
## Application scenarios
speech synthesis
## The amount of data
The amount of data per person is 140 minutes; each emotion is 20 minutes
# Licensing Information
Commercial License
|
codedog-lee/llama2-tut | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5826
num_examples: 39
download_size: 2572
dataset_size: 5826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
showery/my_dataset | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 641917.0
num_examples: 22
download_size: 0
dataset_size: 641917.0
---
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kirim9001/Tryin | ---
license: other
---
|
mach-12/ecommerce-ner-conll-2003-data | ---
license: mit
dataset_info:
features:
- name: tokens
sequence: string
- name: pos_tag
sequence: int64
- name: chunk_tag
sequence: int64
- name: ner_tag
sequence: int64
- name: sentiment
dtype: string
splits:
- name: train
num_bytes: 13886708.0
num_examples: 31520
- name: test
num_bytes: 3471677.0
num_examples: 7880
download_size: 1323190
dataset_size: 17358385.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
pwc-india/fabric_dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 41259319.0
num_examples: 20
download_size: 41261924
dataset_size: 41259319.0
---
# Dataset Card for "fabric_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/fc019c9e | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1340
dataset_size: 182
---
# Dataset Card for "fc019c9e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LucasThil/miniwob_snippets | ---
dataset_info:
features:
- name: episodes
dtype: string
- name: refs
dtype: int64
- name: click
dtype: int64
- name: dblclick
dtype: int64
- name: keydown
dtype: int64
- name: keypress
dtype: int64
- name: keyup
dtype: int64
- name: mousedown
dtype: int64
- name: mouseup
dtype: int64
- name: scroll
dtype: int64
splits:
- name: train
num_bytes: 674906155
num_examples: 587268
- name: test
num_bytes: 84528980
num_examples: 73420
- name: validate
num_bytes: 84695448
num_examples: 73318
download_size: 140471741
dataset_size: 844130583
---
# Dataset Card for "miniwob_snippets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rishikesh/mini-sarcasm-data | ---
license: mit
---
|
Wulichao/single_cell_RNA_seq | ---
license: mit
---
|
arthurmluz/wikilingua_data-wiki_temario_results | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: summary
dtype: string
- name: gen_summary
dtype: string
- name: rouge
struct:
- name: rouge1
dtype: float64
- name: rouge2
dtype: float64
- name: rougeL
dtype: float64
- name: rougeLsum
dtype: float64
- name: bert
struct:
- name: f1
sequence: float64
- name: hashcode
dtype: string
- name: precision
sequence: float64
- name: recall
sequence: float64
- name: moverScore
dtype: float64
splits:
- name: validation
num_bytes: 23695646
num_examples: 8165
download_size: 14096829
dataset_size: 23695646
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "wikilingua_data-wiki_temario_results"
rouge= {'rouge1': 0.32186631280345346, 'rouge2': 0.11643663933189041, 'rougeL': 0.22613896268852546, 'rougeLsum': 0.22613896268852546}
bert= {'precision': 0.7288013449272931, 'recall': 0.7668859778849245, 'f1': 0.7463569133178047}
mover = 0.6063795152381846 |
njoeljack/hai-babi | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1465597.0
num_examples: 50
download_size: 1465887
dataset_size: 1465597.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yiyic/clirmatrix_query_doc | ---
dataset_info:
features:
- name: text
dtype: string
- name: __index_level_0__
dtype: string
splits:
- name: de_en_multi8_test1
num_bytes: 1342125
num_examples: 2000
- name: de_fr_multi8_test1
num_bytes: 1346729
num_examples: 2000
- name: de_es_multi8_test1
num_bytes: 1346235
num_examples: 2000
- name: en_de_multi8_test1
num_bytes: 1155356
num_examples: 2000
- name: en_fr_multi8_test1
num_bytes: 1158530
num_examples: 2000
- name: en_es_multi8_test1
num_bytes: 1158036
num_examples: 2000
- name: es_en_multi8_test1
num_bytes: 1127237
num_examples: 2000
- name: es_fr_multi8_test1
num_bytes: 1131841
num_examples: 2000
- name: es_de_multi8_test1
num_bytes: 1128667
num_examples: 2000
- name: fr_en_multi8_test1
num_bytes: 1168520
num_examples: 2000
- name: fr_de_multi8_test1
num_bytes: 1169950
num_examples: 2000
- name: fr_es_multi8_test1
num_bytes: 1172630
num_examples: 2000
download_size: 8905953
dataset_size: 14405856
configs:
- config_name: default
data_files:
- split: de_en_multi8_test1
path: data/de_en_multi8_test1-*
- split: de_fr_multi8_test1
path: data/de_fr_multi8_test1-*
- split: de_es_multi8_test1
path: data/de_es_multi8_test1-*
- split: en_de_multi8_test1
path: data/en_de_multi8_test1-*
- split: en_fr_multi8_test1
path: data/en_fr_multi8_test1-*
- split: en_es_multi8_test1
path: data/en_es_multi8_test1-*
- split: es_en_multi8_test1
path: data/es_en_multi8_test1-*
- split: es_fr_multi8_test1
path: data/es_fr_multi8_test1-*
- split: es_de_multi8_test1
path: data/es_de_multi8_test1-*
- split: fr_en_multi8_test1
path: data/fr_en_multi8_test1-*
- split: fr_de_multi8_test1
path: data/fr_de_multi8_test1-*
- split: fr_es_multi8_test1
path: data/fr_es_multi8_test1-*
---
|
JuneKo/book_sf_ch | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1889814.0
num_examples: 39
download_size: 1847651
dataset_size: 1889814.0
---
# Dataset Card for "book_sf_ch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/tashkent_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tashkent/タシュケント/塔什干 (Azur Lane)
This is the dataset of tashkent/タシュケント/塔什干 (Azur Lane), containing 400 images and their tags.
The core tags of this character are `long_hair, purple_hair, blue_eyes, very_long_hair, bangs, animal_ears, ribbon, hair_ribbon, fake_animal_ears, hair_ornament`, which are pruned from this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 400 | 686.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tashkent_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 400 | 330.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tashkent_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1039 | 759.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tashkent_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 400 | 577.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tashkent_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1039 | 1.15 GiB | [Download](https://huggingface.co/datasets/CyberHarem/tashkent_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tashkent_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
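The IMG+TXT packages listed above pair each image with a same-named `.txt` tag file. A minimal sketch of collecting those pairs after extracting one of the zip archives (the flat `image + .txt` layout is an assumption based on the package description):

```python
import os

def pair_img_txt(dataset_dir):
    """Pair each image in an extracted IMG+TXT package with the
    contents of its same-named .txt tag file (None if missing)."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() in ('.png', '.jpg', '.jpeg', '.webp'):
            txt_path = os.path.join(dataset_dir, stem + '.txt')
            tags = None
            if os.path.exists(txt_path):
                with open(txt_path, encoding='utf-8') as f:
                    tags = f.read().strip()
            pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```

Each returned tuple is `(image_path, tag_string)`, ready to feed into a training pipeline that expects caption-style tag files.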
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, blush, brown_sweater, fur_trim, looking_at_viewer, solo, thigh_boots, white_coat, white_thighhighs, winter_clothes, full_body, open_mouth, simple_background, sleeves_past_fingers, very_long_sleeves, white_background, white_footwear, dress, pom_pom_(clothes), smile, standing, sitting, torpedo_tubes |
| 1 | 16 |  |  |  |  |  | 1girl, blush, dress, long_sleeves, looking_at_viewer, solo, white_coat, sleeves_past_fingers, white_background, white_thighhighs, winter_clothes, brown_sweater, open_mouth, simple_background, ahoge, fur-trimmed_sleeves, smile |
| 2 | 12 |  |  |  |  |  | 1girl, hair_bow, looking_at_viewer, solo, bare_shoulders, fur_trim, blush, paw_gloves, plaid, purple_skirt, white_thighhighs, white_background, open_mouth, off_shoulder, simple_background |
| 3 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, navel, nipples, nude, small_breasts, white_thighhighs, pussy, ass, collar, ponytail, tongue_out |
| 4 | 55 |  |  |  |  |  | 1girl, white_shirt, blush, ponytail, solo, white_thighhighs, black_shorts, looking_at_viewer, suspender_shorts, chain, short_shorts, collar, bound_arms, open_mouth, sitting, cuffs, necktie |
| 5 | 22 |  |  |  |  |  | 1girl, blush, solo, white_shirt, looking_at_viewer, long_sleeves, off_shoulder, naked_shirt, collared_shirt, sitting, collarbone, barefoot, dress_shirt, bare_shoulders, official_alternate_costume, ahoge, closed_mouth, small_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | brown_sweater | fur_trim | looking_at_viewer | solo | thigh_boots | white_coat | white_thighhighs | winter_clothes | full_body | open_mouth | simple_background | sleeves_past_fingers | very_long_sleeves | white_background | white_footwear | dress | pom_pom_(clothes) | smile | standing | sitting | torpedo_tubes | long_sleeves | ahoge | fur-trimmed_sleeves | hair_bow | bare_shoulders | paw_gloves | plaid | purple_skirt | off_shoulder | navel | nipples | nude | small_breasts | pussy | ass | collar | ponytail | tongue_out | white_shirt | black_shorts | suspender_shorts | chain | short_shorts | bound_arms | cuffs | necktie | naked_shirt | collared_shirt | collarbone | barefoot | dress_shirt | official_alternate_costume | closed_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:----------------|:-----------|:--------------------|:-------|:--------------|:-------------|:-------------------|:-----------------|:------------|:-------------|:--------------------|:-----------------------|:--------------------|:-------------------|:-----------------|:--------|:--------------------|:--------|:-----------|:----------|:----------------|:---------------|:--------|:----------------------|:-----------|:-----------------|:-------------|:--------|:---------------|:---------------|:--------|:----------|:-------|:----------------|:--------|:------|:---------|:-----------|:-------------|:--------------|:---------------|:-------------------|:--------|:---------------|:-------------|:--------|:----------|:--------------|:-----------------|:-------------|:-----------|:--------------|:-----------------------------|:---------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 16 |  |  |  |  |  | X | X | X | | X | X | | X | X | X | | X | X | X | | X | | X | | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 12 |  |  |  |  |  | X | X | | X | X | X | | | X | | | X | X | | | X | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 4 | 55 |  |  |  |  |  | X | X | | | X | X | | | X | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | | X | X | X | X | X | X | X | X | | | | | | | |
| 5 | 22 |  |  |  |  |  | X | X | | | X | X | | | | | | | | | | | | | | | | X | | X | X | | | X | | | | X | | | | X | | | | | | X | | | | | | | | X | X | X | X | X | X | X |
|
tyson0420/stackexchange-4dpo-filby-ai-keywords | ---
license: cc-by-4.0
---
|
davanstrien/fuego-20230428-014702-04e0f5 | ---
tags:
- fuego
fuego:
id: 20230428-014702-04e0f5
status: running
script: scratchpad.py
requirements_file: requirements.txt
space_id: davanstrien/fuego-20230428-014702-04e0f5
space_hardware: cpu-basic
---
|
liuyanchen1015/MULTI_VALUE_mrpc_negative_inversion | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: train
num_bytes: 3645
num_examples: 13
- name: validation
num_bytes: 294
num_examples: 1
download_size: 10298
dataset_size: 3939
---
# Dataset Card for "MULTI_VALUE_mrpc_negative_inversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lionelchg/dolly-information-extraction | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4656769.3625498
num_examples: 1430
- name: test
num_bytes: 247492.6374501992
num_examples: 76
download_size: 2854506
dataset_size: 4904262.0
---
# Dataset Card for "dolly-information-extraction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713199899 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 20766
num_examples: 57
download_size: 19065
dataset_size: 20766
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713199899"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Isamu136/oxford_pets_with_l14_emb | ---
dataset_info:
features:
- name: path
dtype: string
- name: label
dtype: string
- name: dog
dtype: bool
- name: image
dtype: image
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 257189204.25
num_examples: 7390
download_size: 261518905
dataset_size: 257189204.25
---
# Dataset Card for "oxford_pets_with_l14_emb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_logicker__SkkuDataScienceGlobal-10.7b | ---
pretty_name: Evaluation run of logicker/SkkuDataScienceGlobal-10.7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [logicker/SkkuDataScienceGlobal-10.7b](https://huggingface.co/logicker/SkkuDataScienceGlobal-10.7b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_logicker__SkkuDataScienceGlobal-10.7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-04T13:43:44.357190](https://huggingface.co/datasets/open-llm-leaderboard/details_logicker__SkkuDataScienceGlobal-10.7b/blob/main/results_2024-01-04T13-43-44.357190.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6670223395120712,\n\
\ \"acc_stderr\": 0.031613933491978684,\n \"acc_norm\": 0.6677265646501975,\n\
\ \"acc_norm_stderr\": 0.03225832144808963,\n \"mc1\": 0.5716034271725826,\n\
\ \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7192487275907953,\n\
\ \"mc2_stderr\": 0.015001105564856102\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6851535836177475,\n \"acc_stderr\": 0.01357265770308495,\n\
\ \"acc_norm\": 0.712457337883959,\n \"acc_norm_stderr\": 0.013226719056266125\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7131049591714798,\n\
\ \"acc_stderr\": 0.004513877465062106,\n \"acc_norm\": 0.8840868352917746,\n\
\ \"acc_norm_stderr\": 0.003194665266078602\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n\
\ \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n\
\ \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.756578947368421,\n \"acc_stderr\": 0.034923496688842384,\n\
\ \"acc_norm\": 0.756578947368421,\n \"acc_norm_stderr\": 0.034923496688842384\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n\
\ \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \
\ \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n\
\ \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.51,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082636,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082636\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.625531914893617,\n \"acc_stderr\": 0.03163910665367291,\n\
\ \"acc_norm\": 0.625531914893617,\n \"acc_norm_stderr\": 0.03163910665367291\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n\
\ \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"\
acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n\
\ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n\
\ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8161290322580645,\n \"acc_stderr\": 0.022037217340267822,\n \"\
acc_norm\": 0.8161290322580645,\n \"acc_norm_stderr\": 0.022037217340267822\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5024630541871922,\n \"acc_stderr\": 0.03517945038691063,\n \"\
acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\"\
: 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n\
\ \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"\
acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603348,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603348\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n\
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.37037037037037035,\n \"acc_stderr\": 0.02944316932303154,\n \
\ \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.02944316932303154\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7184873949579832,\n \"acc_stderr\": 0.02921354941437217,\n \
\ \"acc_norm\": 0.7184873949579832,\n \"acc_norm_stderr\": 0.02921354941437217\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"\
acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5833333333333334,\n \"acc_stderr\": 0.033622774366080424,\n \"\
acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.033622774366080424\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"\
acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8481012658227848,\n \"acc_stderr\": 0.023363878096632446,\n \
\ \"acc_norm\": 0.8481012658227848,\n \"acc_norm_stderr\": 0.023363878096632446\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n\
\ \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n\
\ \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n\
\ \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n\
\ \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n\
\ \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n\
\ \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8045977011494253,\n\
\ \"acc_stderr\": 0.014179171373424383,\n \"acc_norm\": 0.8045977011494253,\n\
\ \"acc_norm_stderr\": 0.014179171373424383\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7601156069364162,\n \"acc_stderr\": 0.022989592543123563,\n\
\ \"acc_norm\": 0.7601156069364162,\n \"acc_norm_stderr\": 0.022989592543123563\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.394413407821229,\n\
\ \"acc_stderr\": 0.01634538676210397,\n \"acc_norm\": 0.394413407821229,\n\
\ \"acc_norm_stderr\": 0.01634538676210397\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.729903536977492,\n\
\ \"acc_stderr\": 0.02521804037341062,\n \"acc_norm\": 0.729903536977492,\n\
\ \"acc_norm_stderr\": 0.02521804037341062\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0227797190887334,\n\
\ \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0227797190887334\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4954367666232073,\n\
\ \"acc_stderr\": 0.012769704263117522,\n \"acc_norm\": 0.4954367666232073,\n\
\ \"acc_norm_stderr\": 0.012769704263117522\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7389705882352942,\n \"acc_stderr\": 0.026679252270103128,\n\
\ \"acc_norm\": 0.7389705882352942,\n \"acc_norm_stderr\": 0.026679252270103128\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6813725490196079,\n \"acc_stderr\": 0.018850084696468712,\n \
\ \"acc_norm\": 0.6813725490196079,\n \"acc_norm_stderr\": 0.018850084696468712\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n\
\ \"acc_stderr\": 0.02650859065623327,\n \"acc_norm\": 0.8308457711442786,\n\
\ \"acc_norm_stderr\": 0.02650859065623327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n\
\ \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n\
\ \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n\
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5716034271725826,\n\
\ \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7192487275907953,\n\
\ \"mc2_stderr\": 0.015001105564856102\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8334648776637726,\n \"acc_stderr\": 0.010470796496781093\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6573161485974223,\n \
\ \"acc_stderr\": 0.013073030230827915\n }\n}\n```"
repo_url: https://huggingface.co/logicker/SkkuDataScienceGlobal-10.7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|arc:challenge|25_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|gsm8k|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hellaswag|10_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T13-43-44.357190.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T13-43-44.357190.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- '**/details_harness|winogrande|5_2024-01-04T13-43-44.357190.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-04T13-43-44.357190.parquet'
- config_name: results
data_files:
- split: 2024_01_04T13_43_44.357190
path:
- results_2024-01-04T13-43-44.357190.parquet
- split: latest
path:
- results_2024-01-04T13-43-44.357190.parquet
---
# Dataset Card for Evaluation run of logicker/SkkuDataScienceGlobal-10.7b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [logicker/SkkuDataScienceGlobal-10.7b](https://huggingface.co/logicker/SkkuDataScienceGlobal-10.7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_logicker__SkkuDataScienceGlobal-10.7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-04T13:43:44.357190](https://huggingface.co/datasets/open-llm-leaderboard/details_logicker__SkkuDataScienceGlobal-10.7b/blob/main/results_2024-01-04T13-43-44.357190.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find the results for each eval in its "latest" split):
```json
{
"all": {
"acc": 0.6670223395120712,
"acc_stderr": 0.031613933491978684,
"acc_norm": 0.6677265646501975,
"acc_norm_stderr": 0.03225832144808963,
"mc1": 0.5716034271725826,
"mc1_stderr": 0.017323088597314747,
"mc2": 0.7192487275907953,
"mc2_stderr": 0.015001105564856102
},
"harness|arc:challenge|25": {
"acc": 0.6851535836177475,
"acc_stderr": 0.01357265770308495,
"acc_norm": 0.712457337883959,
"acc_norm_stderr": 0.013226719056266125
},
"harness|hellaswag|10": {
"acc": 0.7131049591714798,
"acc_stderr": 0.004513877465062106,
"acc_norm": 0.8840868352917746,
"acc_norm_stderr": 0.003194665266078602
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.756578947368421,
"acc_stderr": 0.034923496688842384,
"acc_norm": 0.756578947368421,
"acc_norm_stderr": 0.034923496688842384
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6830188679245283,
"acc_stderr": 0.02863723563980089,
"acc_norm": 0.6830188679245283,
"acc_norm_stderr": 0.02863723563980089
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956913,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956913
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082636,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082636
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.625531914893617,
"acc_stderr": 0.03163910665367291,
"acc_norm": 0.625531914893617,
"acc_norm_stderr": 0.03163910665367291
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6344827586206897,
"acc_stderr": 0.040131241954243856,
"acc_norm": 0.6344827586206897,
"acc_norm_stderr": 0.040131241954243856
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4947089947089947,
"acc_stderr": 0.02574986828855657,
"acc_norm": 0.4947089947089947,
"acc_norm_stderr": 0.02574986828855657
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8161290322580645,
"acc_stderr": 0.022037217340267822,
"acc_norm": 0.8161290322580645,
"acc_norm_stderr": 0.022037217340267822
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.806060606060606,
"acc_stderr": 0.03087414513656209,
"acc_norm": 0.806060606060606,
"acc_norm_stderr": 0.03087414513656209
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.024063156416822516,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.024063156416822516
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603348,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603348
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563976,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563976
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.02944316932303154,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.02944316932303154
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7184873949579832,
"acc_stderr": 0.02921354941437217,
"acc_norm": 0.7184873949579832,
"acc_norm_stderr": 0.02921354941437217
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.033622774366080424,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.033622774366080424
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8578431372549019,
"acc_stderr": 0.02450980392156862,
"acc_norm": 0.8578431372549019,
"acc_norm_stderr": 0.02450980392156862
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8481012658227848,
"acc_stderr": 0.023363878096632446,
"acc_norm": 0.8481012658227848,
"acc_norm_stderr": 0.023363878096632446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306086,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306086
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228733,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228733
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.03492606476623791,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.03492606476623791
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.0230866350868414,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.0230866350868414
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8045977011494253,
"acc_stderr": 0.014179171373424383,
"acc_norm": 0.8045977011494253,
"acc_norm_stderr": 0.014179171373424383
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7601156069364162,
"acc_stderr": 0.022989592543123563,
"acc_norm": 0.7601156069364162,
"acc_norm_stderr": 0.022989592543123563
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.394413407821229,
"acc_stderr": 0.01634538676210397,
"acc_norm": 0.394413407821229,
"acc_norm_stderr": 0.01634538676210397
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.729903536977492,
"acc_stderr": 0.02521804037341062,
"acc_norm": 0.729903536977492,
"acc_norm_stderr": 0.02521804037341062
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0227797190887334,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0227797190887334
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4954367666232073,
"acc_stderr": 0.012769704263117522,
"acc_norm": 0.4954367666232073,
"acc_norm_stderr": 0.012769704263117522
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7389705882352942,
"acc_stderr": 0.026679252270103128,
"acc_norm": 0.7389705882352942,
"acc_norm_stderr": 0.026679252270103128
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6813725490196079,
"acc_stderr": 0.018850084696468712,
"acc_norm": 0.6813725490196079,
"acc_norm_stderr": 0.018850084696468712
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142783,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142783
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.02650859065623327,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.02650859065623327
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466125,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466125
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598053,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598053
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03188578017686398,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03188578017686398
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5716034271725826,
"mc1_stderr": 0.017323088597314747,
"mc2": 0.7192487275907953,
"mc2_stderr": 0.015001105564856102
},
"harness|winogrande|5": {
"acc": 0.8334648776637726,
"acc_stderr": 0.010470796496781093
},
"harness|gsm8k|5": {
"acc": 0.6573161485974223,
"acc_stderr": 0.013073030230827915
}
}
```
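Not part of the harness output itself, but as a sketch of how such results are typically consumed, the per-task accuracies above can be averaged once the JSON is parsed (the two entries below are copied from the block above; a real script would load the full results file):

```python
import json

# A small excerpt of the results structure shown above (two tasks only).
results = json.loads("""
{
  "harness|hendrycksTest-clinical_knowledge|5": {"acc": 0.6830188679245283},
  "harness|hendrycksTest-college_biology|5": {"acc": 0.7708333333333334}
}
""")

# Average the raw accuracy over the selected tasks.
mean_acc = sum(v["acc"] for v in results.values()) / len(results)
print(round(mean_acc, 4))  # → 0.7269
```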
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
freshpearYoon/vr_train_free_66 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 6222416946
num_examples: 10000
download_size: 935984914
dataset_size: 6222416946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-25000 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 6316415857
num_examples: 1000
download_size: 1243432299
dataset_size: 6316415857
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815008 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xtreme
eval_info:
task: entity_extraction
model: PdF/xlm-roberta-base-finetuned-panx-de
metrics: []
dataset_name: xtreme
dataset_config: PAN-X.de
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: PdF/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
Multimodal-Fatima/Caltech101_not_background_test_facebook_opt_2.7b_Visclues_ns_5647 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_16
num_bytes: 84812225.125
num_examples: 5647
- name: fewshot_1_bs_16
num_bytes: 86719164.125
num_examples: 5647
- name: fewshot_3_bs_16
num_bytes: 90542888.125
num_examples: 5647
- name: fewshot_5_bs_16
num_bytes: 94354856.125
num_examples: 5647
download_size: 333860025
dataset_size: 356429133.5
---
# Dataset Card for "Caltech101_not_background_test_facebook_opt_2.7b_Visclues_ns_5647"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NepaliAI/Nepali-HealthChat | ---
license: apache-2.0
task_categories:
- question-answering
language:
- ne
tags:
- medical
- health
--- |
paullatham1/headlines-train-balanced | ---
dataset_info:
features:
- name: data
dtype: string
- name: is_sarcastic
dtype: int64
splits:
- name: train
num_bytes: 1440540
num_examples: 19632
download_size: 969261
dataset_size: 1440540
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ismaeltorres00/Dataset_de_prueba | ---
size_categories:
- 1K<n<10K
---
Extracted from <https://github.com/anthony-wang/BestPractices/tree/master/data>.
Fields:
* Formula (`string`)
* T (`float64`): temperature (K)
* CP (`float64`): heat capacity (J/mol K) |
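Rows with this schema can be aggregated with Python's standard library; a minimal sketch with two illustrative rows (the values are made up, not taken from the dataset):

```python
import csv
import io

# Two illustrative rows following the schema: Formula, T (K), CP (J/mol K).
rows = list(csv.DictReader(io.StringIO(
    "Formula,T,CP\nB2O3,300.0,62.5\nB2O3,400.0,75.1\n")))

# Cast the numeric field and average CP across the rows.
cp = [float(r["CP"]) for r in rows]
print(sum(cp) / len(cp))  # → 68.8
```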
Nexdata/96_Hours_Japanese_Children_Real_world_Casual_Conversation_and_Monologue_speech_dataset | ---
license: cc-by-nc-nd-4.0
---
## Description
This Japanese (Japan) children's real-world casual conversation and monologue speech dataset covers self-media, conversation, livestream, lecture, variety show, and other generic domains, mirroring real-world interactions. It is transcribed with text content, speaker ID, gender, age, accent, and other attributes. The dataset was collected from an extensive and geographically diverse pool of speakers (children aged 12 and younger), enhancing model performance on real, complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, maintaining user privacy and legal rights throughout data collection, storage, and usage; our datasets are GDPR, CCPA, and PIPL compliant.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1328?source=Huggingface
## Format
16kHz, 16 bit, wav, mono channel
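Files in this format can be checked with Python's standard-library `wave` module; a minimal sketch that writes and then validates a synthetic one-second clip (the filename is hypothetical):

```python
import wave

# Write a one-second synthetic clip matching the stated format:
# 16 kHz sample rate, 16-bit samples, mono, WAV container.
with wave.open("clip.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16 bit = 2 bytes per sample
    w.setframerate(16000)    # 16 kHz
    w.writeframes(b"\x00\x00" * 16000)  # 1 s of silence

# Validate that the file conforms to the expected format.
with wave.open("clip.wav", "rb") as r:
    assert r.getnchannels() == 1
    assert r.getsampwidth() == 2
    assert r.getframerate() == 16000
    print(r.getnframes())  # → 16000 (one second of audio)
```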
## Age
12 years old and younger children
## Content category
Including interview, self-media, variety show, etc.
## Recording environment
Low background noise
## Country
Japan(JPN)
## Language(Region) Code
ja-JP
## Language
Japanese
## Features of annotation
Transcription text, timestamp, speaker ID, gender, noise
## Accuracy
Word Accuracy Rate (WAR) 98%
# Licensing Information
Commercial License
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667118 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_v5
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-13b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_v5
dataset_config: mathemakitten--winobias_antistereotype_test_v5
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |