datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
Cohere/miracl-de-queries-22-12 | ---
annotations_creators:
- expert-generated
language:
- de
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-de-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-de-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-de-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-de-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We computed embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. To learn more about this model, see [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-de-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-de-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-de-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-de-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-de-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-de-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset("Cohere/miracl-de-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset("Cohere/miracl-de-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): only the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
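Once you have the query embedding, scoring it against the corpus embeddings is the same dot product as above. Below is a minimal, dependency-free sketch with toy 3-dimensional vectors (illustrative only; real embeddings from the model are much longer):

```python
# Minimal sketch: score a query embedding against document embeddings
# with a plain dot product (no torch), assuming embeddings are lists
# of floats as returned by the API. Toy 3-dim vectors for illustration.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def top_k(query_emb, doc_embs, k=3):
    # Return (doc_index, score) pairs sorted by descending dot-product score
    scores = [(i, dot(query_emb, d)) for i, d in enumerate(doc_embs)]
    return sorted(scores, key=lambda t: t[1], reverse=True)[:k]

query_emb = [0.1, 0.9, 0.2]
doc_embs = [
    [0.0, 1.0, 0.0],   # doc 0: very similar to the query
    [1.0, 0.0, 0.0],   # doc 1: dissimilar
    [0.1, 0.8, 0.3],   # doc 2: similar
]
print(top_k(query_emb, doc_embs, k=2))  # doc 0 and doc 2 rank highest
```

For production workloads, a vector database performs the same scoring with an index instead of a linear scan.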
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it reflects the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The true nDCG@10 and hit@3 performance is likely higher than reported.
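A simplified sketch of the two metrics, assuming binary relevance labels (the toy ranking below is illustrative, not MIRACL data):

```python
import math

def hit_at_k(ranked_rels, k=3):
    # 1 if at least one relevant document appears in the top-k results
    return int(any(ranked_rels[:k]))

def ndcg_at_k(ranked_rels, k=10):
    # ranked_rels: binary relevance labels in retrieved order
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_rels[:k]))
    ideal = sorted(ranked_rels, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Toy run: relevant documents at ranks 2 and 5
rels = [0, 1, 0, 0, 1]
print(hit_at_k(rels, k=3))   # 1: a relevant doc is in the top-3
print(ndcg_at_k(rels, k=10))
```

The table numbers are such per-query values averaged over all queries (and shown as percentages).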
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
rombodawg/code_wizard_vicuna_10k_from70kunfiltered_backup | ---
license: other
---
Backup of code_wizard_vicuna_10k_from70kunfiltered used in rombodawg/MegaCodeTraining112k
Link to the combined dataset below:
https://huggingface.co/datasets/rombodawg/MegaCodeTraining112k |
fivetech/forums | ---
license: mit
---
|
tyzhu/wiki_find_passage_train100_eval10_rare | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 133959
num_examples: 210
- name: validation
num_bytes: 7007
num_examples: 10
download_size: 59183
dataset_size: 140966
---
# Dataset Card for "wiki_find_passage_train100_eval10_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/44212031 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1339
dataset_size: 182
---
# Dataset Card for "44212031"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Norod78/il-license-plates | ---
license: mit
size_categories:
- n<1K
task_categories:
- object-detection
---
Images of Israeli License Plates with annotation for Plate-Object detection |
qanastek/LLaMaInstructionsFrenchMedMCQA | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- fr
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1k<n<10k
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: frenchmedmcqa
pretty_name: FrenchMedMCQA
---
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain
## Table of Contents
- [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-for-medical-domain)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://deft2023.univ-avignon.fr/
- **Repository:** https://deft2023.univ-avignon.fr/
- **Paper:** [FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain](https://hal.science/hal-03824241/document)
- **Leaderboard:** Coming soon
- **Point of Contact:** [Yanis LABRAK](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
FrenchMedMCQA is the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for the medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers.
Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s).
We also propose the first baseline models to automatically process this MCQA task, in order to report on the current performance and to highlight the difficulty of the task. A detailed analysis of the results showed that representations adapted to the medical domain or to the MCQA task are necessary: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. The corpus, models and tools are available online.
### Supported Tasks and Leaderboards
Multiple-Choice Question Answering (MCQA)
### Languages
The questions and answers are available in French.
## Dataset Structure
### Data Instances
```json
{
"id": "230bac49b0fe863b772410bc8d01a025f63c3c999065480131d6334abd2efeff",
"prompt": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: We are giving you a scientific question (easy level) and five answers options (associated to « A », « B », « C », « D », « E »). Your task is to find the correct(s) answer(s) based on scientific facts, knowledge and reasoning. Don't generate anything other than one of the following characters : 'A B C D E'. ### Input: Parmi les affirmations suivantes, une seule est fausse, indiquer laquelle: les particules alpha (A) Sont formées de noyaux d'hélium (B) Sont peu pénétrantes (C) Toute l'énergie qu'elles transportent est cédée au long d'un parcours de quelques centimètres dans l'air (D) Sont arrêtées par une feuille de papier (E) Sont peu ionisantes ### Response: E",
"prompt_no_answer": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: We are giving you a scientific question (easy level) and five answers options (associated to « A », « B », « C », « D », « E »). Your task is to find the correct(s) answer(s) based on scientific facts, knowledge and reasoning. Don't generate anything other than one of the following characters : 'A B C D E'. ### Input: Parmi les affirmations suivantes, une seule est fausse, indiquer laquelle: les particules alpha (A) Sont formées de noyaux d'hélium (B) Sont peu pénétrantes (C) Toute l'énergie qu'elles transportent est cédée au long d'un parcours de quelques centimètres dans l'air (D) Sont arrêtées par une feuille de papier (E) Sont peu ionisantes ### Response:",
"correct_answers": [4],
}
```
### Data Fields
- `id` : a string question identifier for each example
- `prompt` : prompt text formatted for LLaMa (a string)
- `correct_answers` : the zero-based indices of the correct options (e.g., `[4]` corresponds to answer E)
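Since `correct_answers` stores integer indices, mapping them back to the answer letters is straightforward. The helper below is illustrative and not part of the dataset tooling:

```python
# Map the integer indices in `correct_answers` to answer letters A-E.
# Helper name is illustrative, not part of the dataset's own tooling.
LETTERS = "ABCDE"

def indices_to_letters(correct_answers):
    return " ".join(LETTERS[i] for i in sorted(correct_answers))

print(indices_to_letters([4]))        # "E", as in the example instance above
print(indices_to_letters([0, 3, 4]))  # "A D E"
```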
### Data Splits
| # Answers | Training | Validation | Test | Total |
|:---------:|:--------:|:----------:|:----:|:-----:|
| 1 | 595 | 164 | 321 | 1,080 |
| 2 | 528 | 45 | 97 | 670 |
| 3 | 718 | 71 | 141 | 930 |
| 4 | 296 | 30 | 56 | 382 |
| 5 | 34 | 2 | 7 | 43 |
| Total | 2,171 | 312 | 622 | 3,105 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is 13k words, of which 3.8k are estimated to be medical domain-specific (i.e., words related to the medical field). We find an average of 2.49 medical domain-specific words per question (17% of the words) and 2 per answer (36% of the words). On average, a medical domain-specific word appears in 2 questions and in 8 answers.
### Personal and Sensitive Information
The corpora is free of personal or sensitive information.
## Additional Information
### Dataset Curators
The dataset was created by Labrak Yanis and Bazoge Adrien and Dufour Richard and Daille Béatrice and Gourraud Pierre-Antoine and Morin Emmanuel and Rouvier Mickael.
### Licensing Information
Apache 2.0
### Citation Information
If you find this useful in your research, please consider citing the dataset paper :
```latex
@inproceedings{labrak-etal-2022-frenchmedmcqa,
title = "{F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain",
author = "Labrak, Yanis and
Bazoge, Adrien and
Dufour, Richard and
Daille, Beatrice and
Gourraud, Pierre-Antoine and
Morin, Emmanuel and
Rouvier, Mickael",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.5",
pages = "41--46",
abstract = "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.",
}
```
### Contact
Please contact [Yanis LABRAK](https://github.com/qanastek) for more information about this dataset.
|
oeg/software_benchmark_v2 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- es
pretty_name: Software Benchmark Multidomain
---
The corpus has been built using existing corpora of software mentions:
* SoMeSci [1]. We used the sentence-level corpus uploaded to [Github](https://github.com/dave-s477/SoMeSci/tree/9f17a43f342be026f97f03749457d4abb1b01dbf/PLoS_sentences).
* Softcite [2]. This project published another software-mention corpus, also available on [Github](https://github.com/howisonlab/softcite-dataset/tree/master/data/corpus). We used the annotations from the biomedical and economics domains.
* Papers with Code. We downloaded a list of publications from the [Papers with Code](https://paperswithcode.com/) site, which hosts publications and software from the machine-learning domain. To build this corpus, we selected texts containing mentions of the software related to each publication.
When building this corpus, we removed the annotations of other entity types such as version and URL, as well as those describing the relation of the entity to the text. We only use the label Application_Mention.
To reconcile the corpora, we mapped their labels onto a common scheme. Some annotation decisions were also made; for example, in the case of Microsoft Excel, we annotate only Excel as the software mention, rather than the whole phrase.
## References
1. Schindler, D., Bensmann, F., Dietze, S., & Krüger, F. (2021, October). Somesci-A 5 star open data gold standard knowledge graph of software mentions in scientific articles. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (pp. 4574-4583).
2. Du, C., Cohoon, J., Lopez, P., & Howison, J. (2021). Softcite dataset: A dataset of software mentions in biomedical and economic research publications. Journal of the Association for Information Science and Technology, 72(7), 870-884. |
dennisc1/mc4_nl_sentences_test | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 14701
num_examples: 121
download_size: 14203
dataset_size: 14701
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mc4_nl_sentences_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/ncbi_genbank_part_40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 35016354335
num_examples: 80526
download_size: 15795680024
dataset_size: 35016354335
---
# Dataset Card for "ncbi_genbank_part_40"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shjwudp/shu | ---
language: zh
license: cc-by-4.0
---
This dataset collects a total of 14,363 Chinese books for use in academic research and industrial applications. Books are being collected continuously; to contribute, please visit the [code repository](https://github.com/shjwudp/shu).
|
polejowska/mist1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
list:
- name: category_id
dtype:
class_label:
names:
'0': mist1
- name: image_id
dtype: string
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: segmentation
list:
list: float32
- name: iscrowd
dtype: bool
splits:
- name: train
num_bytes: 8875489623.982
num_examples: 9166
- name: valid
num_bytes: 478378507.0
num_examples: 514
- name: test
num_bytes: 912518506.0
num_examples: 940
download_size: 10243106879
dataset_size: 10266386636.982
---
# Dataset Card for "mist1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jholst/test-upload | ---
license: apache-2.0
---
|
Smuggling1710/VTnsfw-r-3.6k | ---
license: apache-2.0
---
|
asas-ai/joud_cleaned_sample | ---
dataset_info:
features:
- name: index_file
dtype: string
- name: index_line
dtype: string
- name: index
dtype: string
- name: Reviewed by
dtype: string
- name: dataset_name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 324507
num_examples: 323
download_size: 164515
dataset_size: 324507
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bot-yaya/new_prompt_gpt_joined_en_paragraph | ---
dataset_info:
features:
- name: record
dtype: string
- name: raw_text
dtype: string
- name: is_hard_linebreak
sequence: bool
splits:
- name: train
num_bytes: 925969
num_examples: 49
download_size: 471184
dataset_size: 925969
---
# Dataset Card for "new_prompt_gpt_joined_en_paragraph"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/nilou_genshin | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nilou/ニィロウ/妮露 (Genshin Impact)
This is the dataset of nilou/ニィロウ/妮露 (Genshin Impact), containing 400 images and their tags.
The core tags of this character are `long_hair, red_hair, breasts, twintails, fake_horns, horns, parted_bangs, medium_breasts, white_headwear, aqua_eyes, low_twintails, very_long_hair, sidelocks`, which are pruned in this dataset.
Images are crawled from many sites (e.g., Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 400 | 1.23 GiB | [Download](https://huggingface.co/datasets/CyberHarem/nilou_genshin/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 400 | 989.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nilou_genshin/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1137 | 1.96 GiB | [Download](https://huggingface.co/datasets/CyberHarem/nilou_genshin/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nilou_genshin',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, blue_eyes, blue_skirt, bracer, brooch, circlet, crop_top, detached_sleeves, gold_trim, harem_outfit, looking_at_viewer, neck_ring, puffy_long_sleeves, smile, solo, stomach, white_veil, closed_mouth, cowboy_shot, navel, blue_gemstone, midriff, bare_shoulders, blush, hand_up, blue_bow, vision_(genshin_impact) |
| 1 | 6 |  |  |  |  |  | 1girl, blue_bow, blue_skirt, bracer, brooch, circlet, crop_top, harem_outfit, looking_at_viewer, midriff, navel, neck_ring, puffy_long_sleeves, solo, stomach, detached_sleeves, gold_trim, parted_lips, simple_background, white_background, white_veil, blue_gemstone, cowboy_shot, thighlet, thighs, smile, water |
| 2 | 6 |  |  |  |  |  | 1girl, blue_gemstone, blue_skirt, brooch, crop_top, detached_sleeves, gold_trim, harem_outfit, looking_at_viewer, navel, neck_ring, puffy_long_sleeves, smile, solo, stomach, water, arm_up, bracer, circlet, closed_mouth, white_background, armpits, blush, midriff, simple_background, white_veil, bare_shoulders, thighlet |
| 3 | 6 |  |  |  |  |  | 1girl, blue_gemstone, blue_nails, blue_skirt, blush, bracer, brooch, circlet, crop_top, gold_trim, harem_outfit, looking_at_viewer, nail_polish, navel, neck_ring, puffy_long_sleeves, solo, stomach, closed_mouth, cowboy_shot, detached_sleeves, smile, thighlet, thighs, bare_shoulders, green_eyes, midriff, water, arm_up, dancer, simple_background, white_background, white_veil |
| 4 | 13 |  |  |  |  |  | 1girl, blue_skirt, bracer, brooch, circlet, crop_top, gold_trim, harem_outfit, looking_at_viewer, neck_ring, puffy_long_sleeves, smile, solo, closed_mouth, dancer, detached_sleeves, gladiator_sandals, stomach, thighs, white_veil, arm_up, blue_gemstone, navel, blush, floating_hair, water, gold_footwear, leg_up, standing_on_one_leg, thighlet, nail_polish, blue_nails, midriff |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_eyes | blue_skirt | bracer | brooch | circlet | crop_top | detached_sleeves | gold_trim | harem_outfit | looking_at_viewer | neck_ring | puffy_long_sleeves | smile | solo | stomach | white_veil | closed_mouth | cowboy_shot | navel | blue_gemstone | midriff | bare_shoulders | blush | hand_up | blue_bow | vision_(genshin_impact) | parted_lips | simple_background | white_background | thighlet | thighs | water | arm_up | armpits | blue_nails | nail_polish | green_eyes | dancer | gladiator_sandals | floating_hair | gold_footwear | leg_up | standing_on_one_leg |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:-------------|:---------|:---------|:----------|:-----------|:-------------------|:------------|:---------------|:--------------------|:------------|:---------------------|:--------|:-------|:----------|:-------------|:---------------|:--------------|:--------|:----------------|:----------|:-----------------|:--------|:----------|:-----------|:--------------------------|:--------------|:--------------------|:-------------------|:-----------|:---------|:--------|:---------|:----------|:-------------|:--------------|:-------------|:---------|:--------------------|:----------------|:----------------|:---------|:----------------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | | | | X | | X | X | X | X | X | X | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | | | | | X | X | X | | X | X | X | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | X | X | X | X | X | X | | X | X | X | X | | | | | |
| 4 | 13 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | | X | | | | | | | X | X | X | X | | X | X | | X | X | X | X | X | X |
|
AdapterOcean/python3-standardized_cluster_7 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 41110307
num_examples: 3782
download_size: 0
dataset_size: 41110307
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_cluster_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KoddaDuck/fleurs | ---
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- zh
size_categories:
- 10M<n<100M
--- |
wooden-ufo/MyStorage | ---
license: other
---
|
FaalSa/cluster0_3 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 749588
num_examples: 19
- name: validation
num_bytes: 758708
num_examples: 19
- name: test
num_bytes: 767828
num_examples: 19
download_size: 713497
dataset_size: 2276124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
zxh4546/ntu60-2048 | ---
dataset_info:
features:
- name: frame_dir
dtype: string
- name: video
sequence:
sequence:
sequence: float64
- name: label
dtype: int64
- name: frame_len
dtype: int64
- name: subject_name
dtype: int64
- name: camera_view
dtype: int64
splits:
- name: train
num_bytes: 138656023668
num_examples: 56880
download_size: 91460931404
dataset_size: 138656023668
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Renatanimareli/Renatasantosaudios | ---
license: openrail
---
|
open-llm-leaderboard/details_4season__alignment-model-test3 | ---
pretty_name: Evaluation run of 4season/alignment-model-test3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [4season/alignment-model-test3](https://huggingface.co/4season/alignment-model-test3)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_4season__alignment-model-test3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-29T21:49:56.480307](https://huggingface.co/datasets/open-llm-leaderboard/details_4season__alignment-model-test3/blob/main/results_2024-03-29T21-49-56.480307.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6828093590987985,\n\
\ \"acc_stderr\": 0.031379698998993684,\n \"acc_norm\": 0.6861672788340304,\n\
\ \"acc_norm_stderr\": 0.03201970285060687,\n \"mc1\": 0.6940024479804161,\n\
\ \"mc1_stderr\": 0.016132229728155038,\n \"mc2\": 0.8088413049033801,\n\
\ \"mc2_stderr\": 0.013121290704624325\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7627986348122867,\n \"acc_stderr\": 0.012430399829260856,\n\
\ \"acc_norm\": 0.7824232081911263,\n \"acc_norm_stderr\": 0.012057262020972499\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7511451902011551,\n\
\ \"acc_stderr\": 0.004314659034649386,\n \"acc_norm\": 0.8968333001394144,\n\
\ \"acc_norm_stderr\": 0.003035548306420554\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\
\ \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n\
\ \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8092105263157895,\n \"acc_stderr\": 0.031975658210325,\n\
\ \"acc_norm\": 0.8092105263157895,\n \"acc_norm_stderr\": 0.031975658210325\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.73,\n\
\ \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \
\ \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7396226415094339,\n \"acc_stderr\": 0.027008766090708045,\n\
\ \"acc_norm\": 0.7396226415094339,\n \"acc_norm_stderr\": 0.027008766090708045\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8194444444444444,\n\
\ \"acc_stderr\": 0.03216600808802268,\n \"acc_norm\": 0.8194444444444444,\n\
\ \"acc_norm_stderr\": 0.03216600808802268\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\"\
: 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6936416184971098,\n\
\ \"acc_stderr\": 0.035149425512674394,\n \"acc_norm\": 0.6936416184971098,\n\
\ \"acc_norm_stderr\": 0.035149425512674394\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.82,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.82,\n\
\ \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6595744680851063,\n \"acc_stderr\": 0.03097669299853443,\n\
\ \"acc_norm\": 0.6595744680851063,\n \"acc_norm_stderr\": 0.03097669299853443\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5263157894736842,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.5263157894736842,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6551724137931034,\n \"acc_stderr\": 0.03960933549451207,\n\
\ \"acc_norm\": 0.6551724137931034,\n \"acc_norm_stderr\": 0.03960933549451207\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4894179894179894,\n \"acc_stderr\": 0.02574554227604548,\n \"\
acc_norm\": 0.4894179894179894,\n \"acc_norm_stderr\": 0.02574554227604548\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.49206349206349204,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.49206349206349204,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8387096774193549,\n \"acc_stderr\": 0.020923327006423294,\n \"\
acc_norm\": 0.8387096774193549,\n \"acc_norm_stderr\": 0.020923327006423294\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5467980295566502,\n \"acc_stderr\": 0.03502544650845872,\n \"\
acc_norm\": 0.5467980295566502,\n \"acc_norm_stderr\": 0.03502544650845872\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\"\
: 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.030117688929503564,\n\
\ \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.030117688929503564\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8535353535353535,\n \"acc_stderr\": 0.025190921114603908,\n \"\
acc_norm\": 0.8535353535353535,\n \"acc_norm_stderr\": 0.025190921114603908\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644244,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644244\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7025641025641025,\n \"acc_stderr\": 0.023177408131465946,\n\
\ \"acc_norm\": 0.7025641025641025,\n \"acc_norm_stderr\": 0.023177408131465946\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3962962962962963,\n \"acc_stderr\": 0.029822619458534004,\n \
\ \"acc_norm\": 0.3962962962962963,\n \"acc_norm_stderr\": 0.029822619458534004\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7815126050420168,\n \"acc_stderr\": 0.026841514322958945,\n\
\ \"acc_norm\": 0.7815126050420168,\n \"acc_norm_stderr\": 0.026841514322958945\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4304635761589404,\n \"acc_stderr\": 0.04042809961395634,\n \"\
acc_norm\": 0.4304635761589404,\n \"acc_norm_stderr\": 0.04042809961395634\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8623853211009175,\n \"acc_stderr\": 0.0147701058786494,\n \"acc_norm\"\
: 0.8623853211009175,\n \"acc_norm_stderr\": 0.0147701058786494\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5740740740740741,\n\
\ \"acc_stderr\": 0.03372343271653063,\n \"acc_norm\": 0.5740740740740741,\n\
\ \"acc_norm_stderr\": 0.03372343271653063\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.8774509803921569,\n \"acc_stderr\": 0.023015389732458265,\n\
\ \"acc_norm\": 0.8774509803921569,\n \"acc_norm_stderr\": 0.023015389732458265\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8312236286919831,\n \"acc_stderr\": 0.024381406832586237,\n \
\ \"acc_norm\": 0.8312236286919831,\n \"acc_norm_stderr\": 0.024381406832586237\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7399103139013453,\n\
\ \"acc_stderr\": 0.029442495585857476,\n \"acc_norm\": 0.7399103139013453,\n\
\ \"acc_norm_stderr\": 0.029442495585857476\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6793893129770993,\n \"acc_stderr\": 0.04093329229834278,\n\
\ \"acc_norm\": 0.6793893129770993,\n \"acc_norm_stderr\": 0.04093329229834278\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8429752066115702,\n \"acc_stderr\": 0.03321244842547128,\n \"\
acc_norm\": 0.8429752066115702,\n \"acc_norm_stderr\": 0.03321244842547128\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n\
\ \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9145299145299145,\n\
\ \"acc_stderr\": 0.018315891685625845,\n \"acc_norm\": 0.9145299145299145,\n\
\ \"acc_norm_stderr\": 0.018315891685625845\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8186462324393359,\n\
\ \"acc_stderr\": 0.013778693778464093,\n \"acc_norm\": 0.8186462324393359,\n\
\ \"acc_norm_stderr\": 0.013778693778464093\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069356,\n\
\ \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069356\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4849162011173184,\n\
\ \"acc_stderr\": 0.01671489037999606,\n \"acc_norm\": 0.4849162011173184,\n\
\ \"acc_norm_stderr\": 0.01671489037999606\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.02440439492808787,\n\
\ \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.02440439492808787\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7395498392282959,\n\
\ \"acc_stderr\": 0.024926723224845532,\n \"acc_norm\": 0.7395498392282959,\n\
\ \"acc_norm_stderr\": 0.024926723224845532\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.022779719088733396,\n\
\ \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.022779719088733396\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5354609929078015,\n \"acc_stderr\": 0.02975238965742705,\n \
\ \"acc_norm\": 0.5354609929078015,\n \"acc_norm_stderr\": 0.02975238965742705\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4915254237288136,\n\
\ \"acc_stderr\": 0.012768401697269057,\n \"acc_norm\": 0.4915254237288136,\n\
\ \"acc_norm_stderr\": 0.012768401697269057\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6985294117647058,\n \"acc_stderr\": 0.027875982114273168,\n\
\ \"acc_norm\": 0.6985294117647058,\n \"acc_norm_stderr\": 0.027875982114273168\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6928104575163399,\n \"acc_stderr\": 0.018663359671463656,\n \
\ \"acc_norm\": 0.6928104575163399,\n \"acc_norm_stderr\": 0.018663359671463656\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.04494290866252091,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.04494290866252091\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7510204081632653,\n \"acc_stderr\": 0.027682979522960238,\n\
\ \"acc_norm\": 0.7510204081632653,\n \"acc_norm_stderr\": 0.027682979522960238\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036624,\n \
\ \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036624\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n\
\ \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n\
\ \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n\
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6940024479804161,\n\
\ \"mc1_stderr\": 0.016132229728155038,\n \"mc2\": 0.8088413049033801,\n\
\ \"mc2_stderr\": 0.013121290704624325\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8650355169692187,\n \"acc_stderr\": 0.00960306491321905\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.467778620166793,\n \
\ \"acc_stderr\": 0.0137438573030738\n }\n}\n```"
repo_url: https://huggingface.co/4season/alignment-model-test3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|arc:challenge|25_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|gsm8k|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hellaswag|10_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T21-49-56.480307.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T21-49-56.480307.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- '**/details_harness|winogrande|5_2024-03-29T21-49-56.480307.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-29T21-49-56.480307.parquet'
- config_name: results
data_files:
- split: 2024_03_29T21_49_56.480307
path:
- results_2024-03-29T21-49-56.480307.parquet
- split: latest
path:
- results_2024-03-29T21-49-56.480307.parquet
---
# Dataset Card for Evaluation run of 4season/alignment-model-test3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [4season/alignment-model-test3](https://huggingface.co/4season/alignment-model-test3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_4season__alignment-model-test3",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-29T21:49:56.480307](https://huggingface.co/datasets/open-llm-leaderboard/details_4season__alignment-model-test3/blob/main/results_2024-03-29T21-49-56.480307.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6828093590987985,
"acc_stderr": 0.031379698998993684,
"acc_norm": 0.6861672788340304,
"acc_norm_stderr": 0.03201970285060687,
"mc1": 0.6940024479804161,
"mc1_stderr": 0.016132229728155038,
"mc2": 0.8088413049033801,
"mc2_stderr": 0.013121290704624325
},
"harness|arc:challenge|25": {
"acc": 0.7627986348122867,
"acc_stderr": 0.012430399829260856,
"acc_norm": 0.7824232081911263,
"acc_norm_stderr": 0.012057262020972499
},
"harness|hellaswag|10": {
"acc": 0.7511451902011551,
"acc_stderr": 0.004314659034649386,
"acc_norm": 0.8968333001394144,
"acc_norm_stderr": 0.003035548306420554
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595853,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595853
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8092105263157895,
"acc_stderr": 0.031975658210325,
"acc_norm": 0.8092105263157895,
"acc_norm_stderr": 0.031975658210325
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7396226415094339,
"acc_stderr": 0.027008766090708045,
"acc_norm": 0.7396226415094339,
"acc_norm_stderr": 0.027008766090708045
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8194444444444444,
"acc_stderr": 0.03216600808802268,
"acc_norm": 0.8194444444444444,
"acc_norm_stderr": 0.03216600808802268
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.035149425512674394,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.035149425512674394
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6595744680851063,
"acc_stderr": 0.03097669299853443,
"acc_norm": 0.6595744680851063,
"acc_norm_stderr": 0.03097669299853443
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5263157894736842,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.5263157894736842,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6551724137931034,
"acc_stderr": 0.03960933549451207,
"acc_norm": 0.6551724137931034,
"acc_norm_stderr": 0.03960933549451207
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4894179894179894,
"acc_stderr": 0.02574554227604548,
"acc_norm": 0.4894179894179894,
"acc_norm_stderr": 0.02574554227604548
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8387096774193549,
"acc_stderr": 0.020923327006423294,
"acc_norm": 0.8387096774193549,
"acc_norm_stderr": 0.020923327006423294
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5467980295566502,
"acc_stderr": 0.03502544650845872,
"acc_norm": 0.5467980295566502,
"acc_norm_stderr": 0.03502544650845872
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.030117688929503564,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.030117688929503564
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8535353535353535,
"acc_stderr": 0.025190921114603908,
"acc_norm": 0.8535353535353535,
"acc_norm_stderr": 0.025190921114603908
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.021995311963644244,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.021995311963644244
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7025641025641025,
"acc_stderr": 0.023177408131465946,
"acc_norm": 0.7025641025641025,
"acc_norm_stderr": 0.023177408131465946
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3962962962962963,
"acc_stderr": 0.029822619458534004,
"acc_norm": 0.3962962962962963,
"acc_norm_stderr": 0.029822619458534004
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7815126050420168,
"acc_stderr": 0.026841514322958945,
"acc_norm": 0.7815126050420168,
"acc_norm_stderr": 0.026841514322958945
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4304635761589404,
"acc_stderr": 0.04042809961395634,
"acc_norm": 0.4304635761589404,
"acc_norm_stderr": 0.04042809961395634
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8623853211009175,
"acc_stderr": 0.0147701058786494,
"acc_norm": 0.8623853211009175,
"acc_norm_stderr": 0.0147701058786494
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5740740740740741,
"acc_stderr": 0.03372343271653063,
"acc_norm": 0.5740740740740741,
"acc_norm_stderr": 0.03372343271653063
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8774509803921569,
"acc_stderr": 0.023015389732458265,
"acc_norm": 0.8774509803921569,
"acc_norm_stderr": 0.023015389732458265
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8312236286919831,
"acc_stderr": 0.024381406832586237,
"acc_norm": 0.8312236286919831,
"acc_norm_stderr": 0.024381406832586237
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7399103139013453,
"acc_stderr": 0.029442495585857476,
"acc_norm": 0.7399103139013453,
"acc_norm_stderr": 0.029442495585857476
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6793893129770993,
"acc_stderr": 0.04093329229834278,
"acc_norm": 0.6793893129770993,
"acc_norm_stderr": 0.04093329229834278
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8429752066115702,
"acc_stderr": 0.03321244842547128,
"acc_norm": 0.8429752066115702,
"acc_norm_stderr": 0.03321244842547128
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.03408997886857529,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.03408997886857529
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9145299145299145,
"acc_stderr": 0.018315891685625845,
"acc_norm": 0.9145299145299145,
"acc_norm_stderr": 0.018315891685625845
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8186462324393359,
"acc_stderr": 0.013778693778464093,
"acc_norm": 0.8186462324393359,
"acc_norm_stderr": 0.013778693778464093
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069356,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069356
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4849162011173184,
"acc_stderr": 0.01671489037999606,
"acc_norm": 0.4849162011173184,
"acc_norm_stderr": 0.01671489037999606
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.02440439492808787,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.02440439492808787
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7395498392282959,
"acc_stderr": 0.024926723224845532,
"acc_norm": 0.7395498392282959,
"acc_norm_stderr": 0.024926723224845532
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.022779719088733396,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.022779719088733396
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5354609929078015,
"acc_stderr": 0.02975238965742705,
"acc_norm": 0.5354609929078015,
"acc_norm_stderr": 0.02975238965742705
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4915254237288136,
"acc_stderr": 0.012768401697269057,
"acc_norm": 0.4915254237288136,
"acc_norm_stderr": 0.012768401697269057
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6985294117647058,
"acc_stderr": 0.027875982114273168,
"acc_norm": 0.6985294117647058,
"acc_norm_stderr": 0.027875982114273168
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6928104575163399,
"acc_stderr": 0.018663359671463656,
"acc_norm": 0.6928104575163399,
"acc_norm_stderr": 0.018663359671463656
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.04494290866252091,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.04494290866252091
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7510204081632653,
"acc_stderr": 0.027682979522960238,
"acc_norm": 0.7510204081632653,
"acc_norm_stderr": 0.027682979522960238
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616914,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036624,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036624
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598053,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598053
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03188578017686398,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03188578017686398
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6940024479804161,
"mc1_stderr": 0.016132229728155038,
"mc2": 0.8088413049033801,
"mc2_stderr": 0.013121290704624325
},
"harness|winogrande|5": {
"acc": 0.8650355169692187,
"acc_stderr": 0.00960306491321905
},
"harness|gsm8k|5": {
"acc": 0.467778620166793,
"acc_stderr": 0.0137438573030738
}
}
```
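The "all" block above aggregates the per-task metrics; roughly speaking, it is a mean over the individual tasks. A minimal sketch of that kind of aggregation (the task names and values below are illustrative only, not the leaderboard's exact computation):

```python
# Hypothetical sketch: a simple mean over per-task normalized accuracies.
# The real leaderboard aggregation may differ; values are illustrative.
per_task_acc_norm = {
    "harness|arc:challenge|25": 0.7824,
    "harness|hellaswag|10": 0.8968,
    "harness|winogrande|5": 0.8650,
}

# Average the per-task scores into a single summary number.
acc_norm_avg = sum(per_task_acc_norm.values()) / len(per_task_acc_norm)
print(round(acc_norm_avg, 4))
```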
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Nerfgun3/John_Kafka_LoRA | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/datasets/Nerfgun3/John_Kafka_LoRA/resolve/main/preview/preview%20(1).png"
tags:
- stable-diffusion
- text-to-image
- image-to-image
inference: false
---
# John Kafka Artstyle LoRA
# Use Cases
The LoRA itself is compatible with a wide range of models. However, it is most effective when used with Kenshi or AbyssOrangeMix2.
The LoRA itself was trained with the token: ```skistyle```.
The models mentioned above:
1. AbyssOrangeMix2 from [WarriorMama777](https://huggingface.co/WarriorMama777/OrangeMixs)
2. Kenshi Model from [Luna](https://huggingface.co/SweetLuna/Kenshi)
## Strength
I would personally use these strengths with the associated models:
Soft-Version:
- 0.85-1 for AbyssOrangeMix2
- 0.75-0.9 for Kenshi
Hard-Version:
- 0.6-0.8 for AbyssOrangeMix2
- 0.55-0.75 for Kenshi
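In AUTOMATIC1111-style WebUI prompts, LoRA strength is set inline with the `<lora:name:weight>` syntax. A hypothetical example using the ranges above (the file name `John_Kafka` is an assumption — substitute whatever name your downloaded LoRA file has):

```
<lora:John_Kafka:0.8> skistyle, 1girl, ...
```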
# Showcase
**Example 1**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/John_Kafka_LoRA/resolve/main/preview/preview%20(2).png"/>
```
skistyle,
1girl, small cute red nose, animal_ears, artist_name, bangs, some freckles, black_hair, black_skirt, blue_ribbon, smiling, solo, looking at viewer, collared_shirt, flower, fox_ears, grey_flower, hair_flower, hair_ornament, highres, league_of_legends, long_hair, looking_at_viewer, neck_ribbon, orange_eyes, pleated_skirt, ribbon, shirt, sitting, skirt, solo, white_shirt, shy
Steps: 32, Sampler: Euler a, CFG scale: 7
```
**Example 2**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/John_Kafka_LoRA/resolve/main/preview/preview%20(3).png"/>
```
skistyle,
1girl, small cute red nose, animal_ears, artist_name, bangs, some freckles, black_hair, black_skirt, blue_ribbon, smiling, solo, looking at viewer, collared_shirt, flower, fox_ears, grey_flower, hair_flower, hair_ornament, highres, league_of_legends, long_hair, looking_at_viewer, neck_ribbon, orange_eyes, pleated_skirt, ribbon, shirt, sitting, skirt, solo, white_shirt, shy
Steps: 32, Sampler: Euler a, CFG scale: 7
```
**Example 3**
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/John_Kafka_LoRA/resolve/main/preview/preview%20(4).png"/>
```
skistyle,
1girl, (masterpiece:1.2), (highly detailed), ((best quality)), (ultra-detailed)
Steps: 32, Sampler: Euler a, CFG scale: 7
```
# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
LDJnr/Pure-Dove | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
- Roleplay
pretty_name: Pure-Dove
size_categories:
- 1K<n<10K
---
## This is the Official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon!
This dataset aims to be the largest, highest-quality collection of real human back-and-forth conversations with GPT-4.
Steps have been taken to ensure that only the best GPT-4 conversations from pairwise comparisons are kept: there are many instances where two GPT-4 responses are rated as equal to each other, or both as bad. We exclude all such responses from Pure Dove and only include ChatBot Arena responses that were voted as better even against another instance of GPT-4.
- Comprised of over 3000 highly filtered multi-turn conversations between GPT-4 and real humans.
- Average context length per conversation is over 800 tokens.
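The reported per-conversation context length can be sanity-checked with a rough whitespace-token count. A minimal sketch, assuming rows shaped as lists of `{"role", "content"}` turns (the `conversations` variable below is a hypothetical stand-in for the loaded data):

```python
# Hypothetical stand-in for loaded Pure-Dove rows; each conversation is a
# list of {"role", "content"} turns.
conversations = [
    [
        {"role": "user", "content": "Explain entropy in simple terms."},
        {"role": "assistant", "content": "Entropy measures disorder in a system."},
    ],
]

def approx_token_count(conversation):
    # Whitespace splitting is only a rough proxy for tokenizer-based counts.
    return sum(len(turn["content"].split()) for turn in conversation)

avg_len = sum(approx_token_count(c) for c in conversations) / len(conversations)
```

A real tokenizer (e.g. the one used for training) will give higher counts than whitespace splitting, so treat this only as a lower bound.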
## Purpose?
 - This dataset is not particularly intended to be trained on by itself; however, its size and quality make it a wonderful supplementary addition to virtually any multi-turn compatible dataset. I encourage this use; all I ask is that proper credit is given for it!
## Quality filtering and cleaning.
 - The conversations were sourced from openly available datasets such as ShareGPT and ChatBot Arena by LMSYS; however, a large portion of these chats were riddled with hallucinations and abnormal distributions of different languages.
 - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021", not just in English but in other languages too!
## Credits
During the curation process, some steps can be relatively arduous when it comes to actually executing the best experiments or concepts for filtering examples out.
Luckily, folks over at NousResearch helped expedite this process with little to no sacrifice in quality; big credit to J-Supha within NousResearch specifically for making these significant contributions.
## Future Plans & How you can help!
This is a relatively early build amongst the grand plans for what I intend to work on!
In the near future, we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your time, please contact LDJ on Discord!
Citation:
```
@article{daniele2023amplify-instruct,
  title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Efficient LLM Training.},
author={Daniele, Luigi and Suphavadeeprasit},
  journal={arXiv preprint arXiv:(coming soon)},
year={2023}
}
``` |
nyu-mll/multi_nli | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-3.0
- cc-by-sa-3.0
- mit
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: multinli
pretty_name: Multi-Genre Natural Language Inference
license_details: Open Portion of the American National Corpus
dataset_info:
features:
- name: promptID
dtype: int32
- name: pairID
dtype: string
- name: premise
dtype: string
- name: premise_binary_parse
dtype: string
- name: premise_parse
dtype: string
- name: hypothesis
dtype: string
- name: hypothesis_binary_parse
dtype: string
- name: hypothesis_parse
dtype: string
- name: genre
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 410210306
num_examples: 392702
- name: validation_matched
num_bytes: 10063907
num_examples: 9815
- name: validation_mismatched
num_bytes: 10610189
num_examples: 9832
download_size: 224005223
dataset_size: 430884402
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation_matched
path: data/validation_matched-*
- split: validation_mismatched
path: data/validation_mismatched-*
---
# Dataset Card for Multi-Genre Natural Language Inference (MultiNLI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nyu.edu/projects/bowman/multinli/](https://www.nyu.edu/projects/bowman/multinli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 226.85 MB
- **Size of the generated dataset:** 76.95 MB
- **Total amount of disk used:** 303.81 MB
### Dataset Summary
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
crowd-sourced collection of 433k sentence pairs annotated with textual
entailment information. The corpus is modeled on the SNLI corpus, but differs in
that it covers a range of genres of spoken and written text, and supports a
distinctive cross-genre generalization evaluation. The corpus served as the
basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 226.85 MB
- **Size of the generated dataset:** 76.95 MB
- **Total amount of disk used:** 303.81 MB
Example of a data instance:
```
{
"promptID": 31193,
"pairID": "31193n",
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"premise_binary_parse": "( ( Conceptually ( cream skimming ) ) ( ( has ( ( ( two ( basic dimensions ) ) - ) ( ( product and ) geography ) ) ) . ) )",
"premise_parse": "(ROOT (S (NP (JJ Conceptually) (NN cream) (NN skimming)) (VP (VBZ has) (NP (NP (CD two) (JJ basic) (NNS dimensions)) (: -) (NP (NN product) (CC and) (NN geography)))) (. .)))",
"hypothesis": "Product and geography are what make cream skimming work. ",
"hypothesis_binary_parse": "( ( ( Product and ) geography ) ( ( are ( what ( make ( cream ( skimming work ) ) ) ) ) . ) )",
"hypothesis_parse": "(ROOT (S (NP (NN Product) (CC and) (NN geography)) (VP (VBP are) (SBAR (WHNP (WP what)) (S (VP (VBP make) (NP (NP (NN cream)) (VP (VBG skimming) (NP (NN work)))))))) (. .)))",
"genre": "government",
"label": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `promptID`: Unique identifier for prompt
- `pairID`: Unique identifier for pair
- `{premise,hypothesis}`: each sentence of the pair as a `string` feature
- `{premise,hypothesis} parse`: Each sentence as parsed by the Stanford PCFG Parser 3.5.2
- `{premise,hypothesis} binary parse`: parses in unlabeled binary-branching format
- `genre`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). Dataset instances which don't have any gold label are marked with a -1 label. Make sure to filter them out before training, e.g. using `datasets.Dataset.filter`.
### Data Splits
|train |validation_matched|validation_mismatched|
|-----:|-----------------:|--------------------:|
|392702| 9815| 9832|
## Dataset Creation
### Curation Rationale
They constructed MultiNLI so as to make it possible to explicitly evaluate models both on the quality of their sentence representations within the training domain and on their ability to derive reasonable representations in unfamiliar domains.
### Source Data
#### Initial Data Collection and Normalization
They created each sentence pair by selecting a premise sentence from a preexisting text source and asked a human annotator to compose a novel sentence to pair with it as a hypothesis.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere).
### Citation Information
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
woz_dialogue | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- de
- en
- it
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- token-classification
- text-classification
task_ids:
- dialogue-modeling
- multi-class-classification
- parsing
paperswithcode_id: wizard-of-oz
pretty_name: Wizard-of-Oz
dataset_info:
- config_name: en
features:
- name: dialogue_idx
dtype: int32
- name: dialogue
list:
- name: turn_label
sequence:
sequence: string
- name: asr
sequence:
sequence: string
- name: system_transcript
dtype: string
- name: turn_idx
dtype: int32
- name: belief_state
list:
- name: slots
sequence:
sequence: string
- name: act
dtype: string
- name: transcript
dtype: string
- name: system_acts
sequence:
sequence: string
splits:
- name: train
num_bytes: 827189
num_examples: 600
- name: validation
num_bytes: 265684
num_examples: 200
- name: test
num_bytes: 537557
num_examples: 400
download_size: 7529221
dataset_size: 1630430
- config_name: de
features:
- name: dialogue_idx
dtype: int32
- name: dialogue
list:
- name: turn_label
sequence:
sequence: string
- name: asr
sequence:
sequence: string
- name: system_transcript
dtype: string
- name: turn_idx
dtype: int32
- name: belief_state
list:
- name: slots
sequence:
sequence: string
- name: act
dtype: string
- name: transcript
dtype: string
- name: system_acts
sequence:
sequence: string
splits:
- name: train
num_bytes: 881478
num_examples: 600
- name: validation
num_bytes: 276758
num_examples: 200
- name: test
num_bytes: 569703
num_examples: 400
download_size: 7626734
dataset_size: 1727939
- config_name: de_en
features:
- name: dialogue_idx
dtype: int32
- name: dialogue
list:
- name: turn_label
sequence:
sequence: string
- name: asr
sequence:
sequence: string
- name: system_transcript
dtype: string
- name: turn_idx
dtype: int32
- name: belief_state
list:
- name: slots
sequence:
sequence: string
- name: act
dtype: string
- name: transcript
dtype: string
- name: system_acts
sequence:
sequence: string
splits:
- name: train
num_bytes: 860151
num_examples: 600
- name: validation
num_bytes: 269966
num_examples: 200
- name: test
num_bytes: 555841
num_examples: 400
download_size: 7584753
dataset_size: 1685958
- config_name: it
features:
- name: dialogue_idx
dtype: int32
- name: dialogue
list:
- name: turn_label
sequence:
sequence: string
- name: asr
sequence:
sequence: string
- name: system_transcript
dtype: string
- name: turn_idx
dtype: int32
- name: belief_state
list:
- name: slots
sequence:
sequence: string
- name: act
dtype: string
- name: transcript
dtype: string
- name: system_acts
sequence:
sequence: string
splits:
- name: train
num_bytes: 842799
num_examples: 600
- name: validation
num_bytes: 270258
num_examples: 200
- name: test
num_bytes: 547759
num_examples: 400
download_size: 7559615
dataset_size: 1660816
- config_name: it_en
features:
- name: dialogue_idx
dtype: int32
- name: dialogue
list:
- name: turn_label
sequence:
sequence: string
- name: asr
sequence:
sequence: string
- name: system_transcript
dtype: string
- name: turn_idx
dtype: int32
- name: belief_state
list:
- name: slots
sequence:
sequence: string
- name: act
dtype: string
- name: transcript
dtype: string
- name: system_acts
sequence:
sequence: string
splits:
- name: train
num_bytes: 845095
num_examples: 600
- name: validation
num_bytes: 270942
num_examples: 200
- name: test
num_bytes: 548979
num_examples: 400
download_size: 7563815
dataset_size: 1665016
config_names:
- de
- de_en
- en
- it
- it_en
---
# Dataset Card for Wizard-of-Oz
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More info needed]
- **Repository:** [GitHub](https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz)
- **Paper:** [A Network-based End-to-End Trainable Task-oriented Dialogue System](https://arxiv.org/abs/1604.04562)
- **Leaderboard:** [More info needed]
- **Point of Contact:** [More info needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
jan-hq/openmath_instruct_dpo_binarized | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 3181155497.814446
num_examples: 1640641
- name: test
num_bytes: 353462799.1855541
num_examples: 182294
download_size: 1601938768
dataset_size: 3534618297.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
PA0703/English-to-Thanglish-dataset | ---
license: mit
language:
- en
- ta
tags:
- croissant
--- |
Bucharest-NLP/dgt-tm-hu-ro | ---
license: apache-2.0
---
|
Nexdata/40_People_Safety_Dressing_Collection_Data | ---
license: cc-by-nc-nd-4.0
---
## Description
40 People – Safety Dressing Collection Data. Each subject contributed 24 videos, each lasting about 30 seconds. The gender distribution includes males and females, and the age distribution covers young and middle-aged adults. Collection scenes include 2 indoor scenes and 2 outdoor scenes, captured from looking-down and looking-up angles. The data diversity includes multiple scenes, actions, angles, and pieces of safety dressing equipment. The data can be used for tasks such as detection and recognition of safety dressing for power personnel.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1341?source=Huggingface
## Data size
40 people, each subject collects 24 videos, each video lasts about 30 seconds
## Population distribution
gender distribution: 14 males, 16 females; age distribution: 33 young people, 7 middle-aged people; race distribution: Asian
## Collecting environment
2 indoor scenes, 2 outdoor scenes
## Data diversity
multiple scenes, multiple actions, multiple angles, multiple safety dressing equipment
## Device
surveillance camera, the resolution is 1,920*1,080
## Collecting angle
looking down angle, looking up angle
## Data format
.mp4
## Collection content
simulated collection of video data of electrical personnel wearing safety equipment while working
## Accuracy rate
the accuracy of the collected content is not less than 97%
# Licensing Information
Commercial License
|
zolak/twitter_dataset_78_1713099962 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3273491
num_examples: 8198
download_size: 1680368
dataset_size: 3273491
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HiTZ/meta4xnli | ---
license: apache-2.0
task_categories:
- token-classification
- text-classification
language:
- en
- es
pretty_name: meta4xnli
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
Meta4XNLI is a parallel dataset with annotations in English and Spanish for metaphor detection at the token level (13,320 sentences) and metaphor interpretation framed within the NLI task (9,990 premise-hypothesis pairs).
It is a collection of existing NLI datasets manually labeled for both metaphor tasks.
- **Repository**: data available also in .tsv format at https://github.com/elisanchez-beep/meta4xnli
- **Paper**: [Meta4XNLI: A Crosslingual Parallel Corpus for Metaphor Detection and Interpretation](https://arxiv.org/pdf/2404.07053.pdf)
### Dataset Sources
Meta4XNLI is a collection of the [XNLI](https://huggingface.co/datasets/xnli) and [esXNLI](https://aclanthology.org/2020.emnlp-main.618/) datasets with metaphor annotations.
## Dataset Structure
The dataset is divided according to detection and interpretation tasks.
- Detection: labels at token level.
- splits: train, dev and test files for fine-tuning and evaluation.
- source_datasets: splits by original source dataset and premises and hypotheses for evaluation.
- Interpretation: sets of sentences split by metaphor occurrence. Non-relevant cases are sentences that contain metaphors, but whose literal interpretation is not necessary to derive the inference label.
- splits: train, dev and test files for fine-tuning and evaluation.
- source_datasets: splits by original source dataset and metaphor presence.
## Dataset Fields
- Detection:
- "id": example id
- "tokens": list of text split.
- "tags": list of metaphor annotations for each token.
- 0: literal
- 1: metaphor
- Interpretation:
- "language": Spanish (es) or English (en)
- "gold_label": inference label: entailment, neutral or contradiction
- "sentence1": premise
- "sentence2": hypothesis
- "promptID": premise id
- "pairID": premise and hypothesis pair id
- "genre": text domain
- "source_dataset": original dataset: {xnli.dev, xnli.test, esxnli}
## Citation
If you use Meta4XNLI, please cite our work:
```
@misc{sanchezbayona2024meta4xnli,
title={Meta4XNLI: A Crosslingual Parallel Corpus for Metaphor Detection and Interpretation},
author={Elisa Sanchez-Bayona and Rodrigo Agerri},
year={2024},
eprint={2404.07053},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Dataset Card Contact
{elisa.sanchez, rodrigo.agerri}@ehu.eus |
open-llm-leaderboard/details_harshitv804__MetaMath-Mistral-2x7B | ---
pretty_name: Evaluation run of harshitv804/MetaMath-Mistral-2x7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [harshitv804/MetaMath-Mistral-2x7B](https://huggingface.co/harshitv804/MetaMath-Mistral-2x7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_harshitv804__MetaMath-Mistral-2x7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-10T01:05:45.632321](https://huggingface.co/datasets/open-llm-leaderboard/details_harshitv804__MetaMath-Mistral-2x7B/blob/main/results_2024-03-10T01-05-45.632321.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6218089799272568,\n\
\ \"acc_stderr\": 0.03263681999096668,\n \"acc_norm\": 0.6219459868041436,\n\
\ \"acc_norm_stderr\": 0.03330427342622862,\n \"mc1\": 0.3023255813953488,\n\
\ \"mc1_stderr\": 0.016077509266133026,\n \"mc2\": 0.4479737874141746,\n\
\ \"mc2_stderr\": 0.015466809789155087\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5708191126279863,\n \"acc_stderr\": 0.014464085894870653,\n\
\ \"acc_norm\": 0.60580204778157,\n \"acc_norm_stderr\": 0.014280522667467325\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6441943835889266,\n\
\ \"acc_stderr\": 0.004777782584817784,\n \"acc_norm\": 0.8259310894244174,\n\
\ \"acc_norm_stderr\": 0.003783938150151617\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \
\ \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"\
acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.618421052631579,\n \"acc_stderr\": 0.03953173377749194,\n\
\ \"acc_norm\": 0.618421052631579,\n \"acc_norm_stderr\": 0.03953173377749194\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n\
\ \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \
\ \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n\
\ \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n\
\ \"acc_stderr\": 0.038009680605548594,\n \"acc_norm\": 0.7083333333333334,\n\
\ \"acc_norm_stderr\": 0.038009680605548594\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n\
\ \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n\
\ \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.04755129616062946,\n\
\ \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.04755129616062946\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146267,\n\
\ \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146267\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878151,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878151\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3994708994708995,\n \"acc_stderr\": 0.025225450284067877,\n \"\
acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.025225450284067877\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.042857142857142816,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.042857142857142816\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7258064516129032,\n\
\ \"acc_stderr\": 0.025378139970885196,\n \"acc_norm\": 0.7258064516129032,\n\
\ \"acc_norm_stderr\": 0.025378139970885196\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\"\
: 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n\
\ \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7575757575757576,\n \"acc_stderr\": 0.030532892233932022,\n \"\
acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.030532892233932022\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.025416343096306433,\n\
\ \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.025416343096306433\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6051282051282051,\n \"acc_stderr\": 0.0247843169421564,\n \
\ \"acc_norm\": 0.6051282051282051,\n \"acc_norm_stderr\": 0.0247843169421564\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465073,\n \
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465073\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.634453781512605,\n \"acc_stderr\": 0.031282177063684614,\n \
\ \"acc_norm\": 0.634453781512605,\n \"acc_norm_stderr\": 0.031282177063684614\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8073394495412844,\n \"acc_stderr\": 0.016909276884936073,\n \"\
acc_norm\": 0.8073394495412844,\n \"acc_norm_stderr\": 0.016909276884936073\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4398148148148148,\n \"acc_stderr\": 0.03385177976044811,\n \"\
acc_norm\": 0.4398148148148148,\n \"acc_norm_stderr\": 0.03385177976044811\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588663,\n \"\
acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588663\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7552742616033755,\n \"acc_stderr\": 0.027985699387036423,\n \
\ \"acc_norm\": 0.7552742616033755,\n \"acc_norm_stderr\": 0.027985699387036423\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6636771300448431,\n\
\ \"acc_stderr\": 0.031708824268455,\n \"acc_norm\": 0.6636771300448431,\n\
\ \"acc_norm_stderr\": 0.031708824268455\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n\
\ \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092368,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092368\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7969348659003831,\n\
\ \"acc_stderr\": 0.01438552507661157,\n \"acc_norm\": 0.7969348659003831,\n\
\ \"acc_norm_stderr\": 0.01438552507661157\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.708092485549133,\n \"acc_stderr\": 0.024476994076247326,\n\
\ \"acc_norm\": 0.708092485549133,\n \"acc_norm_stderr\": 0.024476994076247326\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.35083798882681566,\n\
\ \"acc_stderr\": 0.015961036675230952,\n \"acc_norm\": 0.35083798882681566,\n\
\ \"acc_norm_stderr\": 0.015961036675230952\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7418300653594772,\n \"acc_stderr\": 0.025058503316958147,\n\
\ \"acc_norm\": 0.7418300653594772,\n \"acc_norm_stderr\": 0.025058503316958147\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.684887459807074,\n\
\ \"acc_stderr\": 0.02638527370346449,\n \"acc_norm\": 0.684887459807074,\n\
\ \"acc_norm_stderr\": 0.02638527370346449\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.025407197798890162,\n\
\ \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.025407197798890162\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4716312056737589,\n \"acc_stderr\": 0.029779450957303062,\n \
\ \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.029779450957303062\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4498044328552803,\n\
\ \"acc_stderr\": 0.012705721498565109,\n \"acc_norm\": 0.4498044328552803,\n\
\ \"acc_norm_stderr\": 0.012705721498565109\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n\
\ \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6454248366013072,\n \"acc_stderr\": 0.019353360547553697,\n \
\ \"acc_norm\": 0.6454248366013072,\n \"acc_norm_stderr\": 0.019353360547553697\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6816326530612244,\n \"acc_stderr\": 0.029822533793982066,\n\
\ \"acc_norm\": 0.6816326530612244,\n \"acc_norm_stderr\": 0.029822533793982066\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n\
\ \"acc_stderr\": 0.02650859065623326,\n \"acc_norm\": 0.8308457711442786,\n\
\ \"acc_norm_stderr\": 0.02650859065623326\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774711,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774711\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368043,\n\
\ \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368043\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3023255813953488,\n\
\ \"mc1_stderr\": 0.016077509266133026,\n \"mc2\": 0.4479737874141746,\n\
\ \"mc2_stderr\": 0.015466809789155087\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7600631412786109,\n \"acc_stderr\": 0.01200207862948574\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6921910538286581,\n \
\ \"acc_stderr\": 0.01271440100992365\n }\n}\n```"
repo_url: https://huggingface.co/harshitv804/MetaMath-Mistral-2x7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|arc:challenge|25_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|gsm8k|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hellaswag|10_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-10T01-05-45.632321.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-10T01-05-45.632321.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- '**/details_harness|winogrande|5_2024-03-10T01-05-45.632321.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-10T01-05-45.632321.parquet'
- config_name: results
data_files:
- split: 2024_03_10T01_05_45.632321
path:
- results_2024-03-10T01-05-45.632321.parquet
- split: latest
path:
- results_2024-03-10T01-05-45.632321.parquet
---
# Dataset Card for Evaluation run of harshitv804/MetaMath-Mistral-2x7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [harshitv804/MetaMath-Mistral-2x7B](https://huggingface.co/harshitv804/MetaMath-Mistral-2x7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_harshitv804__MetaMath-Mistral-2x7B",
"harness_winogrande_5",
	split="latest")
```
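The configuration names follow a regular pattern, visible in the YAML list above: `harness_hendrycksTest_<subject>_5` for the MMLU subjects, alongside entries such as `harness_winogrande_5` and the aggregated `results` configuration. As a small sketch (the pattern is inferred from the configuration list above, not from any official API), the config name for a given MMLU subject can be built like this:

```python
# Sketch: build the config name for an MMLU (hendrycksTest) subject,
# following the naming pattern of the configurations listed in the YAML above.
def mmlu_config_name(subject: str, n_shots: int = 5) -> str:
    return f"harness_hendrycksTest_{subject}_{n_shots}"

print(mmlu_config_name("virology"))  # harness_hendrycksTest_virology_5
```

The resulting string can be passed as the second argument to `load_dataset` in place of `"harness_winogrande_5"` above.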
## Latest results
These are the [latest results from run 2024-03-10T01:05:45.632321](https://huggingface.co/datasets/open-llm-leaderboard/details_harshitv804__MetaMath-Mistral-2x7B/blob/main/results_2024-03-10T01-05-45.632321.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" configuration and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6218089799272568,
"acc_stderr": 0.03263681999096668,
"acc_norm": 0.6219459868041436,
"acc_norm_stderr": 0.03330427342622862,
"mc1": 0.3023255813953488,
"mc1_stderr": 0.016077509266133026,
"mc2": 0.4479737874141746,
"mc2_stderr": 0.015466809789155087
},
"harness|arc:challenge|25": {
"acc": 0.5708191126279863,
"acc_stderr": 0.014464085894870653,
"acc_norm": 0.60580204778157,
"acc_norm_stderr": 0.014280522667467325
},
"harness|hellaswag|10": {
"acc": 0.6441943835889266,
"acc_stderr": 0.004777782584817784,
"acc_norm": 0.8259310894244174,
"acc_norm_stderr": 0.003783938150151617
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.618421052631579,
"acc_stderr": 0.03953173377749194,
"acc_norm": 0.618421052631579,
"acc_norm_stderr": 0.03953173377749194
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.028450154794118637,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.028450154794118637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7083333333333334,
"acc_stderr": 0.038009680605548594,
"acc_norm": 0.7083333333333334,
"acc_norm_stderr": 0.038009680605548594
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099521,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099521
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.04755129616062946,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.04755129616062946
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146267,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146267
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878151,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878151
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3994708994708995,
"acc_stderr": 0.025225450284067877,
"acc_norm": 0.3994708994708995,
"acc_norm_stderr": 0.025225450284067877
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.042857142857142816,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.042857142857142816
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7258064516129032,
"acc_stderr": 0.025378139970885196,
"acc_norm": 0.7258064516129032,
"acc_norm_stderr": 0.025378139970885196
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4729064039408867,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.4729064039408867,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.030532892233932022,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.030532892233932022
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.025416343096306433,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.025416343096306433
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6051282051282051,
"acc_stderr": 0.0247843169421564,
"acc_norm": 0.6051282051282051,
"acc_norm_stderr": 0.0247843169421564
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465073,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465073
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.634453781512605,
"acc_stderr": 0.031282177063684614,
"acc_norm": 0.634453781512605,
"acc_norm_stderr": 0.031282177063684614
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.038615575462551684,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.038615575462551684
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8073394495412844,
"acc_stderr": 0.016909276884936073,
"acc_norm": 0.8073394495412844,
"acc_norm_stderr": 0.016909276884936073
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4398148148148148,
"acc_stderr": 0.03385177976044811,
"acc_norm": 0.4398148148148148,
"acc_norm_stderr": 0.03385177976044811
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.028379449451588663,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.028379449451588663
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7552742616033755,
"acc_stderr": 0.027985699387036423,
"acc_norm": 0.7552742616033755,
"acc_norm_stderr": 0.027985699387036423
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6636771300448431,
"acc_stderr": 0.031708824268455,
"acc_norm": 0.6636771300448431,
"acc_norm_stderr": 0.031708824268455
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228733,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228733
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092368,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092368
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7969348659003831,
"acc_stderr": 0.01438552507661157,
"acc_norm": 0.7969348659003831,
"acc_norm_stderr": 0.01438552507661157
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.708092485549133,
"acc_stderr": 0.024476994076247326,
"acc_norm": 0.708092485549133,
"acc_norm_stderr": 0.024476994076247326
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.35083798882681566,
"acc_stderr": 0.015961036675230952,
"acc_norm": 0.35083798882681566,
"acc_norm_stderr": 0.015961036675230952
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7418300653594772,
"acc_stderr": 0.025058503316958147,
"acc_norm": 0.7418300653594772,
"acc_norm_stderr": 0.025058503316958147
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.684887459807074,
"acc_stderr": 0.02638527370346449,
"acc_norm": 0.684887459807074,
"acc_norm_stderr": 0.02638527370346449
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.025407197798890162,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.025407197798890162
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4716312056737589,
"acc_stderr": 0.029779450957303062,
"acc_norm": 0.4716312056737589,
"acc_norm_stderr": 0.029779450957303062
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4498044328552803,
"acc_stderr": 0.012705721498565109,
"acc_norm": 0.4498044328552803,
"acc_norm_stderr": 0.012705721498565109
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6066176470588235,
"acc_stderr": 0.029674288281311155,
"acc_norm": 0.6066176470588235,
"acc_norm_stderr": 0.029674288281311155
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6454248366013072,
"acc_stderr": 0.019353360547553697,
"acc_norm": 0.6454248366013072,
"acc_norm_stderr": 0.019353360547553697
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6816326530612244,
"acc_stderr": 0.029822533793982066,
"acc_norm": 0.6816326530612244,
"acc_norm_stderr": 0.029822533793982066
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.02650859065623326,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.02650859065623326
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774711,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774711
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.029913127232368043,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.029913127232368043
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3023255813953488,
"mc1_stderr": 0.016077509266133026,
"mc2": 0.4479737874141746,
"mc2_stderr": 0.015466809789155087
},
"harness|winogrande|5": {
"acc": 0.7600631412786109,
"acc_stderr": 0.01200207862948574
},
"harness|gsm8k|5": {
"acc": 0.6921910538286581,
"acc_stderr": 0.01271440100992365
}
}
```
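The `all` block at the top aggregates the per-task metrics that follow it. As an illustration (assuming the aggregate is an unweighted mean over tasks, which is how such harness summaries are typically computed — this is an assumption, not something stated in the results file), the averaging can be sketched as:

```python
# Hypothetical subset of the per-task accuracies shown above.
per_task_acc = {
    "harness|arc:challenge|25": 0.5708191126279863,
    "harness|hellaswag|10": 0.6441943835889266,
}

# Unweighted mean across tasks (an assumption about how "all" is computed).
mean_acc = sum(per_task_acc.values()) / len(per_task_acc)
print(round(mean_acc, 4))  # 0.6075
```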
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
wtcherr/unsplash_5k_blur_17KS | ---
dataset_info:
features:
- name: image
dtype: image
- name: guide
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1699759141.0
num_examples: 5000
download_size: 1699659134
dataset_size: 1699759141.0
---
# Dataset Card for "unsplash_5k_blur_17KS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ErfanMoosaviMonazzah/esnli-seq2seq | ---
dataset_info:
features:
- name: input_seq2seq
dtype: string
- name: output_seq2seq
dtype: string
splits:
- name: train
num_bytes: 115466448
num_examples: 549367
- name: validation
num_bytes: 3815481
num_examples: 9842
- name: test
num_bytes: 3778177
num_examples: 9824
download_size: 44410181
dataset_size: 123060106
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
NyxSlee/translating_mplm_dataset_thre | ---
dataset_info:
features:
- name: number
dtype: string
- name: sentence
dtype: string
- name: word_translations
struct:
- name: 一个 (yī gè)
dtype: string
- name: 一尊 (yī zūn)
dtype: string
- name: 下来 (xià lái)
dtype: string
- name: 仿佛 (fǎng fú)
dtype: string
- name: 会 (huì)
dtype: string
- name: 凝固 (níng gù)
dtype: string
- name: 动过 (dòng guò)
dtype: string
- name: 只余 (zhǐ yú)
dtype: string
- name: 坐在 (zuò zài)
dtype: string
- name: 天色 (Tiān sè)
dtype: string
- name: 完全 (wán quán)
dtype: string
- name: 屋内 (wū nèi)
dtype: string
- name: 床边 (chuáng biān)
dtype: string
- name: 捧着 (pěng zhe)
dtype: string
- name: 放在 (fàng zài)
dtype: string
- name: 是 (shì)
dtype: string
- name: 暗了 (àn le)
dtype: string
- name: 暮色 (mù sè)
dtype: string
- name: 没有 (méi yǒu)
dtype: string
- name: 浅浅 (qiǎn qiǎn)
dtype: string
- name: 燃烛 (rán zhú)
dtype: string
- name: 的 (de)
dtype: string
- name: 糕点 (gāo diǎn)
dtype: string
- name: 许久 (xǔ jiǔ)
dtype: string
- name: 谁 (shuí)
dtype: string
- name: 身影 (shēn yǐng)
dtype: string
- name: 轮廓 (lún kuò)
dtype: string
- name: 这儿 (zhèr)
dtype: string
- name: 逐渐 (zhú jiàn)
dtype: string
- name: 都没有 (dōu méi yǒu)
dtype: string
- name: 阚闻萧 (Kàn wén xiāo)
dtype: string
- name: 隐没 (yǐn mò)
dtype: string
- name: 黑漆漆的 (hēi qī qī de)
dtype: string
- name: best_translation
dtype: string
- name: alternative_translations
sequence: string
splits:
- name: train
num_bytes: 3429
num_examples: 3
download_size: 27294
dataset_size: 3429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "translating_mplm_dataset_thre"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/francis_drake_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of francis_drake/フランシス・ドレイク/弗朗西斯·德雷克 (Fate/Grand Order)
This is the dataset of francis_drake/フランシス・ドレイク/弗朗西斯·德雷克 (Fate/Grand Order), containing 254 images and their tags.
The core tags of this character are `long_hair, pink_hair, breasts, blue_eyes, large_breasts, scar_on_face, ahoge`, which are pruned in this dataset.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 254 | 328.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/francis_drake_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200             | 254      | 290.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/francis_drake_fgo/resolve/main/dataset-1200.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 580 | 542.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/francis_drake_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/francis_drake_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, scar, smile, solo, cleavage, looking_at_viewer, eyepatch, ponytail, bare_shoulders, corset, simple_background, white_background |
| 1 | 8 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, pirate_hat, solo, scar, smile, simple_background, upper_body, white_background |
| 2 | 6 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, scar, smile, solo, white_pants, boots, pirate_hat, simple_background, white_background, epaulettes |
| 3 | 8 |  |  |  |  |  | 1boy, hetero, nipples, penis, scar, 1girl, blush, sex, solo_focus, spread_legs, vaginal, girl_on_top, navel, pussy, sweat, looking_at_viewer, open_mouth, pov, cowgirl_position, nude, pirate_hat, boots, grin, mosaic_censoring, testicles, thighhighs |
| 4 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, nipples, scar, smile, solo, barefoot, completely_nude, huge_breasts, ass, navel, armpits, barrel, blush, large_areolae, pussy, spread_legs, uncensored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | scar | smile | solo | cleavage | looking_at_viewer | eyepatch | ponytail | bare_shoulders | corset | simple_background | white_background | pirate_hat | upper_body | white_pants | boots | epaulettes | 1boy | hetero | nipples | penis | blush | sex | solo_focus | spread_legs | vaginal | girl_on_top | navel | pussy | sweat | open_mouth | pov | cowgirl_position | nude | grin | mosaic_censoring | testicles | thighhighs | barefoot | completely_nude | huge_breasts | ass | armpits | barrel | large_areolae | uncensored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:-------|:-----------|:--------------------|:-----------|:-----------|:-----------------|:---------|:--------------------|:-------------------|:-------------|:-------------|:--------------|:--------|:-------------|:-------|:---------|:----------|:--------|:--------|:------|:-------------|:--------------|:----------|:--------------|:--------|:--------|:--------|:-------------|:------|:-------------------|:-------|:-------|:-------------------|:------------|:-------------|:-----------|:------------------|:---------------|:------|:----------|:---------|:----------------|:-------------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | X | | | | | X | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | X | | | | X | | | | | | | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | | | | | | | | | | | X | | X | | | X | | | X | X | | | | | | | | | | X | X | X | X | X | X | X | X |
|
one-sec-cv12/chunk_51 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 23367421872.875
num_examples: 243289
download_size: 21370502681
dataset_size: 23367421872.875
---
# Dataset Card for "chunk_51"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lipez/luisvozz | ---
license: openrail
---
|
lighteval/trivia_qa | ---
dataset_info:
- config_name: default
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: entity_pages
sequence:
- name: doc_source
dtype: string
- name: filename
dtype: string
- name: title
dtype: string
- name: wiki_context
dtype: string
- name: search_results
sequence:
- name: description
dtype: string
- name: filename
dtype: string
- name: rank
dtype: int32
- name: title
dtype: string
- name: url
dtype: string
- name: search_context
dtype: string
- name: answer
struct:
- name: aliases
sequence: string
- name: normalized_aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 106882730
num_examples: 138384
- name: validation
num_bytes: 14059830
num_examples: 17944
- name: test
num_bytes: 3667903
num_examples: 17210
download_size: 63926518
dataset_size: 124610463
- config_name: rc.nocontext
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: entity_pages
sequence:
- name: doc_source
dtype: string
- name: filename
dtype: string
- name: title
dtype: string
- name: wiki_context
dtype: string
- name: search_results
sequence:
- name: description
dtype: string
- name: filename
dtype: string
- name: rank
dtype: int32
- name: title
dtype: string
- name: url
dtype: string
- name: search_context
dtype: string
- name: answer
struct:
- name: aliases
sequence: string
- name: normalized_aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 106882730
num_examples: 138384
- name: validation
num_bytes: 14059830
num_examples: 17944
- name: test
num_bytes: 3667903
num_examples: 17210
download_size: 63926518
dataset_size: 124610463
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- config_name: rc.nocontext
data_files:
- split: train
path: rc.nocontext/train-*
- split: validation
path: rc.nocontext/validation-*
- split: test
path: rc.nocontext/test-*
---
# Dataset Card for "trivia_qa"
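The `answer` struct in the schema above carries both raw `aliases` and `normalized_aliases`. A common way to score a prediction against TriviaQA-style records is to normalize it and test membership in `normalized_aliases`; below is a minimal sketch of that idea (the `normalize` function here is an approximation for illustration, not the official TriviaQA evaluation script, and the `answer` record is a toy example shaped like the schema):

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and English articles, collapse
    whitespace (an approximation of common TriviaQA-style scoring)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def is_correct(prediction: str, answer: dict) -> bool:
    """True if the normalized prediction matches any normalized alias."""
    return normalize(prediction) in answer["normalized_aliases"]

# A toy record shaped like the `answer` struct in the schema above.
answer = {
    "aliases": ["The Beatles", "Beatles"],
    "normalized_aliases": ["beatles"],
    "value": "The Beatles",
}

print(is_correct("The Beatles!", answer))  # matches via normalization
```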
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thangvip/openhermes-vi | ---
dataset_info:
features:
- name: conversation
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: float64
- name: type
dtype: string
- name: id
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 97171889
num_examples: 26278
download_size: 44630184
dataset_size: 97171889
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
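The schema above exposes each dialogue in two layouts: a ShareGPT-style `conversation` list (`from`/`value`/`weight`) and a chat-style `messages` list (`content`/`role`). A minimal sketch of converting one layout to the other follows; the speaker-tag vocabulary and role mapping are assumptions based on common ShareGPT conventions, since the card does not document them:

```python
# Speaker-tag -> chat-role mapping (assumed ShareGPT convention; the card
# itself does not document the tag vocabulary).
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def conversation_to_messages(conversation):
    """Convert `conversation` turns ({"from", "value", ...}) into
    `messages` entries ({"role", "content"})."""
    return [
        {"role": ROLE_MAP.get(turn["from"], turn["from"]),
         "content": turn["value"]}
        for turn in conversation
    ]

conversation = [
    {"from": "human", "value": "Xin chào!", "weight": None},
    {"from": "gpt", "value": "Chào bạn!", "weight": 1.0},
]
print(conversation_to_messages(conversation))
```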
|
open-llm-leaderboard/details_TomGrc__FusionNet_34Bx2_MoE | ---
pretty_name: Evaluation run of TomGrc/FusionNet_34Bx2_MoE
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TomGrc/FusionNet_34Bx2_MoE](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TomGrc__FusionNet_34Bx2_MoE\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-22T11:29:51.974520](https://huggingface.co/datasets/open-llm-leaderboard/details_TomGrc__FusionNet_34Bx2_MoE/blob/main/results_2024-01-22T11-29-51.974520.json) (note
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7677884016423521,\n\
\ \"acc_stderr\": 0.028039750027124166,\n \"acc_norm\": 0.7713984723671282,\n\
\ \"acc_norm_stderr\": 0.028574402204719553,\n \"mc1\": 0.5520195838433293,\n\
\ \"mc1_stderr\": 0.017408513063422906,\n \"mc2\": 0.7131206524056665,\n\
\ \"mc2_stderr\": 0.014366676245195859\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6962457337883959,\n \"acc_stderr\": 0.01343890918477876,\n\
\ \"acc_norm\": 0.7295221843003413,\n \"acc_norm_stderr\": 0.012980954547659556\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6693885680143398,\n\
\ \"acc_stderr\": 0.004694718918225755,\n \"acc_norm\": 0.8621788488348935,\n\
\ \"acc_norm_stderr\": 0.003440076775300576\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7333333333333333,\n\
\ \"acc_stderr\": 0.038201699145179055,\n \"acc_norm\": 0.7333333333333333,\n\
\ \"acc_norm_stderr\": 0.038201699145179055\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.881578947368421,\n \"acc_stderr\": 0.026293995855474938,\n\
\ \"acc_norm\": 0.881578947368421,\n \"acc_norm_stderr\": 0.026293995855474938\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.77,\n\
\ \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \
\ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8075471698113208,\n \"acc_stderr\": 0.024262979839372274,\n\
\ \"acc_norm\": 0.8075471698113208,\n \"acc_norm_stderr\": 0.024262979839372274\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8680555555555556,\n\
\ \"acc_stderr\": 0.02830096838204443,\n \"acc_norm\": 0.8680555555555556,\n\
\ \"acc_norm_stderr\": 0.02830096838204443\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.59,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\": 0.59,\n\
\ \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7456647398843931,\n\
\ \"acc_stderr\": 0.0332055644308557,\n \"acc_norm\": 0.7456647398843931,\n\
\ \"acc_norm_stderr\": 0.0332055644308557\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5686274509803921,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.5686274509803921,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n\
\ \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7829787234042553,\n \"acc_stderr\": 0.02694748312149622,\n\
\ \"acc_norm\": 0.7829787234042553,\n \"acc_norm_stderr\": 0.02694748312149622\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6140350877192983,\n\
\ \"acc_stderr\": 0.04579639422070434,\n \"acc_norm\": 0.6140350877192983,\n\
\ \"acc_norm_stderr\": 0.04579639422070434\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.7517241379310344,\n \"acc_stderr\": 0.036001056927277696,\n\
\ \"acc_norm\": 0.7517241379310344,\n \"acc_norm_stderr\": 0.036001056927277696\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.7222222222222222,\n \"acc_stderr\": 0.02306818884826112,\n \"\
acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.02306818884826112\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5555555555555556,\n\
\ \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.5555555555555556,\n\
\ \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9161290322580645,\n\
\ \"acc_stderr\": 0.015769027496775664,\n \"acc_norm\": 0.9161290322580645,\n\
\ \"acc_norm_stderr\": 0.015769027496775664\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6502463054187192,\n \"acc_stderr\": 0.03355400904969566,\n\
\ \"acc_norm\": 0.6502463054187192,\n \"acc_norm_stderr\": 0.03355400904969566\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\"\
: 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8727272727272727,\n \"acc_stderr\": 0.026024657651656177,\n\
\ \"acc_norm\": 0.8727272727272727,\n \"acc_norm_stderr\": 0.026024657651656177\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9191919191919192,\n \"acc_stderr\": 0.019417681889724536,\n \"\
acc_norm\": 0.9191919191919192,\n \"acc_norm_stderr\": 0.019417681889724536\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9740932642487047,\n \"acc_stderr\": 0.011464523356953162,\n\
\ \"acc_norm\": 0.9740932642487047,\n \"acc_norm_stderr\": 0.011464523356953162\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8179487179487179,\n \"acc_stderr\": 0.019565236782930893,\n\
\ \"acc_norm\": 0.8179487179487179,\n \"acc_norm_stderr\": 0.019565236782930893\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.45925925925925926,\n \"acc_stderr\": 0.03038416923235083,\n \
\ \"acc_norm\": 0.45925925925925926,\n \"acc_norm_stderr\": 0.03038416923235083\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8487394957983193,\n \"acc_stderr\": 0.023274255898707946,\n\
\ \"acc_norm\": 0.8487394957983193,\n \"acc_norm_stderr\": 0.023274255898707946\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4966887417218543,\n \"acc_stderr\": 0.04082393379449654,\n \"\
acc_norm\": 0.4966887417218543,\n \"acc_norm_stderr\": 0.04082393379449654\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9155963302752294,\n \"acc_stderr\": 0.011918819327334879,\n \"\
acc_norm\": 0.9155963302752294,\n \"acc_norm_stderr\": 0.011918819327334879\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6666666666666666,\n \"acc_stderr\": 0.03214952147802749,\n \"\
acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03214952147802749\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9215686274509803,\n \"acc_stderr\": 0.018869514646658928,\n \"\
acc_norm\": 0.9215686274509803,\n \"acc_norm_stderr\": 0.018869514646658928\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8987341772151899,\n \"acc_stderr\": 0.019637720526065522,\n \
\ \"acc_norm\": 0.8987341772151899,\n \"acc_norm_stderr\": 0.019637720526065522\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7802690582959642,\n\
\ \"acc_stderr\": 0.027790177064383595,\n \"acc_norm\": 0.7802690582959642,\n\
\ \"acc_norm_stderr\": 0.027790177064383595\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8778625954198473,\n \"acc_stderr\": 0.028718776889342323,\n\
\ \"acc_norm\": 0.8778625954198473,\n \"acc_norm_stderr\": 0.028718776889342323\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8925619834710744,\n \"acc_stderr\": 0.028268812192540637,\n \"\
acc_norm\": 0.8925619834710744,\n \"acc_norm_stderr\": 0.028268812192540637\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.03038159675665167,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.03038159675665167\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.901840490797546,\n \"acc_stderr\": 0.023376180231059602,\n\
\ \"acc_norm\": 0.901840490797546,\n \"acc_norm_stderr\": 0.023376180231059602\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5892857142857143,\n\
\ \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.5892857142857143,\n\
\ \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.883495145631068,\n \"acc_stderr\": 0.03176683948640407,\n\
\ \"acc_norm\": 0.883495145631068,\n \"acc_norm_stderr\": 0.03176683948640407\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n\
\ \"acc_stderr\": 0.015537514263253862,\n \"acc_norm\": 0.9401709401709402,\n\
\ \"acc_norm_stderr\": 0.015537514263253862\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.92,\n \"acc_stderr\": 0.027265992434429093,\n \
\ \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.027265992434429093\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9106002554278416,\n\
\ \"acc_stderr\": 0.010203017847688298,\n \"acc_norm\": 0.9106002554278416,\n\
\ \"acc_norm_stderr\": 0.010203017847688298\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8265895953757225,\n \"acc_stderr\": 0.020383229551135026,\n\
\ \"acc_norm\": 0.8265895953757225,\n \"acc_norm_stderr\": 0.020383229551135026\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7865921787709497,\n\
\ \"acc_stderr\": 0.01370285993219609,\n \"acc_norm\": 0.7865921787709497,\n\
\ \"acc_norm_stderr\": 0.01370285993219609\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.020279402936174588,\n\
\ \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.020279402936174588\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8360128617363344,\n\
\ \"acc_stderr\": 0.021029576464662695,\n \"acc_norm\": 0.8360128617363344,\n\
\ \"acc_norm_stderr\": 0.021029576464662695\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8796296296296297,\n \"acc_stderr\": 0.01810541409432967,\n\
\ \"acc_norm\": 0.8796296296296297,\n \"acc_norm_stderr\": 0.01810541409432967\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6453900709219859,\n \"acc_stderr\": 0.028538650028878627,\n \
\ \"acc_norm\": 0.6453900709219859,\n \"acc_norm_stderr\": 0.028538650028878627\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5938722294654498,\n\
\ \"acc_stderr\": 0.012543154588412923,\n \"acc_norm\": 0.5938722294654498,\n\
\ \"acc_norm_stderr\": 0.012543154588412923\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8308823529411765,\n \"acc_stderr\": 0.022770868010113018,\n\
\ \"acc_norm\": 0.8308823529411765,\n \"acc_norm_stderr\": 0.022770868010113018\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.826797385620915,\n \"acc_stderr\": 0.015309329266969133,\n \
\ \"acc_norm\": 0.826797385620915,\n \"acc_norm_stderr\": 0.015309329266969133\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7454545454545455,\n\
\ \"acc_stderr\": 0.041723430387053825,\n \"acc_norm\": 0.7454545454545455,\n\
\ \"acc_norm_stderr\": 0.041723430387053825\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8489795918367347,\n \"acc_stderr\": 0.022923004094736847,\n\
\ \"acc_norm\": 0.8489795918367347,\n \"acc_norm_stderr\": 0.022923004094736847\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.900497512437811,\n\
\ \"acc_stderr\": 0.021166216304659393,\n \"acc_norm\": 0.900497512437811,\n\
\ \"acc_norm_stderr\": 0.021166216304659393\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \
\ \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8830409356725146,\n \"acc_stderr\": 0.024648068961366152,\n\
\ \"acc_norm\": 0.8830409356725146,\n \"acc_norm_stderr\": 0.024648068961366152\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5520195838433293,\n\
\ \"mc1_stderr\": 0.017408513063422906,\n \"mc2\": 0.7131206524056665,\n\
\ \"mc2_stderr\": 0.014366676245195859\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8397790055248618,\n \"acc_stderr\": 0.010309209498187479\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7088703563305534,\n \
\ \"acc_stderr\": 0.012513215297888463\n }\n}\n```"
repo_url: https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|arc:challenge|25_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|gsm8k|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hellaswag|10_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T11-29-51.974520.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-22T11-29-51.974520.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- '**/details_harness|winogrande|5_2024-01-22T11-29-51.974520.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-22T11-29-51.974520.parquet'
- config_name: results
data_files:
- split: 2024_01_22T11_29_51.974520
path:
- results_2024-01-22T11-29-51.974520.parquet
- split: latest
path:
- results_2024-01-22T11-29-51.974520.parquet
---
# Dataset Card for Evaluation run of TomGrc/FusionNet_34Bx2_MoE
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [TomGrc/FusionNet_34Bx2_MoE](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TomGrc__FusionNet_34Bx2_MoE",
"harness_winogrande_5",
	split="latest")
```
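The per-run split names in the configuration listing above appear to be derived from the run timestamp, with `-` and `:` normalised to `_`. A minimal sketch of that mapping (the naming rule is inferred from the listing, not documented behaviour):

```python
# Derive the per-run split name from a run timestamp, assuming the
# convention visible in the configuration listing above: both "-" and
# ":" in the timestamp are replaced with "_".
run_timestamp = "2024-01-22T11:29:51.974520"
split_name = run_timestamp.replace("-", "_").replace(":", "_")
print(split_name)  # 2024_01_22T11_29_51.974520
```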
## Latest results
These are the [latest results from run 2024-01-22T11:29:51.974520](https://huggingface.co/datasets/open-llm-leaderboard/details_TomGrc__FusionNet_34Bx2_MoE/blob/main/results_2024-01-22T11-29-51.974520.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each in the "results" config and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.7677884016423521,
"acc_stderr": 0.028039750027124166,
"acc_norm": 0.7713984723671282,
"acc_norm_stderr": 0.028574402204719553,
"mc1": 0.5520195838433293,
"mc1_stderr": 0.017408513063422906,
"mc2": 0.7131206524056665,
"mc2_stderr": 0.014366676245195859
},
"harness|arc:challenge|25": {
"acc": 0.6962457337883959,
"acc_stderr": 0.01343890918477876,
"acc_norm": 0.7295221843003413,
"acc_norm_stderr": 0.012980954547659556
},
"harness|hellaswag|10": {
"acc": 0.6693885680143398,
"acc_stderr": 0.004694718918225755,
"acc_norm": 0.8621788488348935,
"acc_norm_stderr": 0.003440076775300576
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.038201699145179055,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.038201699145179055
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.881578947368421,
"acc_stderr": 0.026293995855474938,
"acc_norm": 0.881578947368421,
"acc_norm_stderr": 0.026293995855474938
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8075471698113208,
"acc_stderr": 0.024262979839372274,
"acc_norm": 0.8075471698113208,
"acc_norm_stderr": 0.024262979839372274
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8680555555555556,
"acc_stderr": 0.02830096838204443,
"acc_norm": 0.8680555555555556,
"acc_norm_stderr": 0.02830096838204443
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.0332055644308557,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.0332055644308557
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5686274509803921,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.5686274509803921,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7829787234042553,
"acc_stderr": 0.02694748312149622,
"acc_norm": 0.7829787234042553,
"acc_norm_stderr": 0.02694748312149622
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6140350877192983,
"acc_stderr": 0.04579639422070434,
"acc_norm": 0.6140350877192983,
"acc_norm_stderr": 0.04579639422070434
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7517241379310344,
"acc_stderr": 0.036001056927277696,
"acc_norm": 0.7517241379310344,
"acc_norm_stderr": 0.036001056927277696
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.02306818884826112,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.02306818884826112
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9161290322580645,
"acc_stderr": 0.015769027496775664,
"acc_norm": 0.9161290322580645,
"acc_norm_stderr": 0.015769027496775664
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6502463054187192,
"acc_stderr": 0.03355400904969566,
"acc_norm": 0.6502463054187192,
"acc_norm_stderr": 0.03355400904969566
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8727272727272727,
"acc_stderr": 0.026024657651656177,
"acc_norm": 0.8727272727272727,
"acc_norm_stderr": 0.026024657651656177
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9191919191919192,
"acc_stderr": 0.019417681889724536,
"acc_norm": 0.9191919191919192,
"acc_norm_stderr": 0.019417681889724536
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9740932642487047,
"acc_stderr": 0.011464523356953162,
"acc_norm": 0.9740932642487047,
"acc_norm_stderr": 0.011464523356953162
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8179487179487179,
"acc_stderr": 0.019565236782930893,
"acc_norm": 0.8179487179487179,
"acc_norm_stderr": 0.019565236782930893
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.45925925925925926,
"acc_stderr": 0.03038416923235083,
"acc_norm": 0.45925925925925926,
"acc_norm_stderr": 0.03038416923235083
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8487394957983193,
"acc_stderr": 0.023274255898707946,
"acc_norm": 0.8487394957983193,
"acc_norm_stderr": 0.023274255898707946
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4966887417218543,
"acc_stderr": 0.04082393379449654,
"acc_norm": 0.4966887417218543,
"acc_norm_stderr": 0.04082393379449654
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9155963302752294,
"acc_stderr": 0.011918819327334879,
"acc_norm": 0.9155963302752294,
"acc_norm_stderr": 0.011918819327334879
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.03214952147802749,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.03214952147802749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9215686274509803,
"acc_stderr": 0.018869514646658928,
"acc_norm": 0.9215686274509803,
"acc_norm_stderr": 0.018869514646658928
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8987341772151899,
"acc_stderr": 0.019637720526065522,
"acc_norm": 0.8987341772151899,
"acc_norm_stderr": 0.019637720526065522
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7802690582959642,
"acc_stderr": 0.027790177064383595,
"acc_norm": 0.7802690582959642,
"acc_norm_stderr": 0.027790177064383595
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8778625954198473,
"acc_stderr": 0.028718776889342323,
"acc_norm": 0.8778625954198473,
"acc_norm_stderr": 0.028718776889342323
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8925619834710744,
"acc_stderr": 0.028268812192540637,
"acc_norm": 0.8925619834710744,
"acc_norm_stderr": 0.028268812192540637
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.03038159675665167,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.03038159675665167
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.901840490797546,
"acc_stderr": 0.023376180231059602,
"acc_norm": 0.901840490797546,
"acc_norm_stderr": 0.023376180231059602
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5892857142857143,
"acc_stderr": 0.04669510663875191,
"acc_norm": 0.5892857142857143,
"acc_norm_stderr": 0.04669510663875191
},
"harness|hendrycksTest-management|5": {
"acc": 0.883495145631068,
"acc_stderr": 0.03176683948640407,
"acc_norm": 0.883495145631068,
"acc_norm_stderr": 0.03176683948640407
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.015537514263253862,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.015537514263253862
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.92,
"acc_stderr": 0.027265992434429093,
"acc_norm": 0.92,
"acc_norm_stderr": 0.027265992434429093
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9106002554278416,
"acc_stderr": 0.010203017847688298,
"acc_norm": 0.9106002554278416,
"acc_norm_stderr": 0.010203017847688298
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8265895953757225,
"acc_stderr": 0.020383229551135026,
"acc_norm": 0.8265895953757225,
"acc_norm_stderr": 0.020383229551135026
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7865921787709497,
"acc_stderr": 0.01370285993219609,
"acc_norm": 0.7865921787709497,
"acc_norm_stderr": 0.01370285993219609
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.020279402936174588,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.020279402936174588
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8360128617363344,
"acc_stderr": 0.021029576464662695,
"acc_norm": 0.8360128617363344,
"acc_norm_stderr": 0.021029576464662695
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8796296296296297,
"acc_stderr": 0.01810541409432967,
"acc_norm": 0.8796296296296297,
"acc_norm_stderr": 0.01810541409432967
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6453900709219859,
"acc_stderr": 0.028538650028878627,
"acc_norm": 0.6453900709219859,
"acc_norm_stderr": 0.028538650028878627
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5938722294654498,
"acc_stderr": 0.012543154588412923,
"acc_norm": 0.5938722294654498,
"acc_norm_stderr": 0.012543154588412923
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8308823529411765,
"acc_stderr": 0.022770868010113018,
"acc_norm": 0.8308823529411765,
"acc_norm_stderr": 0.022770868010113018
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.826797385620915,
"acc_stderr": 0.015309329266969133,
"acc_norm": 0.826797385620915,
"acc_norm_stderr": 0.015309329266969133
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.041723430387053825,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.041723430387053825
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8489795918367347,
"acc_stderr": 0.022923004094736847,
"acc_norm": 0.8489795918367347,
"acc_norm_stderr": 0.022923004094736847
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.900497512437811,
"acc_stderr": 0.021166216304659393,
"acc_norm": 0.900497512437811,
"acc_norm_stderr": 0.021166216304659393
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8830409356725146,
"acc_stderr": 0.024648068961366152,
"acc_norm": 0.8830409356725146,
"acc_norm_stderr": 0.024648068961366152
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5520195838433293,
"mc1_stderr": 0.017408513063422906,
"mc2": 0.7131206524056665,
"mc2_stderr": 0.014366676245195859
},
"harness|winogrande|5": {
"acc": 0.8397790055248618,
"acc_stderr": 0.010309209498187479
},
"harness|gsm8k|5": {
"acc": 0.7088703563305534,
"acc_stderr": 0.012513215297888463
}
}
```
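The aggregated scores above can be turned into leaderboard-style percentages with a few lines of Python. The snippet below copies the headline values straight from the JSON; `headline` is a hypothetical helper for illustration, not part of the evaluation harness:

```python
# Headline metrics copied from the results JSON above; the keys follow
# the "harness|<task>|<n-shot>" naming used throughout this card.
results = {
    "harness|arc:challenge|25": {"acc_norm": 0.7295221843003413},
    "harness|hellaswag|10": {"acc_norm": 0.8621788488348935},
    "harness|truthfulqa:mc|0": {"mc2": 0.7131206524056665},
    "harness|winogrande|5": {"acc": 0.8397790055248618},
    "harness|gsm8k|5": {"acc": 0.7088703563305534},
}

def headline(task: str, metric: str) -> float:
    """Return a metric as a percentage rounded to two decimals (hypothetical helper)."""
    return round(results[task][metric] * 100, 2)

print(headline("harness|winogrande|5", "acc"))  # 83.98
```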
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
rookshanks/gsm8k | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3566510.564699585
num_examples: 6725
- name: test
num_bytes: 713732
num_examples: 1319
- name: validation
num_bytes: 396691.4353004148
num_examples: 748
download_size: 2306142
dataset_size: 4676933.999999999
---
# Dataset Card for "gsm8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
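The split sizes in the metadata above suggest this variant carves a validation set out of the original GSM8K train split (7,473 examples) while keeping the 1,319-example test set intact. A quick sanity check (the 90/10 interpretation is an inference from the numbers, not documented):

```python
# Split sizes taken from the dataset_info block above.
train, validation, test = 6725, 748, 1319

# train + validation matches the original GSM8K train split size.
assert train + validation == 7473

# The validation fraction is roughly 10% of the original train split.
print(round(validation / (train + validation), 3))  # 0.1
```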
Nexdata/Occluded_and_Multi-pose_Face_Recognition_Data | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Occluded_and_Multi-pose_Face_Recognition_Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1073?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
1,930 People with Occlusion and Multi-pose Face Recognition Data: for each subject, 200 images were collected. The 200 images cover 4 kinds of light conditions * 10 kinds of occlusion cases (including the non-occluded case) * 5 kinds of face poses. This data can be applied to computer vision tasks such as occluded face detection and recognition.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1073?source=Huggingface
### Supported Tasks and Leaderboards
face-detection, computer-vision: The dataset can be used to train a model for face detection.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
jahb57/bert_embeddings_BATCH_2 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: last_hidden_state
sequence:
sequence: float32
- name: pooler_output
sequence: float32
splits:
- name: train
num_bytes: 19612067644
num_examples: 100000
download_size: 19736597181
dataset_size: 19612067644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
azad-wolf-se/MH-FED | ---
language: en
tags:
- Computer Vision
- Machine Learning
- Deep Learning
---
# FExGAN-Meta: Facial Expression Generation with Meta-Humans

This is a facial expression image dataset proposed in the following article:
[FExGAN-Meta: Facial Expression Generation with Meta-Humans](https://arxiv.org/abs/2203.05975)
## Fill the Below Form to get the password
[Dataset Request Form](https://forms.gle/JwjNhaYG8gHNFCPo6)
# Citation
If you use any part of this dataset for experiments, please cite the following article.
```
@article{Siddiqui_FExGAN-Meta_2022,
author = {{Siddiqui}, J. Rafid},
title = {{FExGAN-Meta: Facial Expression Generation with Meta-Humans}},
journal = {ArXiv e-prints},
archivePrefix = "arXiv:2203.05975",
keywords = {Deep Learning, GAN, Facial Expressions},
year = {2022},
url = {https://arxiv.org/abs/2203.05975},
doi = {10.31219/osf.io/ygdrt}
}
```
|
danielroncel/dstc2_audios_input_hubert | ---
dataset_info:
features:
- name: session_ids
dtype: string
- name: path
dtype: string
- name: input_values
sequence: float32
splits:
- name: train
num_bytes: 4900832859
num_examples: 25516
download_size: 3317106728
dataset_size: 4900832859
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nuprl/leetcode-js | ---
dataset_info:
features:
- name: QID
dtype: int64
- name: titleSlug
dtype: string
- name: Hints
sequence: string
- name: Code
dtype: string
- name: Body
dtype: string
- name: Difficulty
dtype: string
- name: Topics
sequence: string
- name: Definitions
dtype: string
- name: Solutions
sequence: string
splits:
- name: train
num_bytes: 5041708
num_examples: 2155
download_size: 2136839
dataset_size: 5041708
---
# Dataset Card for "leetcode-js"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thiomajid/clarins_products | ---
dataset_info:
features:
- name: url
dtype: string
- name: category
dtype: string
- name: price
dtype: string
- name: long_description
dtype: string
- name: name
dtype: string
- name: rating
dtype: string
- name: short_description
dtype: string
- name: size
dtype: string
- name: texture
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 18164
num_examples: 13
download_size: 20615
dataset_size: 18164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_stsb_drop_aux_yn | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 2916
num_examples: 15
- name: test
num_bytes: 2673
num_examples: 20
- name: train
num_bytes: 9761
num_examples: 74
download_size: 18386
dataset_size: 15350
---
# Dataset Card for "MULTI_VALUE_stsb_drop_aux_yn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
abhishek/autotrain-data-imgtestadv1 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': daisy
'1': dandelion
'2': rose
'3': sunflower
'4': tulip
splits:
- name: train
num_bytes: 114899554.104
num_examples: 2196
- name: validation
num_bytes: 33595969.0
num_examples: 550
download_size: 167066023
dataset_size: 148495523.104
---
# Dataset Card for "autotrain-data-imgtestadv1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AzerKBU/turkishReviews-ds-mini | ---
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 1252876.2642514652
num_examples: 3378
- name: validation
num_bytes: 139455.7357485349
num_examples: 376
download_size: 0
dataset_size: 1392332.0
---
# Dataset Card for "turkishReviews-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lasha-nlp/CONDAQA |
---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: condaqa
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- negation
- reading comprehension
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
## Dataset Description
- **Repository:** [https://github.com/AbhilashaRavichander/CondaQA](https://github.com/AbhilashaRavichander/CondaQA)
- **Paper:** [https://arxiv.org/abs/2211.00295](https://arxiv.org/abs/2211.00295)
- **Point of Contact:** aravicha@andrew.cmu.edu
## Dataset Summary
Data from the EMNLP 2022 paper by Ravichander et al.: "CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation".
If you use this dataset, we would appreciate you citing our work:
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
booktitle={Proceedings of EMNLP 2022},
year={2022}
}
```
From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues."
### Supported Tasks and Leaderboards
The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.
### Languages
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{"QuestionID": "q10",
"original cue": "rarely",
"PassageEditID": 0,
"original passage": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws.",
"SampleID": 5294,
"label": "YES",
"original sentence": "Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time.",
"sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?",
"PassageID": 444,
"sentence1": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws."
}
```
### Data Fields
* `QuestionID`: unique ID for this question (might be asked for multiple passages)
* `original cue`: Negation cue that was used to select this passage from Wikipedia
* `PassageEditID`: 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage
* `original passage`: Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)
* `SampleID`: unique ID for this passage-question pair
* `label`: answer
* `original sentence`: Sentence that contains the negated statement
* `sentence2`: question
* `PassageID`: unique ID for the Wikipedia passage
* `sentence1`: passage
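As an illustrative sketch (not part of the official release), the `PassageEditID` codes above can be decoded while inspecting instances:

```python
# Mapping taken from the field description above; the sample record is
# abbreviated from the instance shown under "Data Instances".
EDIT_TYPES = {
    0: "original passage",
    1: "paraphrase-edit passage",
    2: "scope-edit passage",
    3: "affirmative-edit passage",
}

def describe(instance: dict) -> str:
    """Summarize one CondaQA record as 'question -> label (edit type)'."""
    edit = EDIT_TYPES[instance["PassageEditID"]]
    return f"{instance['sentence2']} -> {instance['label']} ({edit})"

sample = {
    "PassageEditID": 0,
    "label": "YES",
    "sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?",
}
print(describe(sample))
```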
### Data Splits
Data splits can be accessed as:
```python
from datasets import load_dataset

train_set = load_dataset("lasha-nlp/CONDAQA", split="train")
dev_set = load_dataset("lasha-nlp/CONDAQA", split="dev")
test_set = load_dataset("lasha-nlp/CONDAQA", split="test")
```
## Dataset Creation
Full details are in the paper.
### Curation Rationale
From the paper: "Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:
1. The dataset should include a wide variety of negation cues, not just negative particles.
2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).
3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.
4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope."
### Source Data
From the paper: "To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation."
"We use negation cues from [Morante et al. (2011)](https://aclanthology.org/L12-1077/) and [van Son et al. (2016)](https://aclanthology.org/W16-5007/) as a starting point which we extend."
#### Initial Data Collection and Normalization
We show ten passages to crowdworkers and allow them to choose a passage they would like to work on.
#### Who are the source language producers?
Original passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.
### Annotations
#### Annotation process
From the paper: "In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages."
Full details are in the paper.
#### Who are the annotators?
From the paper: "Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.
### Personal and Sensitive Information
We expect that such information has already been redacted from Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
A model that solves this dataset might be (mis)represented as evidence that the model understands the entirety of the English language, and consequently deployed where it will have immediate and/or downstream impact on stakeholders.
### Discussion of Biases
We are not aware of societal biases that are exhibited in this dataset.
### Other Known Limitations
From the paper: "Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further, CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study."
## Additional Information
### Dataset Curators
From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months.
### Licensing Information
license: apache-2.0
### Citation Information
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
booktitle={Proceedings of EMNLP 2022},
year={2022}
}
``` |
quan246/doc_test | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: translation
struct:
- name: en
dtype: string
- name: vi
dtype: string
splits:
- name: test
num_bytes: 464211
num_examples: 4230
download_size: 262750
dataset_size: 464211
---
# Dataset Card for "doc_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
080-ai/mcq_ps_v1 | ---
license: cc-by-4.0
---
### Model - Multiple Choice Questions - Primary Sources
These 1,000 multiple-choice (MC) questions are derived from publicly available primary-source documents. The MC questions were generated with GPT-3.5-Turbo-0125, which summarized each document or pulled direct quotes, then generated a question based on the resulting paragraph, along with the answer choices and the reasoning for the correct answer.
### Limitations
The Python script used to generate these questions, together with the replies from GPT-3.5-Turbo, tended to make "A" the correct answer more often than not. I have found from a previous
dataset that this can cause the newly trained model to select "A" more often when it is guessing. To limit this problem in the future, an edit may be needed to produce a proper random distribution of the A-D options
for this dataset. As of right now, this problem persists in the training data. |
CyberHarem/saphy_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of saphy (Fire Emblem)
This is the dataset of saphy (Fire Emblem), containing 17 images and their tags.
The core tags of this character are `green_hair, green_eyes, long_hair, bangs, breasts, hat`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 17 | 13.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saphy_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 17 | 8.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saphy_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 26 | 12.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saphy_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 17 | 11.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saphy_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 26 | 16.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saphy_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/saphy_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, long_sleeves, hood, solo, jewelry, smile, white_background, full_body, holding_staff, looking_at_viewer, simple_background, long_dress, open_mouth, white_cape |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | hood | solo | jewelry | smile | white_background | full_body | holding_staff | looking_at_viewer | simple_background | long_dress | open_mouth | white_cape |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:-------|:----------|:--------|:-------------------|:------------|:----------------|:--------------------|:--------------------|:-------------|:-------------|:-------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
llm-book/jawiki-20220404-c400 | ---
license: mit
task_categories:
- question-answering
language:
- ja
size_categories:
- 10M<n<100M
---
# Dataset Card for jawiki-20220404-c400
This dataset contains passages, each of which consists of consecutive sentences no longer than 400 characters from Japanese Wikipedia as of 2022-04-04.
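The actual segmentation code lives in the repositories linked on this card; as a rough sketch of the idea (an assumption, not the official implementation), greedily packing consecutive sentences into passages of at most 400 characters could look like:

```python
def pack_passages(sentences, max_chars=400):
    """Greedily group consecutive sentences into passages whose total
    length does not exceed max_chars. Illustrative sketch only; see
    cl-tohoku/quiz-datasets for the actual passage construction."""
    passages, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) > max_chars:
            passages.append(current)
            current = sentence
        else:
            current += sentence
    if current:
        passages.append(current)
    return passages

# Two passages: the first sentence alone, then the next two combined.
print(pack_passages(["a" * 250, "b" * 200, "c" * 100]))
```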
This dataset is used in baseline systems for [the AI王 question answering competition](https://sites.google.com/view/project-aio/home), such as [cl-tohoku/AIO3_BPR_baseline](https://github.com/cl-tohoku/AIO3_BPR_baseline).
Please refer to [the original repository](https://github.com/cl-tohoku/quiz-datasets) for further details. |
mask-distilled-one-sec-cv12/chunk_41 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1270362344
num_examples: 249482
download_size: 1289494342
dataset_size: 1270362344
---
# Dataset Card for "chunk_41"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MuhammadAtif3/toxic_comment_detection | ---
license: afl-3.0
---
|
Jaimefebe/Eus02 | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int64
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int64
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int64
- name: name
sequence: string
- name: value
sequence: float64
splits:
- name: train
num_bytes: 78710329
num_examples: 77537
- name: validation
num_bytes: 2431893
num_examples: 2351
download_size: 27172494
dataset_size: 81142222
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
chuquan282/Vision_Assistant_CBD | ---
license: apache-2.0
dataset_info:
features:
- name: image_path
dtype: string
- name: status
dtype: string
- name: item_name
dtype: string
- name: possition
dtype: string
- name: caption_en
dtype: string
- name: Describe_en
dtype: string
- name: caption_vi
dtype: string
- name: Describe_vi
dtype: string
- name: prompt
sequence: string
splits:
- name: train
num_bytes: 253007.42105263157
num_examples: 85
- name: test
num_bytes: 29765.57894736842
num_examples: 10
download_size: 55836
dataset_size: 282773.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Mutonix/RefGPT-Fact-v2 | ---
dataset_info:
features:
- name: dialogue
dtype: string
- name: reference
dtype: string
- name: language
dtype: string
- name: type
dtype: string
splits:
- name: zh
num_bytes: 213783237
num_examples: 60662
- name: en
num_bytes: 551465272
num_examples: 58580
download_size: 451398576
dataset_size: 765248509
---
# Dataset Card for "RefGPT-Fact-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jakartaresearch/news-title-gen | ---
annotations_creators:
- no-annotation
language:
- id
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Indonesian News Title Generation
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- newspapers
- title
- news
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for Indonesian News Title Generation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. |
yzhuang/autotree_automl_bank-marketing_gosdt_l512_d3_sd3 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float64
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float64
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 5538400000
num_examples: 100000
- name: validation
num_bytes: 553840000
num_examples: 10000
download_size: 811018150
dataset_size: 6092240000
---
# Dataset Card for "autotree_automl_bank-marketing_gosdt_l512_d3_sd3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sargishunanyan/thermo-classification | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="sargishunanyan/thermo-classification" src="https://huggingface.co/datasets/sargishunanyan/thermo-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Thermostat', 'Housing', 'Insert']
```
### Number of Images
```json
{'valid': 102, 'test': 52, 'train': 372}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("sargishunanyan/thermo-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0/dataset/3](https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ proj-2-qmdk0_dataset,
title = { proj 2 Dataset },
type = { Open Source Dataset },
author = { Yolo },
howpublished = { \\url{ https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0 } },
url = { https://universe.roboflow.com/yolo-po0ro/proj-2-qmdk0 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { oct },
note = { visited on 2023-10-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on October 8, 2023 at 7:58 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 526 images.
Car-parts are annotated in folder format.
No pre-processing or image augmentation techniques were applied to the images.
|
open-llm-leaderboard/details_KnutJaegersberg__Deacon-1b | ---
pretty_name: Evaluation run of KnutJaegersberg/Deacon-1b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [KnutJaegersberg/Deacon-1b](https://huggingface.co/KnutJaegersberg/Deacon-1b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KnutJaegersberg__Deacon-1b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-04T17:32:52.596072](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deacon-1b/blob/main/results_2023-12-04T17-32-52.596072.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2547551700512293,\n\
\ \"acc_stderr\": 0.030605522190513053,\n \"acc_norm\": 0.2559364936006559,\n\
\ \"acc_norm_stderr\": 0.03137480856769965,\n \"mc1\": 0.2178702570379437,\n\
\ \"mc1_stderr\": 0.01445084671412389,\n \"mc2\": 0.35049035383875937,\n\
\ \"mc2_stderr\": 0.014299155547047497\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.3003412969283277,\n \"acc_stderr\": 0.013395909309957004,\n\
\ \"acc_norm\": 0.3242320819112628,\n \"acc_norm_stderr\": 0.013678810399518827\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.44722166899024096,\n\
\ \"acc_stderr\": 0.004961904949171387,\n \"acc_norm\": 0.5862378012348137,\n\
\ \"acc_norm_stderr\": 0.004915003499517835\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3925925925925926,\n\
\ \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.3925925925925926,\n\
\ \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.16447368421052633,\n \"acc_stderr\": 0.030167533468632702,\n\
\ \"acc_norm\": 0.16447368421052633,\n \"acc_norm_stderr\": 0.030167533468632702\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2528301886792453,\n \"acc_stderr\": 0.026749899771241238,\n\
\ \"acc_norm\": 0.2528301886792453,\n \"acc_norm_stderr\": 0.026749899771241238\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.039427724440366234,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.039427724440366234\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.2,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\"\
: 0.2,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653694,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.20809248554913296,\n\
\ \"acc_stderr\": 0.030952890217749884,\n \"acc_norm\": 0.20809248554913296,\n\
\ \"acc_norm_stderr\": 0.030952890217749884\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.18627450980392157,\n \"acc_stderr\": 0.038739587141493524,\n\
\ \"acc_norm\": 0.18627450980392157,\n \"acc_norm_stderr\": 0.038739587141493524\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2425531914893617,\n \"acc_stderr\": 0.028020226271200217,\n\
\ \"acc_norm\": 0.2425531914893617,\n \"acc_norm_stderr\": 0.028020226271200217\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.0414243971948936,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.0414243971948936\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n\
\ \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.24338624338624337,\n \"acc_stderr\": 0.02210112878741543,\n \"\
acc_norm\": 0.24338624338624337,\n \"acc_norm_stderr\": 0.02210112878741543\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.03718489006818114,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.03718489006818114\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2064516129032258,\n\
\ \"acc_stderr\": 0.02302589961718872,\n \"acc_norm\": 0.2064516129032258,\n\
\ \"acc_norm_stderr\": 0.02302589961718872\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.18226600985221675,\n \"acc_stderr\": 0.02716334085964515,\n\
\ \"acc_norm\": 0.18226600985221675,\n \"acc_norm_stderr\": 0.02716334085964515\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\"\
: 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.23030303030303031,\n \"acc_stderr\": 0.03287666758603489,\n\
\ \"acc_norm\": 0.23030303030303031,\n \"acc_norm_stderr\": 0.03287666758603489\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.21212121212121213,\n \"acc_stderr\": 0.029126522834586818,\n \"\
acc_norm\": 0.21212121212121213,\n \"acc_norm_stderr\": 0.029126522834586818\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.21761658031088082,\n \"acc_stderr\": 0.029778663037752937,\n\
\ \"acc_norm\": 0.21761658031088082,\n \"acc_norm_stderr\": 0.029778663037752937\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.24871794871794872,\n \"acc_stderr\": 0.021916957709213796,\n\
\ \"acc_norm\": 0.24871794871794872,\n \"acc_norm_stderr\": 0.021916957709213796\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.26296296296296295,\n \"acc_stderr\": 0.026842057873833706,\n \
\ \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.026842057873833706\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.19327731092436976,\n \"acc_stderr\": 0.02564947026588919,\n\
\ \"acc_norm\": 0.19327731092436976,\n \"acc_norm_stderr\": 0.02564947026588919\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2052980132450331,\n \"acc_stderr\": 0.03297986648473835,\n \"\
acc_norm\": 0.2052980132450331,\n \"acc_norm_stderr\": 0.03297986648473835\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.1761467889908257,\n \"acc_stderr\": 0.016332882393431378,\n \"\
acc_norm\": 0.1761467889908257,\n \"acc_norm_stderr\": 0.016332882393431378\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.3055555555555556,\n \"acc_stderr\": 0.03141554629402544,\n \"\
acc_norm\": 0.3055555555555556,\n \"acc_norm_stderr\": 0.03141554629402544\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.23529411764705882,\n \"acc_stderr\": 0.029771775228145635,\n \"\
acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.029771775228145635\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.26582278481012656,\n \"acc_stderr\": 0.02875679962965834,\n \
\ \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.02875679962965834\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.36771300448430494,\n\
\ \"acc_stderr\": 0.03236198350928275,\n \"acc_norm\": 0.36771300448430494,\n\
\ \"acc_norm_stderr\": 0.03236198350928275\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.22137404580152673,\n \"acc_stderr\": 0.03641297081313729,\n\
\ \"acc_norm\": 0.22137404580152673,\n \"acc_norm_stderr\": 0.03641297081313729\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794088,\n \"\
acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794088\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.24074074074074073,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.24074074074074073,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.2822085889570552,\n \"acc_stderr\": 0.03536117886664742,\n\
\ \"acc_norm\": 0.2822085889570552,\n \"acc_norm_stderr\": 0.03536117886664742\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.04547960999764376,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.04547960999764376\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.21359223300970873,\n \"acc_stderr\": 0.04058042015646036,\n\
\ \"acc_norm\": 0.21359223300970873,\n \"acc_norm_stderr\": 0.04058042015646036\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.3034188034188034,\n\
\ \"acc_stderr\": 0.03011821010694266,\n \"acc_norm\": 0.3034188034188034,\n\
\ \"acc_norm_stderr\": 0.03011821010694266\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.27330779054916987,\n\
\ \"acc_stderr\": 0.015936681062628556,\n \"acc_norm\": 0.27330779054916987,\n\
\ \"acc_norm_stderr\": 0.015936681062628556\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.2514450867052023,\n \"acc_stderr\": 0.02335736578587404,\n\
\ \"acc_norm\": 0.2514450867052023,\n \"acc_norm_stderr\": 0.02335736578587404\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n\
\ \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n\
\ \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.023929155517351284,\n\
\ \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.023929155517351284\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2765273311897106,\n\
\ \"acc_stderr\": 0.02540383297817962,\n \"acc_norm\": 0.2765273311897106,\n\
\ \"acc_norm_stderr\": 0.02540383297817962\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2191358024691358,\n \"acc_stderr\": 0.023016705640262206,\n\
\ \"acc_norm\": 0.2191358024691358,\n \"acc_norm_stderr\": 0.023016705640262206\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.26595744680851063,\n \"acc_stderr\": 0.026358065698880596,\n \
\ \"acc_norm\": 0.26595744680851063,\n \"acc_norm_stderr\": 0.026358065698880596\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.25358539765319427,\n\
\ \"acc_stderr\": 0.01111171533610113,\n \"acc_norm\": 0.25358539765319427,\n\
\ \"acc_norm_stderr\": 0.01111171533610113\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.029029422815681404,\n\
\ \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.029029422815681404\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.2565359477124183,\n \"acc_stderr\": 0.017667841612378988,\n \
\ \"acc_norm\": 0.2565359477124183,\n \"acc_norm_stderr\": 0.017667841612378988\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n\
\ \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n\
\ \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.17959183673469387,\n \"acc_stderr\": 0.024573293589585637,\n\
\ \"acc_norm\": 0.17959183673469387,\n \"acc_norm_stderr\": 0.024573293589585637\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n\
\ \"acc_stderr\": 0.03036049015401465,\n \"acc_norm\": 0.24378109452736318,\n\
\ \"acc_norm_stderr\": 0.03036049015401465\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.28313253012048195,\n\
\ \"acc_stderr\": 0.03507295431370519,\n \"acc_norm\": 0.28313253012048195,\n\
\ \"acc_norm_stderr\": 0.03507295431370519\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.3157894736842105,\n \"acc_stderr\": 0.03565079670708311,\n\
\ \"acc_norm\": 0.3157894736842105,\n \"acc_norm_stderr\": 0.03565079670708311\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2178702570379437,\n\
\ \"mc1_stderr\": 0.01445084671412389,\n \"mc2\": 0.35049035383875937,\n\
\ \"mc2_stderr\": 0.014299155547047497\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.595895816890292,\n \"acc_stderr\": 0.013791610664670849\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \
\ \"acc_stderr\": 0.002267537102254515\n }\n}\n```"
repo_url: https://huggingface.co/KnutJaegersberg/Deacon-1b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|arc:challenge|25_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|gsm8k|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hellaswag|10_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T17-32-52.596072.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T17-32-52.596072.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- '**/details_harness|winogrande|5_2023-12-04T17-32-52.596072.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-04T17-32-52.596072.parquet'
- config_name: results
data_files:
- split: 2023_12_04T17_32_52.596072
path:
- results_2023-12-04T17-32-52.596072.parquet
- split: latest
path:
- results_2023-12-04T17-32-52.596072.parquet
---
# Dataset Card for Evaluation run of KnutJaegersberg/Deacon-1b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/KnutJaegersberg/Deacon-1b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [KnutJaegersberg/Deacon-1b](https://huggingface.co/KnutJaegersberg/Deacon-1b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__Deacon-1b",
"harness_winogrande_5",
	split="latest")
```
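Each per-task configuration name is a mechanical rewrite of the harness task identifier that appears in the parquet paths above (`|`, `:`, and `-` each become `_`). A small helper sketching this mapping, inferred from the names listed in this card:

```python
def task_to_config_name(task_id: str) -> str:
    """Convert a harness task id (as used in the parquet file names)
    into the corresponding dataset config name.

    The mapping is inferred from the names in this card, e.g.
    'harness|truthfulqa:mc|0' -> 'harness_truthfulqa_mc_0'.
    """
    return task_id.replace("|", "_").replace(":", "_").replace("-", "_")
```

For example, `task_to_config_name("harness|hendrycksTest-econometrics|5")` returns `"harness_hendrycksTest_econometrics_5"`, which can be passed as the second argument to `load_dataset`.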
## Latest results
These are the [latest results from run 2023-12-04T17:32:52.596072](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deacon-1b/blob/main/results_2023-12-04T17-32-52.596072.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2547551700512293,
"acc_stderr": 0.030605522190513053,
"acc_norm": 0.2559364936006559,
"acc_norm_stderr": 0.03137480856769965,
"mc1": 0.2178702570379437,
"mc1_stderr": 0.01445084671412389,
"mc2": 0.35049035383875937,
"mc2_stderr": 0.014299155547047497
},
"harness|arc:challenge|25": {
"acc": 0.3003412969283277,
"acc_stderr": 0.013395909309957004,
"acc_norm": 0.3242320819112628,
"acc_norm_stderr": 0.013678810399518827
},
"harness|hellaswag|10": {
"acc": 0.44722166899024096,
"acc_stderr": 0.004961904949171387,
"acc_norm": 0.5862378012348137,
"acc_norm_stderr": 0.004915003499517835
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3925925925925926,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.3925925925925926,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.16447368421052633,
"acc_stderr": 0.030167533468632702,
"acc_norm": 0.16447368421052633,
"acc_norm_stderr": 0.030167533468632702
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2528301886792453,
"acc_stderr": 0.026749899771241238,
"acc_norm": 0.2528301886792453,
"acc_norm_stderr": 0.026749899771241238
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.25,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.19,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.19,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.20809248554913296,
"acc_stderr": 0.030952890217749884,
"acc_norm": 0.20809248554913296,
"acc_norm_stderr": 0.030952890217749884
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.18627450980392157,
"acc_stderr": 0.038739587141493524,
"acc_norm": 0.18627450980392157,
"acc_norm_stderr": 0.038739587141493524
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2425531914893617,
"acc_stderr": 0.028020226271200217,
"acc_norm": 0.2425531914893617,
"acc_norm_stderr": 0.028020226271200217
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.0414243971948936,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.0414243971948936
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24338624338624337,
"acc_stderr": 0.02210112878741543,
"acc_norm": 0.24338624338624337,
"acc_norm_stderr": 0.02210112878741543
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.03718489006818114,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.03718489006818114
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2064516129032258,
"acc_stderr": 0.02302589961718872,
"acc_norm": 0.2064516129032258,
"acc_norm_stderr": 0.02302589961718872
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.18226600985221675,
"acc_stderr": 0.02716334085964515,
"acc_norm": 0.18226600985221675,
"acc_norm_stderr": 0.02716334085964515
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.23030303030303031,
"acc_stderr": 0.03287666758603489,
"acc_norm": 0.23030303030303031,
"acc_norm_stderr": 0.03287666758603489
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.21212121212121213,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.21212121212121213,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.21761658031088082,
"acc_stderr": 0.029778663037752937,
"acc_norm": 0.21761658031088082,
"acc_norm_stderr": 0.029778663037752937
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.24871794871794872,
"acc_stderr": 0.021916957709213796,
"acc_norm": 0.24871794871794872,
"acc_norm_stderr": 0.021916957709213796
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26296296296296295,
"acc_stderr": 0.026842057873833706,
"acc_norm": 0.26296296296296295,
"acc_norm_stderr": 0.026842057873833706
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.19327731092436976,
"acc_stderr": 0.02564947026588919,
"acc_norm": 0.19327731092436976,
"acc_norm_stderr": 0.02564947026588919
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2052980132450331,
"acc_stderr": 0.03297986648473835,
"acc_norm": 0.2052980132450331,
"acc_norm_stderr": 0.03297986648473835
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.1761467889908257,
"acc_stderr": 0.016332882393431378,
"acc_norm": 0.1761467889908257,
"acc_norm_stderr": 0.016332882393431378
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3055555555555556,
"acc_stderr": 0.03141554629402544,
"acc_norm": 0.3055555555555556,
"acc_norm_stderr": 0.03141554629402544
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.029771775228145635,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.029771775228145635
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.26582278481012656,
"acc_stderr": 0.02875679962965834,
"acc_norm": 0.26582278481012656,
"acc_norm_stderr": 0.02875679962965834
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.36771300448430494,
"acc_stderr": 0.03236198350928275,
"acc_norm": 0.36771300448430494,
"acc_norm_stderr": 0.03236198350928275
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.22137404580152673,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.22137404580152673,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.23140495867768596,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.23140495867768596,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2822085889570552,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.2822085889570552,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04547960999764376,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04547960999764376
},
"harness|hendrycksTest-management|5": {
"acc": 0.21359223300970873,
"acc_stderr": 0.04058042015646036,
"acc_norm": 0.21359223300970873,
"acc_norm_stderr": 0.04058042015646036
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.3034188034188034,
"acc_stderr": 0.03011821010694266,
"acc_norm": 0.3034188034188034,
"acc_norm_stderr": 0.03011821010694266
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.27330779054916987,
"acc_stderr": 0.015936681062628556,
"acc_norm": 0.27330779054916987,
"acc_norm_stderr": 0.015936681062628556
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2514450867052023,
"acc_stderr": 0.02335736578587404,
"acc_norm": 0.2514450867052023,
"acc_norm_stderr": 0.02335736578587404
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.023929155517351284,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.023929155517351284
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2765273311897106,
"acc_stderr": 0.02540383297817962,
"acc_norm": 0.2765273311897106,
"acc_norm_stderr": 0.02540383297817962
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2191358024691358,
"acc_stderr": 0.023016705640262206,
"acc_norm": 0.2191358024691358,
"acc_norm_stderr": 0.023016705640262206
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.26595744680851063,
"acc_stderr": 0.026358065698880596,
"acc_norm": 0.26595744680851063,
"acc_norm_stderr": 0.026358065698880596
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.25358539765319427,
"acc_stderr": 0.01111171533610113,
"acc_norm": 0.25358539765319427,
"acc_norm_stderr": 0.01111171533610113
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.029029422815681404,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.029029422815681404
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2565359477124183,
"acc_stderr": 0.017667841612378988,
"acc_norm": 0.2565359477124183,
"acc_norm_stderr": 0.017667841612378988
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.17959183673469387,
"acc_stderr": 0.024573293589585637,
"acc_norm": 0.17959183673469387,
"acc_norm_stderr": 0.024573293589585637
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.03036049015401465,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.03036049015401465
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.28313253012048195,
"acc_stderr": 0.03507295431370519,
"acc_norm": 0.28313253012048195,
"acc_norm_stderr": 0.03507295431370519
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3157894736842105,
"acc_stderr": 0.03565079670708311,
"acc_norm": 0.3157894736842105,
"acc_norm_stderr": 0.03565079670708311
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2178702570379437,
"mc1_stderr": 0.01445084671412389,
"mc2": 0.35049035383875937,
"mc2_stderr": 0.014299155547047497
},
"harness|winogrande|5": {
"acc": 0.595895816890292,
"acc_stderr": 0.013791610664670849
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.002267537102254515
}
}
```
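Once loaded (for example with `json.load`), the results above are an ordinary nested mapping, so individual metrics can be pulled out directly. A minimal sketch using a few values copied verbatim from the JSON shown:

```python
import statistics

# A small subset of the per-task results above, values copied verbatim.
results = {
    "harness|arc:challenge|25": {"acc": 0.3003412969283277, "acc_norm": 0.3242320819112628},
    "harness|hellaswag|10": {"acc": 0.44722166899024096, "acc_norm": 0.5862378012348137},
    "harness|winogrande|5": {"acc": 0.595895816890292},
}

# Look up one task's accuracy, and average a metric over this subset of tasks.
arc_acc = results["harness|arc:challenge|25"]["acc"]
mean_acc = statistics.mean(task["acc"] for task in results.values())
```

The `"all"` block in the full JSON plays the same role over every evaluated task.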
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
jacobbieker/eumetsat-cloudmask-0deg | ---
license: mit
---
|
NyxSlee/translating_mplm_dataset_three | ---
dataset_info:
license: "MIT"
name: "translating_mplm_dataset_three"
description: "A dataset for translating sentences from MPLM"
homepage: "https://github.com/yourusername/your-repo"
task_categories:
- "language-translation"
languages:
- "zh"
- "en"
size: "3 examples"
download_size: "27.3 KB"
dataset_size: "3.42 KB"
visibility: public
status: active
authors:
- name: "Your Name"
email: "your.email@example.com"
creation_date: "2023-11-10"
repository: "https://github.com/yourusername/your-repo"
citation: |
@misc{yourcitation,
title={Your Dataset Title},
author={Your Name},
year={2023},
publisher={Your Publisher},
journal={Journal of Datasets},
howpublished={\url{https://github.com/yourusername/your-repo}},
}
---
# Rest of your dataset card
dataset_info:
features:
- name: number
dtype: string
- name: sentence
dtype: string
- name: word_translations
struct:
- name: 一个 (yī gè)
dtype: string
- name: 一尊 (yī zūn)
dtype: string
- name: 下来 (xià lái)
dtype: string
- name: 仿佛 (fǎng fú)
dtype: string
- name: 会 (huì)
dtype: string
- name: 凝固 (níng gù)
dtype: string
- name: 动过 (dòng guò)
dtype: string
- name: 只余 (zhǐ yú)
dtype: string
- name: 坐在 (zuò zài)
dtype: string
- name: 天色 (Tiān sè)
dtype: string
- name: 完全 (wán quán)
dtype: string
- name: 屋内 (wū nèi)
dtype: string
- name: 床边 (chuáng biān)
dtype: string
- name: 捧着 (pěng zhe)
dtype: string
- name: 放在 (fàng zài)
dtype: string
- name: 是 (shì)
dtype: string
- name: 暗了 (àn le)
dtype: string
- name: 暮色 (mù sè)
dtype: string
- name: 没有 (méi yǒu)
dtype: string
- name: 浅浅 (qiǎn qiǎn)
dtype: string
- name: 燃烛 (rán zhú)
dtype: string
- name: 的 (de)
dtype: string
- name: 糕点 (gāo diǎn)
dtype: string
- name: 许久 (xǔ jiǔ)
dtype: string
- name: 谁 (shuí)
dtype: string
- name: 身影 (shēn yǐng)
dtype: string
- name: 轮廓 (lún kuò)
dtype: string
- name: 这儿 (zhèr)
dtype: string
- name: 逐渐 (zhú jiàn)
dtype: string
- name: 都没有 (dōu méi yǒu)
dtype: string
- name: 阚闻萧 (Kàn wén xiāo)
dtype: string
- name: 隐没 (yǐn mò)
dtype: string
- name: 黑漆漆的 (hēi qī qī de)
dtype: string
- name: best_translation
dtype: string
- name: alternative_translations
sequence: string
splits:
- name: train
num_bytes: 3429
num_examples: 3
download_size: 27294
dataset_size: 3429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "translating_mplm_dataset_three"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
explintr/nli-stained | ---
dataset_info:
features:
- name: captionID
dtype: string
- name: pairID
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: is_main_shortcut
dtype: bool
- name: shortcut_id
dtype: int64
- name: shortcut
dtype: string
- name: gold_label
dtype: string
splits:
- name: train
num_bytes: 71646465
num_examples: 264000
- name: test
num_bytes: 41461112
num_examples: 144000
- name: dev
num_bytes: 41610142
num_examples: 144000
download_size: 13856950
dataset_size: 154717719
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
---
# Dataset Card for "nli-stained"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kimsiun/clinical_trial_eligibility_crietria_recommendation | ---
license: mit
---
This is a public repository of the data used in the paper "CReSE: Enhancing Clinical Trial Design via Contrastive Learning and Rephrasing-based and Clinical Relevance-preserving Sentence Embedding" (under review).
The repository stores three main types of data.
1) Positive-negative EC-title pairs: a dataset that pairs the eligibility criteria (ECs) used in a study with the study's title and other design information. It can be used to train EC recommendation models (binary classification). Several variants are available, differing in the input type of the trial information and in the number of ECs per trial.
   - For example, a file named "train_pairs_positive_inputtype_only_title.p" contains positive pair data collected using only the trial title as the input type.
   - On the other hand, the file "train_pairs_negative_Ent8_inputtype_title+CTinfo.p" contains negative pair data collected using the trial title and semi-structured key design factors as the input type, restricted to trials with eight or more ECs reported through clinicaltrials.gov.
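The naming convention in the two examples above can be captured with a small parser (a hypothetical helper; the pattern is inferred only from these two filenames and may not cover every file in the repository):

```python
import re

def parse_pair_filename(name: str) -> dict:
    """Parse data filenames such as
    'train_pairs_positive_inputtype_only_title.p' or
    'train_pairs_negative_Ent8_inputtype_title+CTinfo.p'
    into their components (pattern inferred from these examples)."""
    m = re.match(
        r"train_pairs_(?P<polarity>positive|negative)"
        r"(?:_Ent(?P<min_ec>\d+))?"
        r"_inputtype_(?P<input_type>.+)\.p$",
        name,
    )
    if m is None:
        raise ValueError(f"unrecognized filename: {name}")
    parts = m.groupdict()
    # 'EntN' encodes the minimum number of ECs per trial; absent means no filter.
    parts["min_ec"] = int(parts["min_ec"]) if parts["min_ec"] else None
    return parts
```

For instance, the second example parses to polarity `negative`, a minimum of 8 ECs, and input type `title+CTinfo`.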
2) Original-rephrased EC pairs: the original-rephrased EC pair data used to develop the CReSE model. EC rephrasing was performed using ChatGPT (gpt-3.5-turbo).
3) Clinical relevance data between EC pairs: a dataset rating the clinical relevance between different ECs, created to evaluate the EC clustering performance of the CReSE model. It was likewise generated using ChatGPT (gpt-3.5-turbo).
Please refer to our paper for more specific data generation conditions and related prompts.
|
SEACrowd/cod | ---
license: unknown
tags:
- dialogue-system
language:
- ind
---
# cod
Cross-lingual Outline-based Dialogue (COD) is a dataset comprised of manually generated, localized, and cross-lingually aligned Task-Oriented-Dialogue (TOD) data that served as the source of dialogue prompts.
COD enables natural language understanding, dialogue state tracking, and end-to-end dialogue modeling and evaluation.
COD was created by Majewska et al. (2022) using a novel outline-based annotation pipeline for multilingual TOD.
Dialogues from the English Schema-Guided Dialogue (SGD; Shah et al., 2018; Rastogi et al., 2020) dataset are automatically sampled and mapped into outlines, which are then paraphrased and adapted to the local target domain by human annotators.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{majewska2022cross,
title={Cross-lingual dialogue dataset creation via outline-based generation},
author={Majewska, Olga and Razumovskaia, Evgeniia and Ponti, Edoardo Maria and Vuli{'c}, Ivan and Korhonen, Anna},
journal={arXiv preprint arXiv:2201.13405},
year={2022}
}
```
## License
Unknown
## Homepage
[https://github.com/cambridgeltl/COD](https://github.com/cambridgeltl/COD)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
anindya-hf-2002/audio_speaker_identification | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: intent_class
dtype: int64
splits:
- name: train
num_bytes: 719744643
num_examples: 1162
- name: test
num_bytes: 178683573
num_examples: 291
download_size: 874032780
dataset_size: 898428216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Multimodal-Fatima/VQAv2_test_split_5 | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: image
dtype: image
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_wo_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_with_openai
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_B_16_with_openai
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
splits:
- name: test
num_bytes: 9194077268.0
num_examples: 44779
download_size: 1868487689
dataset_size: 9194077268.0
---
# Dataset Card for "VQAv2_test_split_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HamdanXI/paradetox-preprocess | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: en_toxic_comment
dtype: string
- name: en_neutral_comment
dtype: string
splits:
- name: train
num_bytes: 2137058
num_examples: 19744
download_size: 1217740
dataset_size: 2137058
---
# Dataset Card for "paradetox-preprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yhyu13/ToolBench_toolllama_G123_dfs | ---
license: apache-2.0
---
Datasets for the ToolBench project: https://github.com/OpenBMB/ToolBench
They come from the Google Drive archive data.zip: https://drive.google.com/drive/folders/1yBUQ732mPu-KclJnuQELEhtKakdXFc3J
The two JSON files were already processed by the original author; just plug them into the ToolBench repo's DeepSpeed arguments:
```
--data_path ./toolllama_G123_dfs_train.json \
--eval_data_path ./toolllama_G123_dfs_eval.json \
```
~~My objective is to tailor the training data down to 1/100 of its size and use it for the LLaMA-Factory project. https://github.com/hiyouga/LLaMA-Factory~~
so that more open-source models could benefit from a function-calling dataset.
## Edit
The objective was achieved by using another dataset instead: https://huggingface.co/datasets/Yhyu13/glaive-function-calling-v2-llama-factory-convert
It is smaller and better. |
AdapterOcean/data-standardized_cluster_19_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 9165871
num_examples: 8554
download_size: 4002670
dataset_size: 9165871
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data-standardized_cluster_19_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aucelio/rodrigosilva | ---
license: openrail
---
|
sanjay920/MathInstruct-sharegpt | ---
dataset_info:
features:
- name: source
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: tools
dtype: string
- name: conversations
dtype: string
splits:
- name: train
num_bytes: 505278110
num_examples: 262040
download_size: 229311564
dataset_size: 505278110
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heegyu/UltraInteract_pair_longest_multiturn | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: task
dtype: string
- name: dataset
dtype: string
- name: trajectory
list:
- name: from
dtype: string
- name: value
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: id
dtype: string
- name: parent_id
dtype: string
- name: __index_level_0__
dtype: string
- name: __index_level_1__
dtype: int64
splits:
- name: train
num_bytes: 196675986
num_examples: 26658
download_size: 78610248
dataset_size: 196675986
---
# Dataset Card for "UltraInteract_pair_longest_multiturn"
- Original Dataset: [openbmb/UltraInteract_pair](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- Filtered to multiturn instances, keeping the longest item for each reasoning tree.
Data processing code:
```python
from datasets import load_dataset, Dataset

dataset = load_dataset("openbmb/UltraInteract_pair")
df = dataset['train'].to_pandas()

# Count the turns in each trajectory and keep only multiturn instances
df["turns"] = df["trajectory"].apply(len)
df = df[df.turns > 1]

# For each reasoning tree (parent_id), keep the row(s) with the most turns
df = df.groupby("parent_id").apply(lambda x: x[x["turns"] == x["turns"].max()])
print(df)
print(df.shape)

df = df.drop(columns=["turns"])
dataset['train'] = Dataset.from_pandas(df)
dataset.push_to_hub("heegyu/UltraInteract_pair_longest_multiturn")
```
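The core selection step can be sketched on a toy frame (hypothetical data, same groupby logic as the processing code above):

```python
import pandas as pd

# Toy stand-in: each row is one preference pair; parent_id groups rows
# into reasoning trees; trajectory is the conversation so far.
toy = pd.DataFrame({
    "parent_id": ["a", "a", "a", "b", "b"],
    "trajectory": [
        [{"from": "user", "value": "q"}],                                       # 1 turn
        [{"from": "user", "value": "q"}, {"from": "assistant", "value": "r"}],  # 2 turns
        [{"from": "user", "value": "q"}] * 3,                                   # 3 turns
        [{"from": "user", "value": "q"}] * 2,                                   # 2 turns
        [{"from": "user", "value": "q"}] * 2,                                   # 2 turns (tie)
    ],
})
toy["turns"] = toy["trajectory"].apply(len)
toy = toy[toy.turns > 1]  # drop single-turn rows
longest = toy.groupby("parent_id").apply(lambda g: g[g.turns == g.turns.max()])
# Tree "a" keeps its 3-turn row; tree "b" keeps both tied 2-turn rows.
print(longest["turns"].tolist())
```

Note that ties within a tree are kept, which is why the filtered dataset can still hold more than one row per `parent_id`.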
|
wangxiaomonyg/newdataset | ---
license: unknown
---
|
CyberHarem/kahili_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kahili (Pokémon)
This is the dataset of kahili (Pokémon), containing 229 images and their tags.
The core tags of this character are `long_hair, ahoge, blue_hair, visor_cap, mole, mole_under_eye, breasts, light_blue_hair, blue_eyes, blue_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g., Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 229 | 211.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 229 | 133.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 524 | 266.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 229 | 194.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 524 | 353.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kahili_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kahili_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, blush, hetero, short_sleeves, spread_legs, 1boy, nipples, penis, sex, skirt, socks, striped_shirt, vaginal, cum_in_pussy, girl_on_top, gloves, open_mouth, shirt_lift, sweat, bar_censor, large_breasts, medium_breasts, navel, no_bra, pubic_hair, squatting, straddling, underwear |
| 1 | 25 |  |  |  |  |  | 1girl, collared_shirt, short_sleeves, striped_shirt, closed_mouth, golf_club, holding, kneehighs, shoes, solo, full_body, blue_skirt, simple_background, looking_at_viewer, buttons, white_background, miniskirt, pencil_skirt, standing, white_footwear, frown, v-shaped_eyebrows, medium_breasts, squatting, blush, hat, white_gloves |
| 2 | 5 |  |  |  |  |  | 1girl, closed_mouth, collared_shirt, golf_club, holding, short_sleeves, solo, striped_shirt, blue_skirt, white_background, white_gloves, frown, looking_at_viewer, simple_background, >:(, buttons, hand_on_hip, single_glove, standing |
| 3 | 6 |  |  |  |  |  | 1girl, collared_shirt, short_sleeves, simple_background, solo, upper_body, closed_mouth, striped_shirt, white_background, buttons, looking_at_viewer, blush, eyelashes, medium_breasts, sketch |
| 4 | 6 |  |  |  |  |  | 1girl, blush, collarbone, nipples, looking_at_viewer, pussy, day, large_breasts, outdoors, solo, sweat, censored, cloud, completely_nude, golf_club, grass, navel, open_mouth, sky, socks, squatting, very_long_hair |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | hetero | short_sleeves | spread_legs | 1boy | nipples | penis | sex | skirt | socks | striped_shirt | vaginal | cum_in_pussy | girl_on_top | gloves | open_mouth | shirt_lift | sweat | bar_censor | large_breasts | medium_breasts | navel | no_bra | pubic_hair | squatting | straddling | underwear | collared_shirt | closed_mouth | golf_club | holding | kneehighs | shoes | solo | full_body | blue_skirt | simple_background | looking_at_viewer | buttons | white_background | miniskirt | pencil_skirt | standing | white_footwear | frown | v-shaped_eyebrows | hat | white_gloves | >:( | hand_on_hip | single_glove | upper_body | eyelashes | sketch | collarbone | pussy | day | outdoors | censored | cloud | completely_nude | grass | sky | very_long_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------|:----------------|:--------------|:-------|:----------|:--------|:------|:--------|:--------|:----------------|:----------|:---------------|:--------------|:---------|:-------------|:-------------|:--------|:-------------|:----------------|:-----------------|:--------|:---------|:-------------|:------------|:-------------|:------------|:-----------------|:---------------|:------------|:----------|:------------|:--------|:-------|:------------|:-------------|:--------------------|:--------------------|:----------|:-------------------|:------------|:---------------|:-----------|:-----------------|:--------|:--------------------|:------|:---------------|:------|:--------------|:---------------|:-------------|:------------|:---------|:-------------|:--------|:------|:-----------|:-----------|:--------|:------------------|:--------|:------|:-----------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 25 |  |  |  |  |  | X | X | | X | | | | | | | | X | | | | | | | | | | X | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | | | X | | X | X | X | X | X | | | X | | X | | | X | X | X | X | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | | X | | | | | | | | X | | | | | | | | | | X | | | | | | | X | X | | | | | X | | | X | X | X | X | | | | | | | | | | | | X | X | X | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | | | | | X | | | | X | | | | | | X | | X | | X | | X | | | X | | | | | X | | | | X | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
Anzhe/trash-talk-mandarin | ---
license: apache-2.0
---
|
dongyoung4091/hh-rlhf_with_features_flan_t5_large_flan_t5_base_zeroshot | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: helpfulness_chosen
dtype: int64
- name: helpfulness_rejected
dtype: int64
- name: specificity_chosen
dtype: int64
- name: specificity_rejected
dtype: int64
- name: intent_chosen
dtype: int64
- name: intent_rejected
dtype: int64
- name: factuality_chosen
dtype: int64
- name: factuality_rejected
dtype: int64
- name: easy-to-understand_chosen
dtype: int64
- name: easy-to-understand_rejected
dtype: int64
- name: relevance_chosen
dtype: int64
- name: relevance_rejected
dtype: int64
- name: readability_chosen
dtype: int64
- name: readability_rejected
dtype: int64
- name: enough-detail_chosen
dtype: int64
- name: enough-detail_rejected
dtype: int64
- name: biased:_chosen
dtype: int64
- name: biased:_rejected
dtype: int64
- name: fail-to-consider-individual-preferences_chosen
dtype: int64
- name: fail-to-consider-individual-preferences_rejected
dtype: int64
- name: repetetive_chosen
dtype: int64
- name: repetetive_rejected
dtype: int64
- name: fail-to-consider-context_chosen
dtype: int64
- name: fail-to-consider-context_rejected
dtype: int64
- name: too-long_chosen
dtype: int64
- name: too-long_rejected
dtype: int64
- name: human
dtype: string
- name: assistant_chosen
dtype: string
- name: assistant_rejected
dtype: string
- name: log_score_chosen
dtype: float64
- name: log_score_rejected
dtype: float64
- name: labels
dtype: string
- name: zeroshot_helpfulness_chosen
dtype: float64
- name: zeroshot_helpfulness_rejected
dtype: float64
- name: zeroshot_specificity_chosen
dtype: float64
- name: zeroshot_specificity_rejected
dtype: float64
- name: zeroshot_intent_chosen
dtype: float64
- name: zeroshot_intent_rejected
dtype: float64
- name: zeroshot_factuality_chosen
dtype: float64
- name: zeroshot_factuality_rejected
dtype: float64
- name: zeroshot_easy-to-understand_chosen
dtype: float64
- name: zeroshot_easy-to-understand_rejected
dtype: float64
- name: zeroshot_relevance_chosen
dtype: float64
- name: zeroshot_relevance_rejected
dtype: float64
- name: zeroshot_readability_chosen
dtype: float64
- name: zeroshot_readability_rejected
dtype: float64
- name: zeroshot_enough-detail_chosen
dtype: float64
- name: zeroshot_enough-detail_rejected
dtype: float64
- name: zeroshot_biased:_chosen
dtype: float64
- name: zeroshot_biased:_rejected
dtype: float64
- name: zeroshot_fail-to-consider-individual-preferences_chosen
dtype: float64
- name: zeroshot_fail-to-consider-individual-preferences_rejected
dtype: float64
- name: zeroshot_repetetive_chosen
dtype: float64
- name: zeroshot_repetetive_rejected
dtype: float64
- name: zeroshot_fail-to-consider-context_chosen
dtype: float64
- name: zeroshot_fail-to-consider-context_rejected
dtype: float64
- name: zeroshot_too-long_chosen
dtype: float64
- name: zeroshot_too-long_rejected
dtype: float64
splits:
- name: train
num_bytes: 16425816
num_examples: 9574
- name: test
num_bytes: 16369741
num_examples: 9574
download_size: 0
dataset_size: 32795557
---
# Dataset Card for "hh-rlhf_with_features_flan_t5_large_flan_t5_base_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
abdullbbr/myfirstmodel | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245924
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wojemann/tars_boolq2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5903821
num_examples: 9427
- name: validation
num_bytes: 2023933
num_examples: 3270
download_size: 4944897
dataset_size: 7927754
---
# Dataset Card for "tars_boolq2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davizca87/C01nWorkflow | ---
license: other
---
|
CVasNLPExperiments/DTD_parition1_test_google_flan_t5_xl_mode_T_SPECIFIC_ns_1880 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 224447
num_examples: 1880
download_size: 19238
dataset_size: 224447
---
# Dataset Card for "DTD_parition1_test_google_flan_t5_xl_mode_T_SPECIFIC_ns_1880"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TomTBT/pmc_open_access_figure_comm | ---
license: cc-by-sa-4.0
---
|
Tristan/cc_olm_no_dedup | ---
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
- name: crawl_timestamp
dtype: float64
splits:
- name: train
num_bytes: 249659214
num_examples: 46032
download_size: 148670687
dataset_size: 249659214
---
# Dataset Card for "cc_olm_no_dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dilgam/exploit | ---
license: openrail
language:
- en
--- |
CyberHarem/yamato_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of yamato (Pokémon)
This is the dataset of yamato (Pokémon), containing 58 images and their tags.
The core tags of this character are `long_hair, breasts, blonde_hair, earrings, purple_eyes, large_breasts, twintails`, which are pruned in this dataset.
Images are crawled from many sites (e.g., Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 58 | 45.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 58 | 27.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 103 | 49.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 58 | 41.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 103 | 69.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yamato_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, solo, medium_breasts, nipples, nude, jewelry, female_pubic_hair, lipstick, navel, pussy, orange_hair |
| 1 | 8 |  |  |  |  |  | 1girl, hetero, jewelry, penis, pussy, 1boy, solo_focus, uncensored, nipples, sex, blush, anal, cum, completely_nude, elbow_gloves, navel |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | medium_breasts | nipples | nude | jewelry | female_pubic_hair | lipstick | navel | pussy | orange_hair | hetero | penis | 1boy | solo_focus | uncensored | sex | blush | anal | cum | completely_nude | elbow_gloves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:----------|:-------|:----------|:--------------------|:-----------|:--------|:--------|:--------------|:---------|:--------|:-------|:-------------|:-------------|:------|:--------|:-------|:------|:------------------|:---------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | | X | | X | | | X | X | | X | X | X | X | X | X | X | X | X | X | X |
|
claritylab/utcd | ---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1M<n<10M
annotations_creators:
- no-annotation
multilinguality:
- monolingual
pretty_name: UTCD
dataset_info:
- config_name: in-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add Alarm
'1': Album
'2': Animal
'3': Artist
'4': Athlete
'5': Book Appointment
'6': Book House
'7': Building
'8': Business
'9': Business & Finance
'10': Buy Bus Ticket
'11': Buy Event Tickets
'12': Buy Movie Tickets
'13': Check Balance
'14': Company
'15': Computers & Internet
'16': Education & Reference
'17': Educational Institution
'18': Entertainment & Music
'19': Family & Relationships
'20': Film
'21': Find Apartment
'22': Find Attractions
'23': Find Bus
'24': Find Events
'25': Find Home By Area
'26': Find Movies
'27': Find Provider
'28': Find Restaurants
'29': Find Trains
'30': Get Alarms
'31': Get Available Time
'32': Get Cars Available
'33': Get Event Dates
'34': Get Events
'35': Get Ride
'36': Get Times For Movie
'37': Get Weather
'38': Health
'39': Lookup Music
'40': Lookup Song
'41': Make Payment
'42': Mean Of Transportation
'43': Natural Place
'44': Office Holder
'45': Plant
'46': Play Media
'47': Play Movie
'48': Play Song
'49': Politics & Government
'50': Request Payment
'51': Reserve Car
'52': Reserve Hotel
'53': Reserve One way Flight
'54': Reserve Restaurant
'55': Reserve Round trip Flights
'56': Schedule Visit
'57': Science & Mathematics
'58': Science & Technology
'59': Search Hotel
'60': Search House
'61': Search One way Flight
'62': Search Round trip Flights
'63': Society & Culture
'64': Sports
'65': Transfer Money
'66': Village
'67': World News
'68': Written Work
'69': accept reservations
'70': account blocked
'71': add contact
'72': admiration
'73': alarm
'74': alarm query
'75': alarm remove
'76': alarm set
'77': amusement
'78': anger
'79': annoyance
'80': application status
'81': approval
'82': apr
'83': are you a bot
'84': audio volume down
'85': audio volume mute
'86': audio volume other
'87': audio volume up
'88': balance
'89': bill balance
'90': bill due
'91': book flight
'92': book hotel
'93': calculator
'94': calendar
'95': calendar query
'96': calendar remove
'97': calendar set
'98': calendar update
'99': calories
'100': cancel
'101': cancel reservation
'102': car rental
'103': card declined
'104': caring
'105': carry on
'106': change accent
'107': change ai name
'108': change language
'109': change speed
'110': change user name
'111': change volume
'112': cleaning
'113': coffee
'114': confirm reservation
'115': confusion
'116': convert
'117': cook time
'118': cooking query
'119': cooking recipe
'120': create or add
'121': credit limit
'122': credit limit change
'123': credit score
'124': curiosity
'125': currency
'126': current location
'127': damaged card
'128': date
'129': date time convert
'130': date time query
'131': definition
'132': desire
'133': direct deposit
'134': directions
'135': disappointment
'136': disapproval
'137': disgust
'138': distance
'139': do you have pets
'140': email add contact
'141': email query
'142': email query contact
'143': email send email
'144': embarrassment
'145': events
'146': exchange rate
'147': excitement
'148': expiration date
'149': factoid
'150': fear
'151': find phone
'152': flight status
'153': flip coin
'154': food last
'155': freeze account
'156': fun fact
'157': game
'158': gas
'159': gas type
'160': general greet
'161': general joke
'162': general quirky
'163': goodbye
'164': gratitude
'165': greet
'166': greeting
'167': grief
'168': how busy
'169': how old are you
'170': hue light dim
'171': hue light off
'172': hue light up
'173': improve credit score
'174': income
'175': ingredient substitution
'176': ingredients list
'177': insurance
'178': insurance change
'179': interest rate
'180': international fees
'181': international visa
'182': iot cleaning
'183': iot coffee
'184': iot hue light change
'185': iot hue light dim
'186': iot hue light off
'187': iot hue light on
'188': iot hue light up
'189': iot wemo on
'190': iot wemo plug off
'191': joke
'192': joy
'193': jump start
'194': last maintenance
'195': lists create or add
'196': lists query
'197': lists remove
'198': lost luggage
'199': love
'200': make call
'201': maybe
'202': meal suggestion
'203': meaning of life
'204': measurement conversion
'205': meeting schedule
'206': min payment
'207': mpg
'208': music
'209': music dislike ness
'210': music likeness
'211': music query
'212': music settings
'213': negative
'214': nervousness
'215': neutral
'216': new card
'217': news query
'218': next holiday
'219': next song
'220': 'no'
'221': nutrition info
'222': oil change how
'223': oil change when
'224': optimism
'225': order
'226': order checks
'227': order status
'228': paid time off request status
'229': paid time off used
'230': pay bill
'231': payday
'232': pin change
'233': play audiobook
'234': play game
'235': play music
'236': play podcasts
'237': play radio
'238': plug type
'239': podcasts
'240': positive
'241': post
'242': pride
'243': pto balance
'244': pto request
'245': qa currency
'246': qa definition
'247': qa factoid
'248': qa maths
'249': qa stock
'250': query
'251': query contact
'252': quirky
'253': radio
'254': realization
'255': recipe
'256': recommendation events
'257': recommendation locations
'258': recommendation movies
'259': redeem rewards
'260': relief
'261': reminder
'262': reminder update
'263': remorse
'264': remove
'265': repeat
'266': replacement card duration
'267': report fraud
'268': report lost card
'269': reset settings
'270': restaurant reservation
'271': restaurant reviews
'272': restaurant suggestion
'273': rewards balance
'274': roll dice
'275': rollover 401k
'276': routing
'277': sadness
'278': schedule maintenance
'279': schedule meeting
'280': send email
'281': set
'282': settings
'283': share location
'284': shopping list
'285': shopping list update
'286': smart home
'287': social post
'288': social query
'289': spelling
'290': spending history
'291': surprise
'292': sync device
'293': take away order
'294': take away query
'295': taxes
'296': tell joke
'297': text
'298': thank you
'299': ticket
'300': time
'301': timer
'302': timezone
'303': tire change
'304': tire pressure
'305': todo list
'306': todo list update
'307': traffic
'308': transactions
'309': transfer
'310': translate
'311': transport query
'312': transport taxi
'313': transport ticket
'314': transport traffic
'315': travel alert
'316': travel notification
'317': travel suggestion
'318': uber
'319': update playlist
'320': user name
'321': vaccines
'322': volume other
'323': w2 wage and tax statement
'324': weather
'325': weather query
'326': wemo off
'327': wemo plug on
'328': what are your hobbies
'329': what can i ask you
'330': what is your name
'331': what song
'332': where are you from
'333': whisper mode
'334': who do you work for
'335': who made you
'336': 'yes'
- name: dataset_name
dtype:
class_label:
names:
'0': go_emotion
'1': sentiment_tweets_2020
'2': emotion
'3': sgd
'4': clinc_150
'5': slurp
'6': ag_news
'7': dbpedia
'8': yahoo
splits:
- name: train
num_bytes: 347382307
num_examples: 2192703
- name: test
num_bytes: 36063588
num_examples: 168365
download_size: 1744258165
dataset_size: 383445895
- config_name: aspect-normalized-in-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add Alarm
'1': Album
'2': Animal
'3': Artist
'4': Athlete
'5': Book Appointment
'6': Book House
'7': Building
'8': Business
'9': Business & Finance
'10': Buy Bus Ticket
'11': Buy Event Tickets
'12': Buy Movie Tickets
'13': Check Balance
'14': Company
'15': Computers & Internet
'16': Education & Reference
'17': Educational Institution
'18': Entertainment & Music
'19': Family & Relationships
'20': Film
'21': Find Apartment
'22': Find Attractions
'23': Find Bus
'24': Find Events
'25': Find Home By Area
'26': Find Movies
'27': Find Provider
'28': Find Restaurants
'29': Find Trains
'30': Get Alarms
'31': Get Available Time
'32': Get Cars Available
'33': Get Event Dates
'34': Get Events
'35': Get Ride
'36': Get Times For Movie
'37': Get Weather
'38': Health
'39': Lookup Music
'40': Lookup Song
'41': Make Payment
'42': Mean Of Transportation
'43': Natural Place
'44': Office Holder
'45': Plant
'46': Play Media
'47': Play Movie
'48': Play Song
'49': Politics & Government
'50': Request Payment
'51': Reserve Car
'52': Reserve Hotel
'53': Reserve One way Flight
'54': Reserve Restaurant
'55': Reserve Round trip Flights
'56': Schedule Visit
'57': Science & Mathematics
'58': Science & Technology
'59': Search Hotel
'60': Search House
'61': Search One way Flight
'62': Search Round trip Flights
'63': Society & Culture
'64': Sports
'65': Transfer Money
'66': Village
'67': World News
'68': Written Work
'69': accept reservations
'70': account blocked
'71': add contact
'72': admiration
'73': alarm
'74': alarm query
'75': alarm remove
'76': alarm set
'77': amusement
'78': anger
'79': annoyance
'80': application status
'81': approval
'82': apr
'83': are you a bot
'84': audio volume down
'85': audio volume mute
'86': audio volume other
'87': audio volume up
'88': balance
'89': bill balance
'90': bill due
'91': book flight
'92': book hotel
'93': calculator
'94': calendar
'95': calendar query
'96': calendar remove
'97': calendar set
'98': calendar update
'99': calories
'100': cancel
'101': cancel reservation
'102': car rental
'103': card declined
'104': caring
'105': carry on
'106': change accent
'107': change ai name
'108': change language
'109': change speed
'110': change user name
'111': change volume
'112': cleaning
'113': coffee
'114': confirm reservation
'115': confusion
'116': convert
'117': cook time
'118': cooking query
'119': cooking recipe
'120': create or add
'121': credit limit
'122': credit limit change
'123': credit score
'124': curiosity
'125': currency
'126': current location
'127': damaged card
'128': date
'129': date time convert
'130': date time query
'131': definition
'132': desire
'133': direct deposit
'134': directions
'135': disappointment
'136': disapproval
'137': disgust
'138': distance
'139': do you have pets
'140': email add contact
'141': email query
'142': email query contact
'143': email send email
'144': embarrassment
'145': events
'146': exchange rate
'147': excitement
'148': expiration date
'149': factoid
'150': fear
'151': find phone
'152': flight status
'153': flip coin
'154': food last
'155': freeze account
'156': fun fact
'157': game
'158': gas
'159': gas type
'160': general greet
'161': general joke
'162': general quirky
'163': goodbye
'164': gratitude
'165': greet
'166': greeting
'167': grief
'168': how busy
'169': how old are you
'170': hue light dim
'171': hue light off
'172': hue light up
'173': improve credit score
'174': income
'175': ingredient substitution
'176': ingredients list
'177': insurance
'178': insurance change
'179': interest rate
'180': international fees
'181': international visa
'182': iot cleaning
'183': iot coffee
'184': iot hue light change
'185': iot hue light dim
'186': iot hue light off
'187': iot hue light on
'188': iot hue light up
'189': iot wemo on
'190': iot wemo plug off
'191': joke
'192': joy
'193': jump start
'194': last maintenance
'195': lists create or add
'196': lists query
'197': lists remove
'198': lost luggage
'199': love
'200': make call
'201': maybe
'202': meal suggestion
'203': meaning of life
'204': measurement conversion
'205': meeting schedule
'206': min payment
'207': mpg
'208': music
'209': music dislike ness
'210': music likeness
'211': music query
'212': music settings
'213': negative
'214': nervousness
'215': neutral
'216': new card
'217': news query
'218': next holiday
'219': next song
'220': 'no'
'221': nutrition info
'222': oil change how
'223': oil change when
'224': optimism
'225': order
'226': order checks
'227': order status
'228': paid time off request status
'229': paid time off used
'230': pay bill
'231': payday
'232': pin change
'233': play audiobook
'234': play game
'235': play music
'236': play podcasts
'237': play radio
'238': plug type
'239': podcasts
'240': positive
'241': post
'242': pride
'243': pto balance
'244': pto request
'245': qa currency
'246': qa definition
'247': qa factoid
'248': qa maths
'249': qa stock
'250': query
'251': query contact
'252': quirky
'253': radio
'254': realization
'255': recipe
'256': recommendation events
'257': recommendation locations
'258': recommendation movies
'259': redeem rewards
'260': relief
'261': reminder
'262': reminder update
'263': remorse
'264': remove
'265': repeat
'266': replacement card duration
'267': report fraud
'268': report lost card
'269': reset settings
'270': restaurant reservation
'271': restaurant reviews
'272': restaurant suggestion
'273': rewards balance
'274': roll dice
'275': rollover 401k
'276': routing
'277': sadness
'278': schedule maintenance
'279': schedule meeting
'280': send email
'281': set
'282': settings
'283': share location
'284': shopping list
'285': shopping list update
'286': smart home
'287': social post
'288': social query
'289': spelling
'290': spending history
'291': surprise
'292': sync device
'293': take away order
'294': take away query
'295': taxes
'296': tell joke
'297': text
'298': thank you
'299': ticket
'300': time
'301': timer
'302': timezone
'303': tire change
'304': tire pressure
'305': todo list
'306': todo list update
'307': traffic
'308': transactions
'309': transfer
'310': translate
'311': transport query
'312': transport taxi
'313': transport ticket
'314': transport traffic
'315': travel alert
'316': travel notification
'317': travel suggestion
'318': uber
'319': update playlist
'320': user name
'321': vaccines
'322': volume other
'323': w2 wage and tax statement
'324': weather
'325': weather query
'326': wemo off
'327': wemo plug on
'328': what are your hobbies
'329': what can i ask you
'330': what is your name
'331': what song
'332': where are you from
'333': whisper mode
'334': who do you work for
'335': who made you
'336': 'yes'
- name: dataset_name
dtype:
class_label:
names:
'0': go_emotion
'1': sentiment_tweets_2020
'2': emotion
'3': sgd
'4': clinc_150
'5': slurp
'6': ag_news
'7': dbpedia
'8': yahoo
splits:
- name: train
num_bytes: 28974188
num_examples: 115127
- name: validation
num_bytes: 3213586
num_examples: 12806
- name: test
num_bytes: 36063590
num_examples: 168365
download_size: 1744258165
dataset_size: 68251364
- config_name: out-of-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add To Playlist
'1': Bank account or service
'2': Book Restaurant
'3': Checking or savings account
'4': Chemistry; Metallurgy
'5': Consumer Loan
'6': Credit card
'7': Credit card or prepaid card
'8': Credit reporting
'9': Credit reporting, credit repair services, or other personal consumer
reports
'10': Debt collection
'11': EUROPEAN UNION
'12': Electricity
'13': Fixed Constructions
'14': General tagging of new or cross-sectional technology
'15': Get Weather
'16': Human Necessities
'17': Mechanical Engineering; Lightning; Heating; Weapons; Blasting
'18': Money transfer, virtual currency, or money service
'19': Money transfers
'20': Mortgage
'21': Other financial service
'22': Payday loan
'23': Payday loan, title loan, or personal loan
'24': Performing Operations; Transporting
'25': Physics
'26': Play Music
'27': Prepaid card
'28': Rate Book
'29': Refund not showing up
'30': Search Creative Work
'31': Search Screening Event
'32': Student loan
'33': Textiles; Paper
'34': Vehicle loan or lease
'35': Virtual currency
'36': activate my card
'37': age limit
'38': agri-foodstuffs
'39': agriculture, forestry and fisheries
'40': alarm query
'41': alarm remove
'42': alarm set
'43': apple pay or google pay
'44': atm support
'45': audio volume down
'46': audio volume mute
'47': audio volume other
'48': audio volume up
'49': automatic top up
'50': balance not updated after bank transfer
'51': balance not updated after cheque or cash deposit
'52': beneficiary not allowed
'53': business and competition
'54': calendar query
'55': calendar remove
'56': calendar set
'57': cancel transfer
'58': card about to expire
'59': card acceptance
'60': card arrival
'61': card delivery estimate
'62': card linking
'63': card not working
'64': card payment fee charged
'65': card payment not recognised
'66': card payment wrong exchange rate
'67': card swallowed
'68': cash withdrawal charge
'69': cash withdrawal not recognised
'70': change pin
'71': compromised card
'72': contactless not working
'73': cooking query
'74': cooking recipe
'75': country support
'76': datetime convert
'77': datetime query
'78': declined card payment
'79': declined cash withdrawal
'80': declined transfer
'81': direct debit payment not recognised
'82': disposable card limits
'83': economics
'84': edit personal details
'85': education and communications
'86': email addcontact
'87': email query
'88': email querycontact
'89': email sendemail
'90': employment and working conditions
'91': energy
'92': environment
'93': exchange charge
'94': exchange rate
'95': exchange via app
'96': extra charge on statement
'97': failed transfer
'98': fiat currency support
'99': finance
'100': general affirm
'101': general commandstop
'102': general confirm
'103': general dontcare
'104': general explain
'105': general greet
'106': general joke
'107': general negate
'108': general praise
'109': general quirky
'110': general repeat
'111': geography
'112': get disposable virtual card
'113': get physical card
'114': getting spare card
'115': getting virtual card
'116': industry
'117': international organisations
'118': international relations
'119': iot cleaning
'120': iot coffee
'121': iot hue lightchange
'122': iot hue lightdim
'123': iot hue lightoff
'124': iot hue lighton
'125': iot hue lightup
'126': iot wemo off
'127': iot wemo on
'128': law
'129': lists createoradd
'130': lists query
'131': lists remove
'132': lost or stolen card
'133': lost or stolen phone
'134': music dislikeness
'135': music likeness
'136': music query
'137': music settings
'138': negative
'139': neutral
'140': news query
'141': order physical card
'142': passcode forgotten
'143': pending card payment
'144': pending cash withdrawal
'145': pending top up
'146': pending transfer
'147': pin blocked
'148': play audiobook
'149': play game
'150': play music
'151': play podcasts
'152': play radio
'153': politics
'154': positive
'155': production, technology and research
'156': qa currency
'157': qa definition
'158': qa factoid
'159': qa maths
'160': qa stock
'161': receiving money
'162': recommendation events
'163': recommendation locations
'164': recommendation movies
'165': request refund
'166': reverted card payment?
'167': science
'168': social post
'169': social query
'170': social questions
'171': supported cards and currencies
'172': takeaway order
'173': takeaway query
'174': terminate account
'175': top up by bank transfer charge
'176': top up by card charge
'177': top up by cash or cheque
'178': top up failed
'179': top up limits
'180': top up reverted
'181': topping up by card
'182': trade
'183': transaction charged twice
'184': transfer fee charged
'185': transfer into account
'186': transfer not received by recipient
'187': transfer timing
'188': transport
'189': transport query
'190': transport taxi
'191': transport ticket
'192': transport traffic
'193': unable to verify identity
'194': verify my identity
'195': verify source of funds
'196': verify top up
'197': virtual card not working
'198': visa or mastercard
'199': weather query
'200': why verify identity
'201': wrong amount of cash received
'202': wrong exchange rate for cash withdrawal
- name: dataset_name
dtype:
class_label:
names:
'0': amazon_polarity
'1': finance_sentiment
'2': yelp
'3': banking77
'4': snips
'5': nlu_evaluation
'6': multi_eurlex
'7': patent
'8': consumer_finance
splits:
- name: train
num_bytes: 3608196895
num_examples: 4996673
- name: test
num_bytes: 541174753
num_examples: 625911
download_size: 1744258165
dataset_size: 4149371648
- config_name: aspect-normalized-out-of-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add To Playlist
'1': Bank account or service
'2': Book Restaurant
'3': Checking or savings account
'4': Chemistry; Metallurgy
'5': Consumer Loan
'6': Credit card
'7': Credit card or prepaid card
'8': Credit reporting
'9': Credit reporting, credit repair services, or other personal consumer
reports
'10': Debt collection
'11': EUROPEAN UNION
'12': Electricity
'13': Fixed Constructions
'14': General tagging of new or cross-sectional technology
'15': Get Weather
'16': Human Necessities
'17': Mechanical Engineering; Lightning; Heating; Weapons; Blasting
'18': Money transfer, virtual currency, or money service
'19': Money transfers
'20': Mortgage
'21': Other financial service
'22': Payday loan
'23': Payday loan, title loan, or personal loan
'24': Performing Operations; Transporting
'25': Physics
'26': Play Music
'27': Prepaid card
'28': Rate Book
'29': Refund not showing up
'30': Search Creative Work
'31': Search Screening Event
'32': Student loan
'33': Textiles; Paper
'34': Vehicle loan or lease
'35': Virtual currency
'36': activate my card
'37': age limit
'38': agri-foodstuffs
'39': agriculture, forestry and fisheries
'40': alarm query
'41': alarm remove
'42': alarm set
'43': apple pay or google pay
'44': atm support
'45': audio volume down
'46': audio volume mute
'47': audio volume other
'48': audio volume up
'49': automatic top up
'50': balance not updated after bank transfer
'51': balance not updated after cheque or cash deposit
'52': beneficiary not allowed
'53': business and competition
'54': calendar query
'55': calendar remove
'56': calendar set
'57': cancel transfer
'58': card about to expire
'59': card acceptance
'60': card arrival
'61': card delivery estimate
'62': card linking
'63': card not working
'64': card payment fee charged
'65': card payment not recognised
'66': card payment wrong exchange rate
'67': card swallowed
'68': cash withdrawal charge
'69': cash withdrawal not recognised
'70': change pin
'71': compromised card
'72': contactless not working
'73': cooking query
'74': cooking recipe
'75': country support
'76': datetime convert
'77': datetime query
'78': declined card payment
'79': declined cash withdrawal
'80': declined transfer
'81': direct debit payment not recognised
'82': disposable card limits
'83': economics
'84': edit personal details
'85': education and communications
'86': email addcontact
'87': email query
'88': email querycontact
'89': email sendemail
'90': employment and working conditions
'91': energy
'92': environment
'93': exchange charge
'94': exchange rate
'95': exchange via app
'96': extra charge on statement
'97': failed transfer
'98': fiat currency support
'99': finance
'100': general affirm
'101': general commandstop
'102': general confirm
'103': general dontcare
'104': general explain
'105': general greet
'106': general joke
'107': general negate
'108': general praise
'109': general quirky
'110': general repeat
'111': geography
'112': get disposable virtual card
'113': get physical card
'114': getting spare card
'115': getting virtual card
'116': industry
'117': international organisations
'118': international relations
'119': iot cleaning
'120': iot coffee
'121': iot hue lightchange
'122': iot hue lightdim
'123': iot hue lightoff
'124': iot hue lighton
'125': iot hue lightup
'126': iot wemo off
'127': iot wemo on
'128': law
'129': lists createoradd
'130': lists query
'131': lists remove
'132': lost or stolen card
'133': lost or stolen phone
'134': music dislikeness
'135': music likeness
'136': music query
'137': music settings
'138': negative
'139': neutral
'140': news query
'141': order physical card
'142': passcode forgotten
'143': pending card payment
'144': pending cash withdrawal
'145': pending top up
'146': pending transfer
'147': pin blocked
'148': play audiobook
'149': play game
'150': play music
'151': play podcasts
'152': play radio
'153': politics
'154': positive
'155': production, technology and research
'156': qa currency
'157': qa definition
'158': qa factoid
'159': qa maths
'160': qa stock
'161': receiving money
'162': recommendation events
'163': recommendation locations
'164': recommendation movies
'165': request refund
'166': reverted card payment?
'167': science
'168': social post
'169': social query
'170': social questions
'171': supported cards and currencies
'172': takeaway order
'173': takeaway query
'174': terminate account
'175': top up by bank transfer charge
'176': top up by card charge
'177': top up by cash or cheque
'178': top up failed
'179': top up limits
'180': top up reverted
'181': topping up by card
'182': trade
'183': transaction charged twice
'184': transfer fee charged
'185': transfer into account
'186': transfer not received by recipient
'187': transfer timing
'188': transport
'189': transport query
'190': transport taxi
'191': transport ticket
'192': transport traffic
'193': unable to verify identity
'194': verify my identity
'195': verify source of funds
'196': verify top up
'197': virtual card not working
'198': visa or mastercard
'199': weather query
'200': why verify identity
'201': wrong amount of cash received
'202': wrong exchange rate for cash withdrawal
- name: dataset_name
dtype:
class_label:
names:
'0': amazon_polarity
'1': finance_sentiment
'2': yelp
'3': banking77
'4': snips
'5': nlu_evaluation
'6': multi_eurlex
'7': patent
'8': consumer_finance
splits:
- name: train
num_bytes: 109566474
num_examples: 119167
- name: validation
num_bytes: 12432497
num_examples: 13263
- name: test
num_bytes: 541174753
num_examples: 625911
download_size: 1744258165
dataset_size: 663173724
---
# Universal Text Classification Dataset (UTCD)
## Load dataset
```python
from datasets import load_dataset
dataset = load_dataset('claritylab/utcd', name='in-domain')
```
## Description
UTCD is a curated compilation of 18 datasets revised for zero-shot text classification, spanning 3 aspect categories: Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~6M/800K train/test examples.
UTCD was introduced in the Findings of ACL'23 Paper **Label Agnostic Pre-training for Zero-shot Text Classification** by ***Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars***. [Project Homepage](https://github.com/ChrisIsKing/zero-shot-text-classification/tree/master).
UTCD Datasets & Principles:
In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain & aspect transfer. As such, in the construction of UTCD we enforce the following principles:
- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present in the datasets across UTCD enable the development of techniques that can leverage the class name, which is instrumental in providing zero-shot support. As such, for each of the compiled datasets, labels are standardized to be descriptive of the text in natural language.
- **Diverse domains and sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, and Legal, each comprising sequences of varied length (long and short). The datasets are listed below.
- Sentiment
- GoEmotions introduced in [GoEmotions: A Dataset of Fine-Grained Emotions](https://arxiv.org/pdf/2005.00547v2.pdf)
- TweetEval introduced in [TWEETEVAL: Unified Benchmark and Comparative Evaluation for Tweet Classification](https://arxiv.org/pdf/2010.12421v2.pdf) (Sentiment subset)
- Emotion introduced in [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404.pdf)
- Amazon Polarity introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Finance Phrasebank introduced in [Good debt or bad debt: Detecting semantic orientations in economic texts](https://arxiv.org/pdf/1307.5336.pdf)
- Yelp introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Intent/Dialogue
- Schema-Guided Dialogue introduced in [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/pdf/1909.05855v2.pdf)
- Clinc-150 introduced in [An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction](https://arxiv.org/pdf/1909.02027v1.pdf)
- SLURP SLU introduced in [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/pdf/2011.13205.pdf)
    - Banking77 introduced in [Efficient Intent Detection with Dual Sentence Encoders](https://arxiv.org/pdf/2003.04807.pdf)
- Snips introduced in [Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces](https://arxiv.org/pdf/1805.10190.pdf)
- NLU Evaluation introduced in [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/pdf/1903.05566.pdf)
- Topic
- AG News introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- DBpedia 14 introduced in [DBpedia: A Nucleus for a Web of Open Data](https://link.springer.com/chapter/10.1007/978-3-540-76298-0_52)
- Yahoo Answer Topics introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- MultiEurlex introduced in [MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer](https://aclanthology.org/2021.emnlp-main.559v2.pdf)
- BigPatent introduced in [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://aclanthology.org/P19-1212.pdf)
- Consumer Finance introduced in [Consumer Complaint Database](https://www.consumerfinance.gov/data-research/consumer-complaints/)
## Structure
### Data Samples
Each dataset sample contains the text, the label encoded as an integer, and the dataset name encoded as an integer.
```python
{
'text': "My favourite food is anything I didn't have to cook myself.",
'labels': [215],
'dataset_name': 0
}
```
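The integer-coded fields decode through the `class_label` name lists in the YAML header above. With the `datasets` library, `ds['train'].features['labels'].feature.int2str(215)` performs this lookup; the plain-Python sketch below reproduces it for the sample above (label `215` maps to `neutral` in the in-domain mapping, and `dataset_name` `0` to `go_emotion`):

```python
# Decode the integer-coded fields using the class_label name lists
# from the YAML header (the dataset_name list is copied verbatim).
dataset_names = ['go_emotion', 'sentiment_tweets_2020', 'emotion', 'sgd',
                 'clinc_150', 'slurp', 'ag_news', 'dbpedia', 'yahoo']

sample = {
    'text': "My favourite food is anything I didn't have to cook myself.",
    'labels': [215],   # 215 -> 'neutral' in the in-domain label mapping
    'dataset_name': 0,
}

origin = dataset_names[sample['dataset_name']]
print(origin)  # go_emotion
```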
### Datasets Contained
The UTCD dataset contains 18 datasets, 9 `in-domain` and 9 `out-of-domain`, spanning 3 aspects: `sentiment`, `intent` and `topic`.
Below are statistics on the datasets.
**In-Domain Datasets**
| Dataset | Aspect | #Samples in Train/Test | #labels | average #token in text in Train/Test |
| ---------- | --------- | ---------------------- | ------- | ------------------------------------ |
| GoEmotions | sentiment | 43K/5.4K | 28 | 12/12 |
| TweetEval | sentiment | 45K/12K | 3 | 19/14 |
| Emotion | sentiment | 16K/2K | 6 | 17/17 |
| SGD | intent | 16K/4.2K | 26 | 8/9 |
| Clinc-150 | intent | 15K/4.5K | 150 | 8/8 |
| SLURP | intent | 12K/2.6K | 75 | 7/7 |
| AG News | topic | 120K/7.6K | 4 | 38/37 |
| DBpedia | topic | 560K/70K | 14 | 45/45 |
| Yahoo | topic | 1.4M/60K | 10 | 10/10 |
**Out-of-Domain Datasets**
| Dataset | Aspect | #Samples in Train/Test | #labels | average #token in text |
| --------------------- | --------- | ---------------------- | ------- | ---------------------- |
| Amazon Polarity | sentiment | 3.6M/400K | 2 | 71/71 |
| Financial Phrase Bank | sentiment | 1.8K/453 | 3 | 19/19 |
| Yelp | sentiment | 650K/50K | 3 | 128/128 |
| Banking77 | intent | 10K/3.1K | 77 | 11/10 |
| SNIPS | intent | 14K/697 | 7 | 8/8 |
| NLU Eval | intent | 21K/5.2K | 68 | 7/7 |
| MultiEURLEX | topic | 55K/5K | 21 | 1198/1853 |
| Big Patent | topic | 25K/5K | 9 | 2872/2892 |
| Consumer Finance | topic | 630K/160K | 18 | 190/189 |
### Configurations
The `in-domain` and `out-of-domain` configurations have 2 splits: `train` and `test`.
The aspect-normalized configurations (`aspect-normalized-in-domain`, `aspect-normalized-out-of-domain`) have 3 splits: `train`, `validation` and `test`.
Below are statistics on the configuration splits.
**In-Domain Configuration**
| Split | #samples |
| ----- | --------- |
| Train | 2,192,703 |
| Test | 168,365 |
**Out-of-Domain Configuration**
| Split | #samples |
| ----- | --------- |
| Train | 4,996,673 |
| Test | 625,911 |
**Aspect-Normalized In-Domain Configuration**
| Split | #samples |
| ---------- | -------- |
| Train | 115,127 |
| Validation | 12,806 |
| Test | 168,365 |
**Aspect-Normalized Out-of-Domain Configuration**
| Split | #samples |
| ---------- | -------- |
| Train | 119,167 |
| Validation | 13,263 |
| Test | 625,911 |
|
vastream/dm | ---
license: apache-2.0
---
|
AIML-TUDA/laion-occupation | ---
license: cc-by-sa-4.0
pretty_name: LAION Occupation
---
# LAION Occupation
This dataset is a subset of [LAION-2B-en](https://laion.ai/blog/laion-5b/) containing 1.8M samples, each assigned to one of 153 occupations. This dataset was curated as part of our investigation into gender-occupation biases in LAION presented in [Fair Diffusion](https://arxiv.org/abs/2302.10893).
For downloading the images, check out [img2dataset](https://github.com/rom1504/img2dataset).
## Data Collection
We identified relevant images in the dataset by computing their CLIP similarity to a textual description of the target occupation. All descriptions were of the form "an image of the face of a \<occupation\>". We then included all images scoring above an empirically determined threshold.
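The retrieval step above can be sketched as follows: build the prompt for an occupation and keep images whose image/text cosine similarity exceeds a cutoff. The embedding vectors and the threshold value below are placeholders for illustration only, not actual CLIP embeddings or the threshold used for this dataset:

```python
import math

def retrieval_prompt(occupation: str) -> str:
    # Prompt template described above.
    return f"an image of the face of a {occupation}"

def cosine_similarity(a, b):
    # Cosine similarity between two 2-d vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Placeholder vectors standing in for CLIP image/text embeddings.
image_emb = (0.6, 0.8)
text_emb = (0.8, 0.6)

sim = cosine_similarity(image_emb, text_emb)
keep = sim >= 0.3  # illustrative cutoff; the real one was tuned empirically
```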
## Probability of faces
The dataset also contains annotations for the probability of a human face being depicted. Scores were calculated using the MTCNN Face Detector of [FaceNet](https://github.com/timesler/facenet-pytorch). Empirically, scores above ca. 0.97 can be reasonably assumed to include recognizable faces.
## Dataset Format
The dataset consists of the following fields:
| Field | Explanation |
| ----------- | ----------- |
| URL | URL of the image. |
| TEXT | Text caption of the image. |
| occupation | Identified occupation. |
| pface | Probability of a face being contained in the image, as per FaceNet. Will be `NaN` if the image could not be retrieved. |
| url_active | Whether or not we were able to retrieve the image from the corresponding URL. |
| retr_sim | Cosine similarity between CLIP embeddings of image and retrieval prompt. |
| laion_index | Index of the sample in the original LAION-2B-en. |
| hash | Usual LAION hash of URL and caption. | |
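For example, to restrict the dataset to rows that were still retrievable and very likely contain a recognizable face (using the ~0.97 cutoff mentioned above), one can filter on `url_active` and `pface`. A stdlib sketch over two made-up rows mimicking these fields:

```python
import math

# Two made-up rows mimicking the fields in the table above.
rows = [
    {"URL": "https://example.com/a.jpg", "occupation": "nurse",
     "pface": 0.991, "url_active": True},
    {"URL": "https://example.com/b.jpg", "occupation": "pilot",
     "pface": float("nan"), "url_active": False},
]

# Keep retrievable rows whose face probability clears the cutoff;
# the NaN check skips rows where the image could not be fetched.
faces = [
    r for r in rows
    if r["url_active"] and not math.isnan(r["pface"]) and r["pface"] > 0.97
]
```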
anan-2024/twitter_dataset_1713211085 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 51568
num_examples: 153
download_size: 32902
dataset_size: 51568
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713165139 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 14206
num_examples: 36
download_size: 15953
dataset_size: 14206
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713165139"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tsa17/resume_ner17_hr | ---
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence: string
- name: prediction
list:
- name: end
dtype: int64
- name: label
dtype: string
- name: score
dtype: float64
- name: start
dtype: int64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: vectors
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: annotated
struct:
- name: mentions
sequence: 'null'
- name: tags
sequence: 'null'
- name: predicted
struct:
- name: mentions
list:
- name: capitalness
dtype: string
- name: chars_length
dtype: int64
- name: density
dtype: float64
- name: label
dtype: string
- name: score
dtype: float64
- name: tokens_length
dtype: int64
- name: value
dtype: string
- name: tags
list:
- name: tag
dtype: string
- name: value
dtype: string
- name: text_length
dtype: int64
- name: tokens
list:
- name: capitalness
dtype: string
- name: char_end
dtype: int64
- name: char_start
dtype: int64
- name: custom
dtype: 'null'
- name: idx
dtype: int64
- name: length
dtype: int64
- name: score
dtype: 'null'
- name: tag
dtype: string
- name: value
dtype: string
- name: tokens_length
dtype: int64
splits:
- name: train
num_bytes: 3390188
num_examples: 50
download_size: 828976
dataset_size: 3390188
---
# Dataset Card for "resume_ner17_hr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
james-burton/OrientalMuseum_min3-3Dwhite-name | ---
dataset_info:
features:
- name: obj_num
dtype: string
- name: file
dtype: string
- name: image
dtype: image
- name: root
dtype: string
- name: description
dtype: string
- name: label
dtype:
class_label:
names:
'0': Aegis
'1': Ajaeng Holder
'2': Album Painting
'3': Amulet Mould
'4': Animal Figurine
'5': Animal Mummy
'6': Animal bone
'7': Arm Guard
'8': Axe Head
'9': Axle-caps
'10': Ball
'11': Ballista Bolt
'12': Band
'13': Basin
'14': Baton
'15': Bead Net
'16': Belt Hook
'17': Betel Nut Cutter
'18': Blouse
'19': Blu-ray disc
'20': Bolt
'21': Book Cover
'22': Box
'23': Brush Pot
'24': Brush Rest
'25': Brush Tray
'26': Bulb Bowl
'27': Bullet Mould
'28': Burnisher
'29': Cabinet
'30': Cannon
'31': Cap
'32': Carved stone
'33': Case
'34': Cash Box
'35': Chest
'36': Cigar Holder
'37': Clapper
'38': Clay pipe (smoking)
'39': Comb
'40': Compass
'41': Cosmetic and Medical Equipment and Implements
'42': Counterpoise
'43': Cricket pot
'44': Cross-bow Lock
'45': Cup And Saucer
'46': Cup, Saucer
'47': Cushion Cover
'48': DVDs
'49': Dagger
'50': Dice Box
'51': Dice Shaker
'52': Disc
'53': Domestic Equipment and Utensils
'54': Double Dagger
'55': Dummy
'56': Ear Protector
'57': Ear Stud
'58': Earring
'59': Elephant Goad
'60': Erotic Figurine
'61': Eye Protector
'62': Fan Case
'63': Feet Protector
'64': Ferrous object
'65': Figurine Mould
'66': File
'67': Finger Ring
'68': Fitting
'69': Flannel
'70': Flute
'71': Funerary Cone
'72': Funerary goods
'73': Funerary money
'74': Furosode
'75': Greek crosses
'76': Hand Jade
'77': Hand Protector
'78': Handwarmer
'79': Hanging
'80': Headband
'81': Heart Scarab
'82': Human Figurine
'83': Incense Holder
'84': Inkstick
'85': Jue (jade)
'86': Kite
'87': Knee Protector
'88': Kohl Pot
'89': Kundika
'90': Leaflet
'91': Leg
'92': Leg Protector
'93': Letter
'94': Lock
'95': Mah Jong Rack
'96': Majiang set
'97': Manuscript Page
'98': Massager
'99': Mat
'100': Mica Painting
'101': Miniature Painting
'102': Miniature Portrait
'103': Mortar
'104': Mould
'105': Mouth Jade
'106': Mouth Protector
'107': Mouth-piece
'108': Mummy Label
'109': Nail Protector
'110': Neck Guard
'111': Nose Protector
'112': Opium Pipe
'113': Opium Weight
'114': Oracle Bone
'115': Ostraka
'116': Paddle
'117': Palette
'118': Panel
'119': Part
'120': Pelmet
'121': Pencase
'122': Pendant
'123': Perfumer
'124': Phallus Protector
'125': Phylactery
'126': Pigstick
'127': Pipe
'128': Pipe Case
'129': Pipe Holder
'130': Pith Painting
'131': Plaque
'132': Plate
'133': Poh Kam
'134': Pounder
'135': Prayer Wheel
'136': Quoit
'137': Rank Square
'138': Rubber
'139': Sake Cup
'140': Scabbard Chape
'141': Scabbard Slide
'142': Scarab Seal
'143': Scarf
'144': Score Board
'145': Screen
'146': Seal
'147': Seal Paste Pot
'148': Shaft Terminal
'149': Shield
'150': Shroud Weight
'151': Sleeve Band
'152': Sleeve Weight
'153': Slide
'154': Soles
'155': Spillikins
'156': Staff Head
'157': Stamp
'158': Stand
'159': Stand of Incense Burner
'160': Stem Bowl
'161': Stem Cup
'162': Story Cloth
'163': Strainer
'164': Sword Guard
'165': Sword Knob
'166': T-shirts
'167': Table
'168': Table Runner
'169': Thangka
'170': Throwing Stick
'171': Tomb Figure
'172': Tomb Model
'173': Tongue Protector
'174': Washer
'175': Water Dropper
'176': Water Pot
'177': Wine Pot
'178': Womb Protector
'179': Woodblock Print
'180': Writing Desk
'181': accessories
'182': adzes
'183': alabastra
'184': albums
'185': altar components
'186': altars
'187': amphorae
'188': amulets
'189': anchors
'190': animation cels
'191': animation drawings
'192': anklets
'193': armbands
'194': armor
'195': armrests
'196': arrowheads
'197': arrows
'198': autograph albums
'199': axes
'200': 'axes: woodworking tools'
'201': back scratchers
'202': badges
'203': bags
'204': balances
'205': bandages
'206': bangles
'207': banners
'208': baskets
'209': beads
'210': beakers
'211': bedspreads
'212': bells
'213': belts
'214': bezels
'215': bi
'216': blades
'217': blowguns
'218': board games
'219': boats
'220': boilers
'221': bone
'222': booklets
'223': books
'224': bottles
'225': bowls
'226': boxes
'227': bracelets
'228': bread
'229': brick
'230': brooches
'231': brush washers
'232': brushes
'233': buckets
'234': buckles
'235': business cards
'236': buttons
'237': caddies
'238': calendars
'239': calligraphy
'240': candelabras
'241': candleholders
'242': candlesticks
'243': canopic jars
'244': card cases
'245': card tables
'246': cards
'247': carvings
'248': cases
'249': cash
'250': celestial globes
'251': censers
'252': chains
'253': chairs
'254': charms
'255': charts
'256': chess sets
'257': chessmen
'258': chisels
'259': chokers
'260': chopsticks
'261': cigarette cases
'262': cigarette holders
'263': cippi
'264': clamps
'265': clappers
'266': claypipe
'267': cloth
'268': clothing
'269': coats
'270': coffins
'271': coins
'272': collar
'273': combs
'274': compact discs
'275': containers
'276': coverings
'277': covers
'278': crucifixes
'279': cuffs
'280': cups
'281': cushions
'282': cutlery
'283': cylinder seals
'284': deels
'285': deity figurine
'286': diagrams
'287': dice
'288': dishes
'289': document containers
'290': documents
'291': dolls
'292': doors
'293': drawings
'294': dresses
'295': dressing gowns
'296': drums
'297': dung-chen
'298': earrings
'299': embroidery
'300': ensembles
'301': envelopes
'302': 'equipment for personal use: grooming, hygiene and health care'
'303': ewers
'304': fans
'305': fasteners
'306': 'feet: furniture components'
'307': female figurine
'308': ferrules
'309': fiddles
'310': figures
'311': figurines
'312': finials
'313': fishhooks
'314': flagons
'315': flags
'316': flasks
'317': flint
'318': fragments
'319': funnels
'320': furniture components
'321': gameboards
'322': games
'323': gaming counters
'324': ge
'325': glassware
'326': gloves
'327': goblets
'328': gongs
'329': gowns
'330': greeting cards
'331': hair ornaments
'332': hairpins
'333': hammerstones
'334': handkerchiefs
'335': handles
'336': handscrolls
'337': hanging scrolls
'338': harnesses
'339': hatpins
'340': hats
'341': headdresses
'342': headrests
'343': heads
'344': headscarves
'345': helmets
'346': hobs
'347': hoods
'348': hooks
'349': houses
'350': identity cards
'351': illuminated manuscripts
'352': incense burners
'353': incense sticks
'354': ink bottles
'355': inkstands
'356': inkstones
'357': inkwells
'358': inlays
'359': iron
'360': jackets
'361': jar seal
'362': jars
'363': jewelry
'364': jue
'365': juglets
'366': jugs
'367': kayagum
'368': keys
'369': kimonos
'370': knives
'371': kŏmun'gos
'372': ladles
'373': lamps
'374': lanterns
'375': lanyards
'376': leatherwork
'377': lids
'378': lockets
'379': loom weights
'380': maces
'381': manuscripts
'382': maps
'383': maquettes
'384': masks
'385': medals
'386': miniatures
'387': mirrors
'388': miscellaneous
'389': models
'390': money
'391': mortarboards
'392': mounts
'393': mugs
'394': mummies
'395': musical instruments
'396': nails
'397': necklaces
'398': needles
'399': netsukes
'400': nozzles
'401': obelisks
'402': obis
'403': oboes
'404': oil lamps
'405': ornaments
'406': overdresses
'407': pages
'408': paintings
'409': paper money
'410': paperweights
'411': papyrus
'412': passports
'413': pectorals
'414': pendants
'415': pennants
'416': pestles
'417': petticoats
'418': photograph albums
'419': photographs
'420': pictures
'421': pins
'422': pipes
'423': pitchers
'424': plaques
'425': plaster
'426': playing card boxes
'427': playing cards
'428': plinths
'429': plumb bobs
'430': plumbing fixtures
'431': plume holders
'432': poker
'433': pommels
'434': postage stamps
'435': postcards
'436': posters
'437': pots
'438': pottery
'439': prayer beads
'440': prayers
'441': printing blocks
'442': printing plates
'443': prints
'444': punch bowls
'445': puppets
'446': purses
'447': puzzles
'448': pyxides
'449': quilts
'450': rag-dung
'451': razors
'452': reliefs
'453': rifles
'454': rings
'455': robes
'456': roofing tile
'457': rosaries
'458': rose bowls
'459': rubbings
'460': rugs
'461': rulers
'462': sandals
'463': saris
'464': sarongs
'465': sashes
'466': sauceboats
'467': saucers
'468': saws
'469': scabbards
'470': scaraboids
'471': scarabs
'472': scarves
'473': scepters
'474': scissors
'475': scrolls
'476': sculpture
'477': seed
'478': seppa
'479': shadow puppets
'480': shawls
'481': shears
'482': shell
'483': shelves
'484': sherds
'485': shields
'486': shoes
'487': shrines
'488': sistra
'489': situlae
'490': sketches
'491': skewers
'492': skirts
'493': snuff bottles
'494': socks
'495': spatulas
'496': spearheads
'497': spears
'498': spittoons
'499': spoons
'500': stampers
'501': staples
'502': statues
'503': statuettes
'504': steelyards
'505': stelae
'506': sticks
'507': stirrup jars
'508': stools
'509': stoppers
'510': straps
'511': studs
'512': styluses
'513': sugar bowls
'514': sugar tongs
'515': swagger sticks
'516': swords
'517': tablecloths
'518': tablets
'519': tacks
'520': talismans
'521': tallies
'522': tangrams
'523': tankards
'524': tea bowls
'525': tea caddies
'526': tea kettles
'527': teacups
'528': teapots
'529': telephones
'530': ties
'531': tiles
'532': toggles
'533': toilet caskets
'534': tools
'535': toys
'536': trays
'537': trimming
'538': trophies
'539': trousers
'540': trumpets
'541': tubes
'542': tureens
'543': tweezers
'544': typewriters
'545': underdresses
'546': underwear
'547': unidentified
'548': urinals
'549': ushabti
'550': utensils
'551': vases
'552': veils
'553': vessels
'554': votive offerings
'555': waistcoats
'556': wall tile
'557': watches
'558': weighing devices
'559': weight
'560': weights
'561': whetstones
'562': whistles
'563': whorls
'564': wire
'565': wood blocks
'566': writing boards
'567': xylophones
- name: other_name
dtype: string
- name: material
dtype: string
- name: production.period
dtype: string
- name: production.place
dtype: string
splits:
- name: validation
num_bytes: 689672761.257
num_examples: 5489
- name: test
num_bytes: 658367756.568
num_examples: 5489
- name: train
num_bytes: 4933427063.75
num_examples: 116625
download_size: 6297757734
dataset_size: 6281467581.575
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- split: train
path: data/train-*
---
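The `configs` block above maps each split to a glob pattern over files in the dataset repository (the standard Hugging Face `data_files` convention — an assumption; the card itself does not spell this out). A minimal sketch of how those patterns select shard files, with invented shard names:

```python
# Sketch: resolve the split -> glob mapping from the `configs` block.
# The glob patterns are copied from the card; the shard file names
# below are hypothetical examples.
from fnmatch import fnmatch

data_files = {
    "validation": "data/validation-*",
    "test": "data/test-*",
    "train": "data/train-*",
}

def files_for_split(split, repo_files):
    """Return repo files matching the split's glob pattern."""
    return [f for f in repo_files if fnmatch(f, data_files[split])]

repo_files = [
    "data/train-00000-of-00002.parquet",
    "data/train-00001-of-00002.parquet",
    "data/test-00000-of-00001.parquet",
]
print(files_for_split("train", repo_files))
# ['data/train-00000-of-00002.parquet', 'data/train-00001-of-00002.parquet']
```

In practice `datasets.load_dataset("james-burton/OrientalMuseum_min3-3Dwhite-name", split="train")` would apply this mapping for you; the sketch only makes the file selection explicit.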
|