| datasetId | card |
|---|---|
samu/emb_tutor_dataset | ---
license: mit
---
|
lmsys/lmsys-chat-1m | ---
size_categories:
- 1M<n<10M
task_categories:
- conversational
extra_gated_prompt: You agree to the [LMSYS-Chat-1M Dataset License Agreement](https://huggingface.co/datasets/lmsys/lmsys-chat-1m#lmsys-chat-1m-dataset-license-agreement).
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
extra_gated_button_content: I agree to the terms and conditions of the LMSYS-Chat-1M
Dataset License Agreement.
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
list:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: redacted
dtype: bool
splits:
- name: train
num_bytes: 2626438904
num_examples: 1000000
download_size: 1488850250
dataset_size: 2626438904
---
## LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset
This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.
The conversations were collected in the wild from 210K unique IP addresses on the [Vicuna demo and Chatbot Arena website](https://chat.lmsys.org/) between April and August 2023.
Each sample includes a conversation ID, model name, conversation text in OpenAI API JSON format, detected language tag, and OpenAI moderation API tag.
User consent is obtained through the "Terms of use" section on the data collection website.
To ensure the safe release of data, we have made our best efforts to remove all conversations that contain personally identifiable information (PII).
In addition, we have included the OpenAI moderation API output for each message.
However, we have chosen to keep unsafe conversations so that researchers can study the safety-related questions associated with LLM usage in real-world scenarios as well as the OpenAI moderation process.
For more details, please refer to the paper: https://arxiv.org/abs/2309.11998
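Given the schema in the YAML header above, each record can be screened on its per-message moderation output. The following is a minimal sketch (the sample record is fabricated for illustration; field names follow the `dataset_info` block, and after being granted access the same predicate could be passed to `Dataset.filter` from the `datasets` library):

```python
# Sketch: detect whether the OpenAI moderation API flagged any message
# in a conversation. The sample record below is illustrative, not a
# real entry from the dataset.
def is_flagged(example):
    """Return True if any message in the conversation was flagged."""
    return any(m["flagged"] for m in example["openai_moderation"])

sample = {
    "conversation_id": "abc123",
    "model": "vicuna-13b",
    "conversation": [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi, how can I help?"},
    ],
    "turn": 1,
    "language": "English",
    "openai_moderation": [
        {"flagged": False},
        {"flagged": False},
    ],
    "redacted": False,
}

print(is_flagged(sample))  # False
```

With the gated dataset loaded via `load_dataset("lmsys/lmsys-chat-1m", split="train")`, `ds.filter(lambda ex: not is_flagged(ex))` would keep only conversations with no flagged messages.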
**Basic Statistics**
| Key | Value |
| --- | --- |
| # Conversations | 1,000,000 |
| # Models | 25 |
| # Users | 210,479 |
| # Languages | 154 |
| Avg. # Turns per Sample | 2.0 |
| Avg. # Tokens per Prompt | 69.5 |
| Avg. # Tokens per Response | 214.5 |
**PII Redaction**
We partnered with the [OpaquePrompts](https://opaqueprompts.opaque.co/) team to redact person names in this dataset to protect user privacy.
Names like "Mary" and "James" in a conversation will appear as "NAME_1" and "NAME_2". For example:
```json
Raw: [ { "content": "Write me a bio. My Name is Mary I am a student who is currently a beginner free lancer. I worked with James in the past ..." }]
Redacted: [ { "content": "Write me a bio. My Name is NAME_1 I am a student who is currently a beginner free lancer. I worked with NAME_2 in the past ..." }]
```
Each conversation includes a "redacted" field to indicate if it has been redacted.
This process may impact data quality and occasionally lead to incorrect redactions.
We are working on improving the redaction quality and will release improved versions in the future.
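As an illustration (not part of the official tooling), the `NAME_<n>` placeholders inserted by the redaction process can be located with a simple regular expression:

```python
import re

# Placeholders produced by the redaction pipeline look like NAME_1, NAME_2, ...
PLACEHOLDER = re.compile(r"\bNAME_\d+\b")

def redacted_names(text):
    """Return the distinct redaction placeholders appearing in a message."""
    return sorted(set(PLACEHOLDER.findall(text)))

msg = ("Write me a bio. My Name is NAME_1 I am a student who is currently "
       "a beginner free lancer. I worked with NAME_2 in the past ...")
print(redacted_names(msg))  # ['NAME_1', 'NAME_2']
```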
If you want to access the raw conversation data, please fill out [the form](https://docs.google.com/forms/d/1PZw67e19l0W3oCiQOjzSyZvXfOemhg6LCY0XzVmOUx0/edit) with details about your intended use cases.
## Uniqueness and Potential Usage
This dataset features large-scale real-world conversations with LLMs.
We believe it will help the AI research community answer important questions around topics like:
- Characteristics and distributions of real-world user prompts
- AI safety and content moderation
- Training instruction-following models
- Improving and evaluating LLM evaluation methods
- Model selection and request dispatching algorithms
For more details, please refer to the paper: https://arxiv.org/abs/2309.11998
## LMSYS-Chat-1M Dataset License Agreement
This Agreement contains the terms and conditions that govern your access and use of the LMSYS-Chat-1M Dataset (as defined above). You may not use the LMSYS-Chat-1M Dataset if you do not accept this Agreement. By clicking to accept, accessing the LMSYS-Chat-1M Dataset, or both, you hereby agree to the terms of the Agreement. If you are agreeing to be bound by the Agreement on behalf of your employer or another entity, you represent and warrant that you have full legal authority to bind your employer or such entity to this Agreement. If you do not have the requisite authority, you may not accept the Agreement or access the LMSYS-Chat-1M Dataset on behalf of your employer or another entity.
- Safety and Moderation: **This dataset contains unsafe conversations that may be perceived as offensive or unsettling.** Users should apply appropriate filters and safety measures before utilizing this dataset for training dialogue agents.
- Non-Endorsement: The views and opinions depicted in this dataset **do not reflect** the perspectives of the researchers or affiliated institutions engaged in the data collection process.
- Legal Compliance: You are mandated to use it in adherence with all pertinent laws and regulations.
- Model Specific Terms: When leveraging direct outputs of a specific model, users must adhere to its corresponding terms of use.
- Non-Identification: You **must not** attempt to identify the identities of individuals or infer any sensitive personal data encompassed in this dataset.
- Prohibited Transfers: You should not distribute, copy, disclose, assign, sublicense, embed, host, or otherwise transfer the dataset to any third party.
- Right to Request Deletion: At any time, we may require you to delete all copies of the conversation dataset (in whole or in part) in your possession and control. You will promptly comply with any and all such requests. Upon our request, you shall provide us with written confirmation of your compliance with such requirement.
- Termination: We may, at any time, for any reason or for no reason, terminate this Agreement, effective immediately upon notice to you. Upon termination, the license granted to you hereunder will immediately terminate, and you will immediately stop using the LMSYS-Chat-1M Dataset and destroy all copies of the LMSYS-Chat-1M Dataset and related materials in your possession or control.
- Limitation of Liability: IN NO EVENT WILL WE BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL, EXEMPLARY, PUNITIVE, SPECIAL, OR INDIRECT DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, BUSINESS INTERRUPTION, OR LOSS OF INFORMATION) ARISING OUT OF OR RELATING TO THIS AGREEMENT OR ITS SUBJECT MATTER, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Subject to your compliance with the terms and conditions of this Agreement, we grant to you a limited, non-exclusive, non-transferable, non-sublicensable license to use the LMSYS-Chat-1M Dataset, including the conversation data and annotations, to research, develop, and improve software, algorithms, machine learning models, techniques, and technologies for both research and commercial purposes.
## Citation
```
@misc{zheng2023lmsyschat1m,
title={LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset},
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Tianle Li and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zhuohan Li and Zi Lin and Eric. P Xing and Joseph E. Gonzalez and Ion Stoica and Hao Zhang},
year={2023},
eprint={2309.11998},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
heliosprime/twitter_dataset_1713197994 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 20098
num_examples: 57
download_size: 18170
dataset_size: 20098
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713197994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
parksimon0808/prm800k-llama-generator-inference | ---
dataset_info:
features:
- name: texts
dtype: string
- name: answers
dtype: string
splits:
- name: test
num_bytes: 506185
num_examples: 2868
download_size: 76891
dataset_size: 506185
---
# Dataset Card for "prm800k-llama-generator-inference"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adamo1139/rawrr_v2 | ---
license: cc-by-nc-4.0
---
|
gustproof/sd-data | ---
license: cc-by-nc-sa-4.0
---
|
MarkrAI/eli5_sample_autorag | ---
configs:
- config_name: qa
splits:
- name: train
data_files: qa_train.parquet
- name: test
data_files: qa_test.parquet
- config_name: corpus
data_files: corpus.parquet
license: unknown
---
|
alexshengzhili/LLAVA-graph-OCRCleaned | ---
license: mit
---
|
SIAKAM/jubao_finetune | ---
license: openrail
---
|
KhimNguyen/spidertest | ---
dataset_info:
features:
- name: db_id
dtype: string
- name: query
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 388446.75675675675
num_examples: 1065
- name: test
num_bytes: 97385.24324324324
num_examples: 267
download_size: 69452
dataset_size: 485832.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
freshpearYoon/train_free_43 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 9604902504
num_examples: 10000
download_size: 1431765342
dataset_size: 9604902504
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/fuyou_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of fuyou (Pokémon)
This is the dataset of fuyou (Pokémon), containing 200 images and their tags.
The core tags of this character are `short_hair, hair_ornament, dark_skin, hair_flower, dark-skinned_female, blue_eyes, breasts, black_hair, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 200 | 168.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 200 | 108.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 426 | 206.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 200 | 154.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 426 | 270.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/fuyou_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bandeau, blue_sarong, flower, navel, smile, bare_shoulders, solo, cleavage, midriff, print_sarong, blush, holding_poke_ball, tube_top, poke_ball_(basic), anklet, barefoot, hand_on_hip, large_breasts, open_mouth |
| 1 | 7 |  |  |  |  |  | 1girl, bare_shoulders, blue_sarong, flower, pokemon_(creature), print_sarong, smile, bandeau, navel, anklet, barefoot, midriff, open_mouth, tube_top |
| 2 | 6 |  |  |  |  |  | 1girl, eyelashes, open_mouth, :d, blue_sarong, pink_flower, pokemon_(creature), tongue, bangs, bare_shoulders, looking_at_viewer, strapless, blush, collarbone, navel, petals, spiked_hair, swimsuit |
| 3 | 8 |  |  |  |  |  | 1girl, bangs, detached_sleeves, dress, eyelashes, hairband, official_alternate_costume, pokemon_(creature), looking_at_viewer, open_mouth, tongue, blush, :d, hand_up |
| 4 | 14 |  |  |  |  |  | 1girl, flower, hetero, penis, sex, solo_focus, nipples, vaginal, 1boy, blue_sarong, blush, navel, open_mouth, spread_legs, smile, uncensored, cum_in_pussy, girl_on_top, medium_breasts, no_panties, print_sarong, bandeau, small_breasts, cowgirl_position, large_breasts, sweat |
| 5 | 5 |  |  |  |  |  | 1girl, flower, nipples, nude, smile, blush, solo, looking_at_viewer, closed_mouth, huge_breasts, large_breasts, medium_breasts, open_mouth, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bandeau | blue_sarong | flower | navel | smile | bare_shoulders | solo | cleavage | midriff | print_sarong | blush | holding_poke_ball | tube_top | poke_ball_(basic) | anklet | barefoot | hand_on_hip | large_breasts | open_mouth | pokemon_(creature) | eyelashes | :d | pink_flower | tongue | bangs | looking_at_viewer | strapless | collarbone | petals | spiked_hair | swimsuit | detached_sleeves | dress | hairband | official_alternate_costume | hand_up | hetero | penis | sex | solo_focus | nipples | vaginal | 1boy | spread_legs | uncensored | cum_in_pussy | girl_on_top | medium_breasts | no_panties | small_breasts | cowgirl_position | sweat | nude | closed_mouth | huge_breasts | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:--------------|:---------|:--------|:--------|:-----------------|:-------|:-----------|:----------|:---------------|:--------|:--------------------|:-----------|:--------------------|:---------|:-----------|:--------------|:----------------|:-------------|:---------------------|:------------|:-----|:--------------|:---------|:--------|:--------------------|:------------|:-------------|:---------|:--------------|:-----------|:-------------------|:--------|:-----------|:-----------------------------|:----------|:---------|:--------|:------|:-------------|:----------|:----------|:-------|:--------------|:-------------|:---------------|:--------------|:-----------------|:-------------|:----------------|:-------------------|:--------|:-------|:---------------|:---------------|:-------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | X | | | X | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | | X | | X | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | | | | | | | | | | X | | | | | | | | X | X | X | X | | X | X | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | X | X | X | X | | | | | X | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 5 | 5 |  |  |  |  |  | X | | | X | | X | | X | | | | X | | | | | | | X | X | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | X | | | | | X | X | X | X |
|
shibing624/sharegpt_gpt4 | ---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
- en
- gl
- ko
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/tree/main
task_categories:
- text-classification
- text-generation
task_ids:
- text-scoring
paperswithcode_id: LLM
pretty_name: GPT4 sharegpt
---
# Dataset Card
## Dataset Description
- **Repository:** [shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
- **Dataset:** [shibing624/sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- **Size of downloaded dataset files:** 772 MB
- **Total amount of disk used:** 772 MB
### Dataset Summary
Multi-turn GPT-4 question-answering data selected from ShareGPT; multilingual Q&A.
### Languages
The dataset is multilingual, covering common languages such as Chinese, English, and Japanese.
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- `conversations`: a list of `string` values.
```shell
head -n 1 sharegpt_gpt4.jsonl
{"conversations":[
{'from': 'human',
'value': '採用優雅現代中文,用中文繁體字型,回答以下問題。為所有標題或專用字詞提供對應的英語翻譯:Using scholarly style, summarize in detail James Barr\'s book "Semantics of Biblical Language". Provide examples supplied in the book and give corresponding page numbers wherever applicable.'},
{'from': 'gpt',
'value': '《聖經語言的語義學》(Semantics of Biblical Language)是詹姆斯·巴爾(James Barr)於1961年出版的一本研究聖經語言的專著。在這部作品中,巴爾對聖經希伯來語和希臘語的語言學特徵進行了深入的探討,他批判了當時盛行的聖經研究方法,主張應該將語言學原理應用於聖經文本的詮釋,以便更準確地理解聖經的意思。\n\n巴爾在《聖經語言的語義學》中提出了幾個重要的觀點。首先,他主張語言學和語言學家在聖經研究中的角色應該受到重視。他認為,應該避免將聖經語言學和其他古典語言學區分開來,因為聖經語言和其他古代語言一樣,都有其固有的語法、語義和詞彙特點(第18-23頁)。\n\n其次,巴爾批評了當時流行的"字根意義論"(root-meaning fallacy),即過分強調單詞的字根在詮釋聖經含義時的作用。他指出,詞義是由語境決定的,而非僅僅由字根決定。巴爾舉了希伯來語單詞 "כפר"(kāpar)為例,它在不同的語境下可以表示"遮蓋"、"贖回"或"和解",而不是僅僅依賴其字根意義(第183-184頁)。\n\n第三,巴爾強調了詞彙意義在詮釋聖經時的重要性。他認為,了解單詞的意義和用法對於恰當理解聖經文本至關重要。巴爾引用了希伯來語單詞 "ברית"(berîṯ,英文:covenant)和希臘語單詞 "διαθήκη"(diathēkē,英文:testament)為例,解釋了這兩個詞在聖經中的不'},
{'from': 'human', 'value': '繼續'},
{'from': 'gpt',
'value': '同含義和用法。巴爾認為,了解這些詞語在不同語境下的意義對於理解聖經中的盟約和遺囑概念至關重要(第50-53頁)。\n\n第四,巴爾強調語境在詮釋聖經時的重要性。他認為,要理解聖經的含義,就必須考慮到文本的歷史背景、文化環境和語言風格。此外,巴爾提倡從語言學的角度研究聖經,以便更好地理解聖經的結構和風格。他以希伯來詩篇的平行主義(Parallelism)為例,闡述了語言學原理如何有助於揭示詩篇中的意象和修辭結構(第250-252頁)。\n\n總之,《聖經語言的語義學》一書對聖經研究領域產生了深遠的影響。詹姆斯·巴爾通過對聖經語言學的深入研究,提出了許多富有洞察力的觀點,尤其是關於語言學在聖經詮釋中的重要性。這些觀點對後來的聖經研究者產生了很大的啟示,使他們在研究聖經時更加重視語言學原理,從而更加準確地理解聖經的含義。'}
]
}
```
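A sketch of converting a record's ShareGPT-style `conversations` list into OpenAI-style chat messages (the `human`/`gpt` role mapping is an assumption based on common ShareGPT tooling, not something this card specifies):

```python
import json

# Map ShareGPT speaker tags to OpenAI-style roles (assumed convention).
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def to_messages(record):
    """Convert one ShareGPT-format record to a list of chat messages."""
    return [
        {"role": ROLE_MAP.get(turn["from"], turn["from"]), "content": turn["value"]}
        for turn in record["conversations"]
    ]

# One illustrative JSONL line in the same shape as the example above.
line = '{"conversations": [{"from": "human", "value": "hi"}, {"from": "gpt", "value": "hello"}]}'
messages = to_messages(json.loads(line))
print([m["role"] for m in messages])  # ['user', 'assistant']
```

Applied line by line over `sharegpt_gpt4.jsonl`, this yields conversations ready for most chat-format training pipelines.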
### Data Splits
```shell
> wc -l *
   6206 sharegpt_gpt4.jsonl  # Cleaned high-quality GPT-4 daily Q&A dataset, size: 6K; mainly knowledge Q&A, programming problems, and reasoning/calculation; multi-turn dialogues in Simplified Chinese, Traditional Chinese, English, Japanese, Korean, and other languages.
  58674 sharegpt_V3_format.jsonl  # Original V3 ShareGPT dataset after format normalization, size: 58K; mainly everyday Q&A with colloquial prompts; multilingual, multi-turn dialogues.
  38535 sharegpt_zh_38K_format.jsonl  # Chinese GPT-4 daily Q&A dataset, size: 38K; mainly knowledge Q&A, translation, help requests, and programming/reasoning tasks with colloquial prompts; Chinese, multi-turn dialogues.
103415 total
```
#### Who are the annotators?
The original authors.
### Licensing Information
Same as ShareGPT.
### Contributions
[shibing624](https://github.com/shibing624) added this dataset. |
HKUST-FYPHO2/audio-infos | ---
dataset_info:
features:
- name: chords
sequence: int64
- name: chord_times
sequence: float64
- name: beats
sequence: float64
- name: downbeats
sequence: float64
- name: sample_rate
dtype: int64
- name: genre
dtype: string
- name: audio_name
dtype: string
- name: url
dtype: string
- name: playlist
dtype: string
- name: time_accessed
dtype: int64
- name: views
dtype: int64
- name: length
dtype: int64
- name: rating
dtype: string
- name: age_restricted
dtype: bool
splits:
- name: train
num_bytes: 10545532
num_examples: 1495
download_size: 2617793
dataset_size: 10545532
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
easytpp/volcano | ---
license: apache-2.0
---
|
Prometutas/hgymviadaokakaka | ---
license: openrail
---
|
GSON-backup/KoInstruct-Base | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: type
dtype: string
- name: template
dtype: string
splits:
- name: train
num_bytes: 279249821
num_examples: 50169
download_size: 128982824
dataset_size: 279249821
---
# Dataset Card for "KoInstruct-QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
krish1999/pharmaceutical-classification | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: dataset
num_bytes: 34480
num_examples: 100
download_size: 12864
dataset_size: 34480
configs:
- config_name: default
data_files:
- split: dataset
path: data/dataset-*
---
|
lakshmikarpolam/pneumonia | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 363831717.625
num_examples: 3875
download_size: 331363448
dataset_size: 363831717.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
api-misuse/java_repo_star | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: repo_name
dtype: string
- name: stars_count
dtype: int64
- name: repo_head_hexsha
dtype: string
splits:
- name: train
num_bytes: 767389.0
num_examples: 9641
download_size: 652097
dataset_size: 767389.0
---
# Dataset Card for "java_repo_star"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kaychen/myfirstdataset | ---
license: apache-2.0
---
|
netcat420/MHENN | ---
license: mit
---
|
allandclive/UgandaLex2 | ---
task_categories:
- text-generation
- translation
language:
- ach
- alz
- teo
- gwr
- adh
- keo
- kin
- laj
- lgg
- myx
- kdj
- nyn
- nuj
- xog
- lg
- en
- luc
- kbo
- tjl
- rub
- ndp
- nyo
- lsm
pretty_name: UgandaLex2
size_categories:
- 1K<n<10K
---
### UgandaLex2: A Parallel Text Translation Corpus in 24 Ugandan Languages (3 added languages)
UgandaLex2 is a corpus of parallel texts sourced from Bible translations across 24 Ugandan languages. It provides a resource for studying linguistic variation across Uganda's diverse language landscape. With aligned texts from multiple Bible translations, researchers, linguists, and developers can explore translation patterns and investigate the cultural and linguistic heritage of different communities. UgandaLex opens avenues for cross-linguistic analysis and the development of language technologies tailored to Ugandan languages.
### Languages
**Kebu**, Acholi, **Saamya-Gwe**, **Nyoro**, Alur, Aringa, Ateso, Ganda, Gwere, Jopadhola, Kakwa, Kinyarwanda, Kumam, Lango, Lugbara, Masaaba, Ng'akarimojong, Nyankore, Nyole, Soga, Swahili, English, Gungu, Keliko, Talinga-Bwisi (bold marks the three languages newly added in this version)
### Contributors
@allandclive & @oumo_os |
Binaryy/cream_listings | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: location
dtype: string
- name: features
sequence: string
- name: description
dtype: string
- name: images
sequence: string
- name: videos
sequence: string
- name: available
dtype: bool
- name: price
dtype: float64
- name: attachedDocument
sequence: 'null'
- name: year
dtype: int64
- name: carCondition
dtype: string
- name: engineType
dtype: string
- name: colour
dtype: string
- name: model
dtype: string
- name: noOfBed
dtype: float64
- name: noOfBathroom
dtype: float64
- name: locationISO
dtype: string
- name: forRent
dtype: bool
- name: views
sequence: string
- name: thoseWhoSaved
sequence: string
- name: createdAt
dtype: string
- name: updatedAt
dtype: string
- name: __v
dtype: int64
- name: category._id
dtype: string
- name: category.title
dtype: string
- name: category.slug
dtype: string
- name: category.isAdminAllowed
dtype: string
- name: category.createdAt
dtype: string
- name: category.updatedAt
dtype: string
- name: category.__v
dtype: int64
- name: postedBy.pageViews.value
dtype: int64
- name: postedBy.pageViews.users
sequence: 'null'
- name: postedBy.totalSaved.value
dtype: int64
- name: postedBy.totalSaved.users
sequence: string
- name: postedBy._id
dtype: string
- name: postedBy.firstName
dtype: string
- name: postedBy.lastName
dtype: string
- name: postedBy.about
dtype: string
- name: postedBy.cover
dtype: string
- name: postedBy.email
dtype: string
- name: postedBy.password
dtype: string
- name: postedBy.isAdmin
dtype: bool
- name: postedBy.savedListing
sequence: string
- name: postedBy.isVerified
dtype: bool
- name: postedBy.verifiedProfilePicture
dtype: float64
- name: postedBy.profilePicture
dtype: string
- name: postedBy.pronoun
dtype: float64
- name: postedBy.userType
dtype: int64
- name: postedBy.accountType
dtype: int64
- name: postedBy.subscribed
dtype: bool
- name: postedBy.noOfSubscription
dtype: int64
- name: postedBy.totalListing
dtype: int64
- name: postedBy.sellerType
dtype: int64
- name: postedBy.createdAt
dtype: string
- name: postedBy.updatedAt
dtype: string
- name: postedBy.__v
dtype: int64
- name: postedBy.address
dtype: string
- name: postedBy.city
dtype: string
- name: postedBy.country
dtype: string
- name: postedBy.gender
dtype: string
- name: postedBy.nationality
dtype: string
- name: postedBy.verificationType
dtype: float64
- name: postedBy.dob
dtype: string
- name: postedBy.locationISO
dtype: string
- name: postedBy.state
dtype: string
- name: postedBy.zipCode
dtype: float64
- name: postedBy.otherNames
dtype: string
- name: postedBy.facebookUrl
dtype: string
- name: postedBy.instagramUrl
dtype: string
- name: postedBy.phoneNumber1
dtype: string
- name: postedBy.phoneNumber2
dtype: string
- name: postedBy.websiteUrl
dtype: string
- name: postedBy.accountName
dtype: string
- name: postedBy.accountNo
dtype: string
- name: postedBy.bankName
dtype: string
- name: postedBy.verificationId
dtype: float64
- name: string_features
dtype: string
- name: complete_description
dtype: string
splits:
- name: train
num_bytes: 1121870
num_examples: 301
download_size: 404441
dataset_size: 1121870
---
# Dataset Card for "cream_listings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_title_v3_train_10_eval_10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 276687
num_examples: 184
- name: validation
num_bytes: 64836
num_examples: 68
download_size: 71168
dataset_size: 341523
---
# Dataset Card for "squad_title_v3_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ShrinivasSK/en_te_2 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 4108551.3
num_examples: 18000
- name: test
num_bytes: 456505.7
num_examples: 2000
download_size: 2453776
dataset_size: 4565057.0
---
# Dataset Card for "data_te_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
benayas/atis_nlpaug_5pct_v2 | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 423278
num_examples: 4455
download_size: 182712
dataset_size: 423278
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CSJianYang/InstrMulti102 | ---
license: mit
---
Please download the zip archive. |
tyzhu/find_second_sent_train_10_eval_10_hint3 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 39579
num_examples: 30
- name: validation
num_bytes: 9273
num_examples: 10
download_size: 44665
dataset_size: 48852
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_second_sent_train_10_eval_10_hint3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/boudica_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of boudica/ブーディカ/布狄卡 (Fate/Grand Order)
This is the dataset of boudica/ブーディカ/布狄卡 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `red_hair, breasts, short_hair, large_breasts, short_ponytail, green_eyes, ponytail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 678.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/boudica_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 589.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/boudica_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1211 | 1.09 GiB | [Download](https://huggingface.co/datasets/CyberHarem/boudica_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/boudica_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; distinct outfits may be identifiable from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, cleavage, juliet_sleeves, navel, shrug_(clothing), solo, blush, looking_at_viewer, o-ring, smile, thighs, white_bikini, open_mouth, simple_background, thigh_strap, sitting, white_background |
| 1 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, open_mouth, penis, shrug_(clothing), sweat, thigh_strap, thighs, vaginal, juliet_sleeves, mosaic_censoring, nipples, pussy, solo_focus, aqua_eyes, girl_on_top, looking_at_viewer, sex_from_behind, spread_legs, thighhighs, anus, ass, doggystyle, heart, looking_back, navel, reverse_cowgirl_position |
| 2 | 6 |  |  |  |  |  | 1boy, 1girl, blush, grabbing_another's_breast, hetero, nipples, solo_focus, huge_breasts, shrug_(clothing), long_sleeves, open_mouth, puffy_sleeves |
| 3 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, collarbone, navel, nipples, thighs, completely_nude, female_pubic_hair, huge_breasts, smile, sweat |
| 4 | 9 |  |  |  |  |  | looking_at_viewer, playboy_bunny, fake_animal_ears, rabbit_ears, smile, thighs, 1girl, blush, highleg_leotard, solo, pantyhose, simple_background, aqua_eyes, bare_shoulders, white_background, white_leotard, bowtie, cleavage |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | juliet_sleeves | navel | shrug_(clothing) | solo | blush | looking_at_viewer | o-ring | smile | thighs | white_bikini | open_mouth | simple_background | thigh_strap | sitting | white_background | 1boy | hetero | penis | sweat | vaginal | mosaic_censoring | nipples | pussy | solo_focus | aqua_eyes | girl_on_top | sex_from_behind | spread_legs | thighhighs | anus | ass | doggystyle | heart | looking_back | reverse_cowgirl_position | grabbing_another's_breast | huge_breasts | long_sleeves | puffy_sleeves | collarbone | completely_nude | female_pubic_hair | playboy_bunny | fake_animal_ears | rabbit_ears | highleg_leotard | pantyhose | bare_shoulders | white_leotard | bowtie |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:-----------------|:--------|:-------------------|:-------|:--------|:--------------------|:---------|:--------|:---------|:---------------|:-------------|:--------------------|:--------------|:----------|:-------------------|:-------|:---------|:--------|:--------|:----------|:-------------------|:----------|:--------|:-------------|:------------|:--------------|:------------------|:--------------|:-------------|:-------|:------|:-------------|:--------|:---------------|:---------------------------|:----------------------------|:---------------|:---------------|:----------------|:-------------|:------------------|:--------------------|:----------------|:-------------------|:--------------|:------------------|:------------|:-----------------|:----------------|:---------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | X | X | | X | X | | | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | | X | | X | | | | | | X | | | | | X | X | | | | | X | | X | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | X | | X | X | X | | X | X | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | X | | | X | X | X | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | X | | | | X | X | X | | X | X | | | X | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
edarchimbaud/eps-revisions-stocks | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: up_last_7_days_current_qtr
dtype: float64
- name: next_qtr
dtype: string
- name: up_last_7_days_next_qtr
dtype: float64
- name: current_year
dtype: int64
- name: up_last_7_days_current_year
dtype: float64
- name: next_year
dtype: int64
- name: up_last_7_days_next_year
dtype: float64
- name: up_last_30_days_current_qtr
dtype: float64
- name: up_last_30_days_next_qtr
dtype: float64
- name: up_last_30_days_current_year
dtype: float64
- name: up_last_30_days_next_year
dtype: float64
- name: down_last_7_days_current_qtr
dtype: 'null'
- name: down_last_7_days_next_qtr
dtype: 'null'
- name: down_last_7_days_current_year
dtype: 'null'
- name: down_last_7_days_next_year
dtype: 'null'
- name: down_last_30_days_current_qtr
dtype: float64
- name: down_last_30_days_next_qtr
dtype: float64
- name: down_last_30_days_current_year
dtype: float64
- name: down_last_30_days_next_year
dtype: float64
splits:
- name: train
num_bytes: 3206767
num_examples: 20208
download_size: 263860
dataset_size: 3206767
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eps-revisions-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The eps-revisions-sp500 dataset provides information on earnings-per-share (EPS) revisions for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to analyze EPS revisions and their impact on the performance of companies in the S&P 500 index. It does not specify any particular leaderboard or evaluation metric.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): The ticker symbol identifying the company.
- date (string): The date the data was recorded.
- current_qtr (string): The current fiscal quarter.
- up_last_7_days_current_qtr (float64): The number of upward EPS estimate revisions over the last 7 days for the current quarter.
- next_qtr (string): The next fiscal quarter.
- up_last_7_days_next_qtr (float64): The number of upward EPS estimate revisions over the last 7 days for the next quarter.
- current_year (int64): The current fiscal year.
- up_last_7_days_current_year (float64): The number of upward EPS estimate revisions over the last 7 days for the current year.
- next_year (int64): The next fiscal year.
- up_last_7_days_next_year (float64): The number of upward EPS estimate revisions over the last 7 days for the next year.
- up_last_30_days_current_qtr (float64): The number of upward EPS estimate revisions over the last 30 days for the current quarter.
- up_last_30_days_next_qtr (float64): The number of upward EPS estimate revisions over the last 30 days for the next quarter.
- up_last_30_days_current_year (float64): The number of upward EPS estimate revisions over the last 30 days for the current year.
- up_last_30_days_next_year (float64): The number of upward EPS estimate revisions over the last 30 days for the next year.
- down_last_7_days_current_qtr (null): Always null; no 7-day downward-revision data was collected for the current quarter.
- down_last_7_days_next_qtr (null): Always null; no 7-day downward-revision data was collected for the next quarter.
- down_last_7_days_current_year (null): Always null; no 7-day downward-revision data was collected for the current year.
- down_last_7_days_next_year (null): Always null; no 7-day downward-revision data was collected for the next year.
- down_last_30_days_current_qtr (float64): The number of downward EPS estimate revisions over the last 30 days for the current quarter.
- down_last_30_days_next_qtr (float64): The number of downward EPS estimate revisions over the last 30 days for the next quarter.
- down_last_30_days_current_year (float64): The number of downward EPS estimate revisions over the last 30 days for the current year.
- down_last_30_days_next_year (float64): The number of downward EPS estimate revisions over the last 30 days for the next year.
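The up/down counts can be combined into a simple net-revisions signal. Below is a minimal sketch; `net_revisions` and the sample record are illustrative helpers written for this card, not part of the dataset, and the field semantics are inferred from the column names.

```python
# Hypothetical record mirroring the schema above (values are illustrative)
record = {
    "symbol": "AAPL",
    "up_last_30_days_current_qtr": 5.0,
    "down_last_30_days_current_qtr": 2.0,
}

def net_revisions(rec: dict, horizon: str = "current_qtr") -> float:
    """Upward minus downward EPS estimate revisions over the last 30 days."""
    up = rec.get(f"up_last_30_days_{horizon}") or 0.0
    down = rec.get(f"down_last_30_days_{horizon}") or 0.0
    return up - down

print(net_revisions(record))  # 3.0
```

The `or 0.0` fallback also covers the always-null `down_last_7_days_*` columns if the same pattern is applied to the 7-day fields.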
### Data Splits
The dataset has a single split, `train`.
## Dataset Creation
### Curation Rationale
The eps-revisions-sp500 dataset was created to provide information on EPS revisions for companies in the S&P 500 index.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from reliable sources and normalized for consistency.
### Annotations
#### Annotation Process
[N/A]
#### Annotators
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The eps-revisions-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The eps-revisions-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, eps-revisions-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
open-source-metrics/stars | ---
dataset_info:
features:
- name: login
dtype: string
- name: dates
dtype: string
splits:
- name: transformers
num_bytes: 4634694
num_examples: 124165
- name: peft
num_bytes: 505128
num_examples: 13577
- name: evaluate
num_bytes: 67182
num_examples: 1795
- name: huggingface_hub
num_bytes: 61776
num_examples: 1643
- name: accelerate
num_bytes: 256774
num_examples: 6881
- name: datasets
num_bytes: 688767
num_examples: 18345
- name: optimum
num_bytes: 79299
num_examples: 2115
- name: pytorch_image_models
num_bytes: 1103092
num_examples: 29579
- name: gradio
num_bytes: 1059755
num_examples: 28460
- name: tokenizers
num_bytes: 310614
num_examples: 8370
- name: diffusers
num_bytes: 830006
num_examples: 22298
- name: safetensors
num_bytes: 89074
num_examples: 2400
- name: sentence_transformers
num_bytes: 509768
num_examples: 13656
- name: candle
num_bytes: 490937
num_examples: 13246
- name: text_generation_inference
num_bytes: 286540
num_examples: 7689
- name: chat_ui
num_bytes: 215369
num_examples: 5787
- name: hub_docs
num_bytes: 8651
num_examples: 230
download_size: 6823699
dataset_size: 11197426
configs:
- config_name: default
data_files:
- split: transformers
path: data/transformers-*
- split: peft
path: data/peft-*
- split: evaluate
path: data/evaluate-*
- split: huggingface_hub
path: data/huggingface_hub-*
- split: accelerate
path: data/accelerate-*
- split: datasets
path: data/datasets-*
- split: optimum
path: data/optimum-*
- split: pytorch_image_models
path: data/pytorch_image_models-*
- split: gradio
path: data/gradio-*
- split: tokenizers
path: data/tokenizers-*
- split: diffusers
path: data/diffusers-*
- split: safetensors
path: data/safetensors-*
- split: sentence_transformers
path: data/sentence_transformers-*
- split: candle
path: data/candle-*
- split: text_generation_inference
path: data/text_generation_inference-*
- split: chat_ui
path: data/chat_ui-*
- split: hub_docs
path: data/hub_docs-*
---
|
timm/eurosat-rgb | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AnnualCrop
'1': Forest
'2': HerbaceousVegetation
'3': Highway
'4': Industrial
'5': Pasture
'6': PermanentCrop
'7': Residential
'8': River
'9': SeaLake
- name: image_id
dtype: string
splits:
- name: train
num_bytes: 55332279
num_examples: 16200
- name: validation
num_bytes: 18472972.2
num_examples: 5400
- name: test
num_bytes: 18625106.4
num_examples: 5400
download_size: 92078756
dataset_size: 92430357.6
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
license: mit
size_categories:
- 10K<n<100K
task_categories:
- image-classification
---
# EuroSat (RGB)
## Description
A dataset based on Sentinel-2 satellite images covering 13 spectral bands and consisting of 10 classes with 27000 labeled and geo-referenced samples. This is the RGB version of the dataset with visible bands encoded as JPEG images.
The original dataset does not define default splits; the train, validation, and test splits here follow the definitions in https://github.com/google-research/google-research/blob/master/remote_sensing_representations/README.md#dataset-splits
* Website: https://github.com/phelber/eurosat
* Paper: https://arxiv.org/abs/1709.00029
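When loaded with the `datasets` library, labels come back as integers. A minimal sketch of mapping them to the class names declared in the schema above (`label_name` is an illustrative helper, not part of the dataset):

```python
# Class names in the exact order declared in the card's label schema
CLASS_NAMES = [
    "AnnualCrop", "Forest", "HerbaceousVegetation", "Highway",
    "Industrial", "Pasture", "PermanentCrop", "Residential",
    "River", "SeaLake",
]

def label_name(label_id: int) -> str:
    """Translate an integer class label into its EuroSAT class name."""
    return CLASS_NAMES[label_id]

print(label_name(8))  # River
```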
## Citation
```bibtex
@article{helber2019eurosat,
title={Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification},
author={Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian},
journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
year={2019},
publisher={IEEE}
}
``` |
ExAi/Code-Golang-QA-2k-dpo | ---
license: apache-2.0
tags:
- Golang
- Code
- Go
- QA
size_categories:
- 1K<n<10K
---
# Code-Golang-QA-2k
This small dataset comprises ~1.8k DPO (Direct Preference Optimization) entries related to the Go programming language. It is designed to serve as a resource for individuals looking to enhance machine learning models, create chatbots, or simply to provide a comprehensive knowledge base for developers working with Go.
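Each entry pairs a question with a chosen and a rejected answer (the exact JSON layout is shown under Data Format below). A minimal sketch of turning such entries into (prompt, chosen, rejected) triples for DPO training; the sample values are illustrative:

```python
import json

# Inlined sample mimicking the dataset's JSON layout (illustrative values)
raw = '''[
  {"question": "How do you reverse a slice in Go?",
   "chosen_answer": "Swap elements with a two-pointer loop...",
   "rejected_answer": "Sort the slice in descending order..."}
]'''

# Each entry becomes a (prompt, chosen, rejected) triple
triples = [
    (e["question"], e["chosen_answer"], e["rejected_answer"])
    for e in json.loads(raw)
]
print(len(triples))  # 1
```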
## Data Format
```json
[
{
"question": "How do you create a new RESTful API endpoint using Gin?",
"chosen_answer": "Creating a new RESTful API endpoint using the Gin framework in Go involves several steps. The Gin framework is a high-performance HTTP web framework that allows you to build web applications and microservices efficiently. It's designed to be faster and more efficient than other frameworks, making it an excellent choice for building RESTful APIs. Here's a step-by-step guide to creating a new RESTful API endpoint using Gin.\n\n### Step 1: Install Gin\n\nFirst, you need to install the Gin package if you haven't already. You can do this by running the following command:\n\n```bash\ngo get -u github.com/gin-gonic/gin\n```\n\nThis command will download and install the Gin package and its dependencies.\n\n### Step 2: Import Gin\n\nIn your Go file where you want to create the API endpoint, import the Gin package at the beginning:\n\n```go\npackage main\n\nimport \"github.com/gin-gonic/gin\"\n```\n\n### Step 3: Create a Gin Router\n\nNext, you create a router instance which will be used to handle the HTTP requests.\n\n```go\nfunc main() {\n router := gin.Default()\n}\n```\n\n### Step 4: Define the Endpoint\n\nNow, define the RESTful API endpoint. Let's say you want to create a simple `GET` endpoint that returns a message. You would use the `GET` method of the router to define the path and the function that will handle requests to that path.\n\n```go\nfunc main() {\n router := gin.Default()\n \n router.GET(\"/message\", func(c *gin.Context) {\n c.JSON(200, gin.H{\n \"message\": \"Hello, World!\",\n })\n })\n}\n```\n\nIn this example, when a client sends a `GET` request to `/message`, the function will return a JSON response with a message.\n\n### Step 5: Run the Server\n\nFinally, you need to start the HTTP server and specify the port it should listen on. 
This is done using the `Run` method of the router.\n\n```go\nfunc main() {\n router := gin.Default()\n\n router.GET(\"/message\", func(c *gin.Context) {\n c.JSON(200, gin.H{\n \"message\": \"Hello, World!\",\n })\n })\n\n router.Run(\":8080\") // Listen and serve on 0.0.0.0:8080\n}\n```\n\nThis will start the server, and it will listen for requests on port 8080. You can specify a different port by changing the argument to `Run`.\n\n### Testing the Endpoint\n\nYou can test the RESTful API endpoint using a tool like `curl` or Postman. For example, to test the `/message` endpoint with `curl`, you would run:\n\n```bash\ncurl http://localhost:8080/message\n```\n\nThis should return a JSON response like:\n\n```json\n{\"message\": \"Hello, World!\"}\n```\n\n### Conclusion\n\nBy following these steps, you've created a simple RESTful API endpoint using the Gin framework in Go. Gin makes it easy to define and handle routes, making it a powerful tool for building efficient web applications and services. You can expand on this by adding more endpoints, implementing middleware for authentication or logging, and integrating with databases to create full-fledged applications.",
"rejected_answer": "To create a new RESTful API endpoint using Gin, a popular Go web framework, follow these steps:\n\n1. Install Gin\n\nFirst, make sure you have Gin installed. If you haven't installed it yet, run the following command:\n\n```go\ngo get -u github.com/gin-gonic/gin\n```\n\n2. Create a new Go file\n\nCreate a new Go file (for example, `main.go`) to implement your API.\n\n3. Import the Gin package\n\nAt the beginning of your Go file, import the Gin package:\n\n```go\nimport \"github.com/gin-gonic/gin\"\n```\n\n4. Initialize a Gin router\n\nCreate a new Gin router instance:\n\n```go\nr := gin.Default()\n```\n\n5. Create the API endpoint\n\nNow, you can create a new API endpoint using the Gin router instance. Here's an example of creating a GET endpoint:\n\n```go\nr.GET(\"/api\", func(c *gin.Context) {\n // Your API logic goes here\n})\n```\n\nReplace `/api` with the endpoint path you want to create.\n\n6. Add response to the endpoint\n\nAdd the response to your endpoint. Here's an example of returning a JSON response:\n\n```go\nr.GET(\"/api\", func(c *gin.Context) {\n c.JSON(http.StatusOK, gin.H{\n \"message\": \"Hello, World!\",\n })\n})\n```\n\n7. Run the API server\n\nFinally, run your API server:\n\n```go\nr.Run(\":8080\")\n```\n\nReplace `:8080` with the port number you want to use.\n\nThe complete example:\n\n```go\npackage main\n\nimport \"github.com/gin-gonic/gin\"\n\nfunc main() {\n r := gin.Default()\n\n r.GET(\"/api\", func(c *gin.Context) {\n c.JSON(http.StatusOK, gin.H{\n \"message\": \"Hello, World!\",\n })\n })\n\n r.Run(\":8080\")\n}\n```\n\nThis example creates a new RESTful API endpoint at `/api` that returns a JSON response with the message \"Hello, World!\"."
}
...
]
``` |
cheafdevo56/Influential_CitedNegs_5_Percent | ---
dataset_info:
features:
- name: query
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: title
dtype: string
- name: pos
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: title
dtype: string
- name: neg
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: score
dtype: int64
- name: title
dtype: string
splits:
- name: train
num_bytes: 173083913.1
num_examples: 45000
- name: validation
num_bytes: 19231545.9
num_examples: 5000
download_size: 115596575
dataset_size: 192315459.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
CyberHarem/gr_mk23_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of gr_mk23/GrMk23/Mk23 (Girls' Frontline)
This is the dataset of gr_mk23/GrMk23/Mk23 (Girls' Frontline), containing 96 images and their tags.
The core tags of this character are `long_hair, brown_hair, breasts, multicolored_hair, blue_eyes, streaked_hair, heterochromia, earrings, animal_ears, red_eyes, heart_earrings, tail, bangs, cat_ears, medium_breasts, pink_hair, cat_tail, fang, two_side_up`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 96 | 128.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gr_mk23_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 96 | 73.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gr_mk23_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 230 | 157.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gr_mk23_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 96 | 113.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gr_mk23_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 230 | 222.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gr_mk23_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/gr_mk23_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; distinct outfits may be identifiable from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, heart, jewelry, looking_at_viewer, solo, white_shirt, blush, collared_shirt, white_background, hairclip, open_mouth, simple_background, smile, long_sleeves, off_shoulder, pleated_skirt, purple_bowtie, x_hair_ornament, black_choker, plaid_skirt, purple_skirt, open_jacket, pink_jacket, school_uniform, very_long_hair |
| 1 | 5 |  |  |  |  |  | 1girl, choker, heart, jewelry, looking_at_viewer, nail_polish, open_mouth, solo, cleavage, large_breasts, white_shirt, blush, bowtie, hairclip, pink_eyes, x_hair_ornament, :d, collarbone, long_sleeves, pink_nails, purple_skirt, school_uniform, skin_fang, sleeves_past_wrists |
| 2 | 17 |  |  |  |  |  | 1girl, heart, jewelry, looking_at_viewer, solo, smile, blush, cleavage, simple_background, jingle_bell, neck_bell, fingerless_gloves, white_background, open_mouth, white_thighhighs, black_gloves, choker, sitting, skirt, tail_bell, bra, clothing_cutout, collarbone |
| 3 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, solo, black_gloves, china_dress, cleavage, double_bun, blush, jewelry, smile, white_dress, heart, covered_navel, open_mouth, very_long_hair, bare_shoulders, clothing_cutout, handgun, heckler_&_koch, single_elbow_glove, uneven_gloves |
| 4 | 10 |  |  |  |  |  | blush, jewelry, wedding_dress, white_dress, white_gloves, 1girl, bridal_veil, elbow_gloves, heart, solo, cleavage, looking_at_viewer, smile, strapless_dress, tiara, official_alternate_costume, open_mouth, bride, hair_ornament, pink_rose, bare_shoulders, bridal_garter, holding_bouquet, very_long_hair, full_body, gun, heckler_&_koch, simple_background, white_background, white_footwear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | heart | jewelry | looking_at_viewer | solo | white_shirt | blush | collared_shirt | white_background | hairclip | open_mouth | simple_background | smile | long_sleeves | off_shoulder | pleated_skirt | purple_bowtie | x_hair_ornament | black_choker | plaid_skirt | purple_skirt | open_jacket | pink_jacket | school_uniform | very_long_hair | choker | nail_polish | cleavage | large_breasts | bowtie | pink_eyes | :d | collarbone | pink_nails | skin_fang | sleeves_past_wrists | jingle_bell | neck_bell | fingerless_gloves | white_thighhighs | black_gloves | sitting | skirt | tail_bell | bra | clothing_cutout | china_dress | double_bun | white_dress | covered_navel | bare_shoulders | handgun | heckler_&_koch | single_elbow_glove | uneven_gloves | wedding_dress | white_gloves | bridal_veil | elbow_gloves | strapless_dress | tiara | official_alternate_costume | bride | hair_ornament | pink_rose | bridal_garter | holding_bouquet | full_body | gun | white_footwear |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:----------|:--------------------|:-------|:--------------|:--------|:-----------------|:-------------------|:-----------|:-------------|:--------------------|:--------|:---------------|:---------------|:----------------|:----------------|:------------------|:---------------|:--------------|:---------------|:--------------|:--------------|:-----------------|:-----------------|:---------|:--------------|:-----------|:----------------|:---------|:------------|:-----|:-------------|:-------------|:------------|:----------------------|:--------------|:------------|:--------------------|:-------------------|:---------------|:----------|:--------|:------------|:------|:------------------|:--------------|:-------------|:--------------|:----------------|:-----------------|:----------|:-----------------|:---------------------|:----------------|:----------------|:---------------|:--------------|:---------------|:------------------|:--------|:-----------------------------|:--------|:----------------|:------------|:----------------|:------------------|:------------|:------|:-----------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | X | | | X | | | | X | | | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 17 |  |  |  |  |  | X | X | X | X | X | | X | | X | | X | X | X | | | | | | | | | | | | | X | | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | X | X | X | X | | X | | | | X | | X | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | X | X | X | X | | X | | X | | X | X | X | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
hpe-ai/customer-complaints-train.csv | ---
license: apache-2.0
---
|
shivam9980/cnn-hindi | ---
license: apache-2.0
---
|
AdapterOcean/code_instructions_standardized_cluster_6 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 78950354
num_examples: 7994
download_size: 22618267
dataset_size: 78950354
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code_instructions_standardized_cluster_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Oakh/oakh2 | ---
license: other
---
|
CyberHarem/aether_foundation_employee_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of aether_foundation_employee/エーテル財団職員 (Pokémon)
This is the dataset of aether_foundation_employee/エーテル財団職員 (Pokémon), containing 185 images and their tags.
The core tags of this character are `dark_skin, short_hair, black_hair, dark-skinned_female, hat, breasts, white_headwear, cabbie_hat, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 185 | 183.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aether_foundation_employee_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 185 | 110.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aether_foundation_employee_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 478 | 240.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aether_foundation_employee_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 185 | 163.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aether_foundation_employee_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 478 | 319.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aether_foundation_employee_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/aether_foundation_employee_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1boy, 1girl, blush, hetero, penis, sex, sweat, vaginal, white_gloves, nude, open_mouth, girl_on_top, navel, nipples, cum_in_pussy, smile, solo_focus, bar_censor, large_breasts, spread_legs, squatting_cowgirl_position, thighhighs |
| 1 | 9 |  |  |  |  |  | 1boy, 1girl, clothed_female_nude_male, hetero, penis, short_sleeves, testicles, white_gloves, gloved_handjob, blush, cum, dark-skinned_male, interracial, mosaic_censoring, open_mouth, bangs, pouch, sweat |
| 2 | 9 |  |  |  |  |  | 1boy, 1girl, blush, hetero, penis, solo_focus, white_gloves, short_sleeves, censored, open_mouth, tongue_out, fellatio, cum_in_mouth, facial, large_breasts, nude, simple_background, white_background |
| 3 | 9 |  |  |  |  |  | 1girl, short_sleeves, white_gloves, shoes, white_footwear, simple_background, white_background, looking_at_viewer, open_mouth, pokemon_(creature), white_pantyhose, full_body, pouch, short_jumpsuit, thigh_strap |
| 4 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, solo, white_gloves, holding_poke_ball, poke_ball_(basic), short_sleeves, blush, closed_mouth, pantyhose, white_background, bangs, hand_on_hip, simple_background, smile, uniform |
| 5 | 5 |  |  |  |  |  | 1girl, simple_background, solo, white_background, white_bikini, gigantic_breasts, looking_at_viewer, smile, huge_breasts, navel |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1boy | 1girl | blush | hetero | penis | sex | sweat | vaginal | white_gloves | nude | open_mouth | girl_on_top | navel | nipples | cum_in_pussy | smile | solo_focus | bar_censor | large_breasts | spread_legs | squatting_cowgirl_position | thighhighs | clothed_female_nude_male | short_sleeves | testicles | gloved_handjob | cum | dark-skinned_male | interracial | mosaic_censoring | bangs | pouch | censored | tongue_out | fellatio | cum_in_mouth | facial | simple_background | white_background | shoes | white_footwear | looking_at_viewer | pokemon_(creature) | white_pantyhose | full_body | short_jumpsuit | thigh_strap | solo | holding_poke_ball | poke_ball_(basic) | closed_mouth | pantyhose | hand_on_hip | uniform | white_bikini | gigantic_breasts | huge_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------|:--------|:--------|:---------|:--------|:------|:--------|:----------|:---------------|:-------|:-------------|:--------------|:--------|:----------|:---------------|:--------|:-------------|:-------------|:----------------|:--------------|:-----------------------------|:-------------|:---------------------------|:----------------|:------------|:-----------------|:------|:--------------------|:--------------|:-------------------|:--------|:--------|:-----------|:-------------|:-----------|:---------------|:---------|:--------------------|:-------------------|:--------|:-----------------|:--------------------|:---------------------|:------------------|:------------|:-----------------|:--------------|:-------|:--------------------|:--------------------|:---------------|:------------|:--------------|:----------|:---------------|:-------------------|:---------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | X | | X | | X | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | X | X | X | | | | X | X | X | | | | | | X | | X | | | | | X | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | | X | | | | | | | X | | X | | | | | | | | | | | | | X | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | | X | X | | | | | | X | | | | | | | X | | | | | | | | X | | | | | | | X | | | | | | | X | X | | | X | | | | | | X | X | X | X | X | X | X | | | |
| 5 | 5 |  |  |  |  |  | | X | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | | | X | | | | | | X | | | | | | | X | X | X |
|
Nexdata/In-Car_Noise_Data_by_Microphone_and_Mobile_Phone | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/In-Car_Noise_Data_by_Microphone_and_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/233?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
531 hours of noise data recorded in in-car scenes. It covers various vehicle models, road types, vehicle speeds, and window open/closed conditions. Six recording points are placed to capture the noise at different positions in the vehicle and accurately match vehicle noise modeling requirements.
For more details, please refer to the link: https://www.nexdata.ai/datasets/233?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, noisy-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Noise Data
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
hirundo-io/vinbigdata-mislabels | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': all_bboxes
'1': relabeled_bboxes
'2': suspect_bboxes
- name: filename
dtype: string
splits:
- name: train
num_bytes: 14002024.0
num_examples: 426
download_size: 14006028
dataset_size: 14002024.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vinbigdata-mislabels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-miscellaneous-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 137981
num_examples: 783
download_size: 92589
dataset_size: 137981
---
# Dataset Card for "mmlu-miscellaneous-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gayane/esol | ---
task_categories:
- text-classification
- text-generation
tags:
- chemistry
- biology
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: smiles
dtype: string
- name: activity
dtype: float64
splits:
- name: train
num_bytes: 31241
num_examples: 902
- name: validation
num_bytes: 3955
num_examples: 113
- name: test
num_bytes: 3737
num_examples: 113
download_size: 28948
dataset_size: 38933
---
|
BangumiBase/sakurasounopetnakanojo | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Sakurasou No Pet Na Kanojo
This is the image base of the bangumi Sakurasou no Pet na Kanojo. We detected 24 characters and 4,107 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1328 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 405 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 313 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 33 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 18 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 46 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 47 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 74 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 580 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 105 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 43 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 523 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 43 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 71 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 11 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 21 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 139 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 13 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 9 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 28 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 9 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 20 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 9 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 219 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
timxv/xPrompts | ---
license: unknown
---
|
jeongseon/cp-final-project-preprocessed2 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 48027206752
num_examples: 50000
- name: valid
num_bytes: 2444583512
num_examples: 2545
download_size: 10102545245
dataset_size: 50471790264
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
youndukn/ROMANTIC_2048 | ---
license: apache-2.0
---
|
Shuv001/proc_r50 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 454411979.0
num_examples: 50000
download_size: 324957581
dataset_size: 454411979.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Confirm-Labs/pile_scan_4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: token_short
dtype: string
- name: token_long
dtype: string
- name: p_short
dtype: float32
- name: p_long
dtype: float32
- name: JS
dtype: float32
- name: long_ids
sequence: int32
- name: short_max_id
dtype: int64
- name: long_max_id
dtype: int64
- name: context
dtype: string
- name: context_ids
sequence: int32
- name: p_delta_max
dtype: float32
- name: logit_excite_max
dtype: float32
- name: logit_inhibit_max
dtype: float32
- name: batch
dtype: int64
- name: sample
dtype: int64
- name: start
dtype: int64
splits:
- name: scan
num_bytes: 466393218
num_examples: 1874497
download_size: 0
dataset_size: 466393218
---
# scan_4
See https://confirmlabs.org/posts/catalog.html for details.
- `text`: The long prompt text with the first token surrounded by square brackets.
- `token_short`: The model's prediction for the most likely token given the short prompt.
- `token_long`: The model's prediction for the most likely token given the long prompt.
- `p_short`: The model's prediction for the probability of `token_short`.
- `p_long`: The model's prediction for the probability of `token_long`.
- `JS`: The Jensen-Shannon divergence between the model's distribution over tokens given the short prompt and the model's distribution over tokens given the long prompt.
- `long_ids`: The ids of the tokens in the long prompt.
- `short_max_id`: The id of `token_short`.
- `long_max_id`: The id of `token_long`.
- `context`: The surrounding text of the prompt.
- `context_ids`: The ids of `context`.
- `p_delta_max`: The largest difference in probability for any token between the short and long prompt.
- `logit_excite_max`: The largest increase in logit for any token between the short and long prompt.
- `logit_inhibit_max`: The largest decrease in logit for any token between the short and long prompt.
- `batch`: The batch number of the prompt.
- `sample`: The sample number of the prompt.
- `start`: The start index of the prompt in the sample. |
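The `JS` field above is a Jensen-Shannon divergence between two discrete token distributions. As a minimal illustrative sketch (the function name `js_divergence` and this pure-Python implementation are our own, not the repository's actual code), the value can be computed from the standard definition JSD(P, Q) = ½·KL(P‖M) + ½·KL(Q‖M) with M = (P + Q)/2:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (natural log) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence; terms with a_i == 0 contribute zero.
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions diverge by zero; disjoint ones reach the maximum, log 2.
print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(js_divergence([1.0, 0.0], [0.0, 1.0]))  # log(2) ≈ 0.6931
```

Note that the dataset does not document which logarithm base was used, so absolute values may differ from this sketch by a constant factor.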
SauravMaheshkar/pareto-ogbn-arxiv | ---
size_categories:
- 1K<n<10K
task_categories:
- graph-ml
license: cc
---
## Dataset Information
| # Nodes | # Edges | # Features |
|:-------:|:---------:|:----------:|
| 169,343 | 1,166,243 | 128 |
Pre-processed as per the official codebase of https://arxiv.org/abs/2210.02016
## Citations
```
@inproceedings{ju2023multi,
title={Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization},
author={Ju, Mingxuan and Zhao, Tong and Wen, Qianlong and Yu, Wenhao and Shah, Neil and Ye, Yanfang and Zhang, Chuxu},
booktitle={International Conference on Learning Representations},
year={2023}
}
``` |
dmrau/trec_dl20 | ---
license: unknown
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 2816
num_examples: 54
- name: corpus
num_bytes: 4099348
num_examples: 10446
download_size: 2356188
dataset_size: 4102164
---
|
Atom007/mc4-japanese-data | ---
license: apache-2.0
task_categories:
- conversational
language:
- ja
---
Reference https://huggingface.co/datasets/mc4 |
liataynat/Yoimiya2 | ---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: file_path
dtype: string
- name: repo_id
dtype: string
- name: token_count
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 47904101
num_examples: 8276
download_size: 15604911
dataset_size: 47904101
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jorgeortizfuentes/spanish_books | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: SpanishBooks
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 40822979419
num_examples: 87967
download_size: 25042031556
dataset_size: 40822979419
---
# Spanish Books
## Dataset Description
- **Total of books:** 87,967
### Dataset Summary
Dataset of books in Spanish crawled from the web and from torrents.
### Preprocessing
Preprocessing performed by [spanish_nlp](https://github.com/jorgeortizfuentes/spanish_nlp).
### Licensing Information
The dataset is available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Some books may be subject to copyright. Use for academic purposes only.
### Citation Information
```
@misc{ortiz2022esbooks,
title={Crawled Spanish Books},
author={Jorge Ortiz-Fuentes},
year={2022},
publisher= {Hugging Face}
}
```
|
cetacean/ttt | ---
license: unknown
---
|
SodaDQ/cache_test | ---
dataset_info:
features:
- name: sodacl
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2075
num_examples: 5
- name: test
num_bytes: 145801
num_examples: 308
download_size: 74408
dataset_size: 147876
---
# Dataset Card for "cache_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
McSpicyWithMilo/infographic-instructions | ---
language:
- en
---
# Dataset Card for Dataset Name
This dataset contains manually generated instructions to modify an infographic.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
wpicard/nostradamus-propheties | ---
annotations_creators:
- no-annotation
language_creators: []
language:
- en
language_bcp47:
- en-US
license:
- unknown
multilinguality:
- monolingual
pretty_name: nostradamus-propheties
size_categories:
- unknown
source_datasets: []
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# Dataset Card for "nostradamus-propheties"
## Dataset Description
### Dataset Summary
The Nostradamus propheties dataset is a set of structured files containing the "Propheties" by Nostradamus, translated into modern English.
The original text consists of 10 "Centuries", every century containing 100 numbered quatrains.
In the dataset, every century is a separate file named `century**.json`. For instance, all the quatrains of Century I are in the file `century01.json`.
The century and the quatrain number are kept for every quatrain. Every quatrain has been split into four separate lines. For example, the second quatrain of Century I is stored in `century01.json` as follows:
```
{
"century":1,
"index":2,
"line1":"The wand in the hand is placed in the middle of the tripod's legs.",
"line2":"With water he sprinkles both the hem of his garment and his foot.",
"line3":"A voice, fear: he trembles in his robes.",
"line4":"Divine splendor; the God sits nearby."
}
```
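Records following this schema are easy to work with programmatically. A minimal sketch of parsing one record and reassembling the four lines into a quatrain (the variable names are our own, and the record is taken from the documented example rather than read from the actual file):

```python
import json

# Hypothetical record matching the schema documented above.
record = json.loads("""
{
  "century": 1,
  "index": 2,
  "line1": "The wand in the hand is placed in the middle of the tripod's legs.",
  "line2": "With water he sprinkles both the hem of his garment and his foot.",
  "line3": "A voice, fear: he trembles in his robes.",
  "line4": "Divine splendor; the God sits nearby."
}
""")

# Reassemble the four separate lines into a single quatrain.
quatrain = "\n".join(record[f"line{i}"] for i in range(1, 5))
print(f"Century {record['century']}, quatrain {record['index']}:")
print(quatrain)
```

Loading a full century would follow the same pattern, reading `century01.json` (and siblings) with `json.load` instead of parsing an inline string.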
|
Ayon128/github-issues | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: 'null'
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 14368648
num_examples: 2500
download_size: 4077061
dataset_size: 14368648
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingartists/the-avalanches | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/the-avalanches"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.123553 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
  <div style="display:block; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e22f7806a402c82b09336cb5cf79a618.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-avalanches">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Avalanches</div>
<a href="https://genius.com/artists/the-avalanches">
<div style="text-align: center; font-size: 14px;">@the-avalanches</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/the-avalanches).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-avalanches")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|111| -| -|
The `train` split can easily be divided into `train`, `validation` and `test` with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-avalanches")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
allenai/wcep_sparse_oracle | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except that the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set to the original number of input documents for each example
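The `"oracle"` top-k strategy above can be sketched in a few lines (a minimal illustration, not the actual pipeline; `oracle_top_k`, `ranked_ids`, and `original_docs` are hypothetical names):

```python
def oracle_top_k(ranked_ids, original_docs):
    """Keep only the top-k retrieved documents, where k is the number
    of input documents the example originally had ("oracle" top-k)."""
    k = len(original_docs)
    return ranked_ids[:k]

# Hypothetical example: the original example had 3 source documents,
# so only the top-3 retrieved documents are kept.
retrieved = ["d7", "d2", "d9", "d1", "d4"]
original = ["a", "b", "c"]
print(oracle_top_k(retrieved, original))  # → ['d7', 'd2', 'd9']
```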
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.6443 | 0.6443 |
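Because the oracle strategy sets `k` equal to the number of relevant (original input) documents `R`, Precision@k, Recall@k, and R-precision necessarily coincide, which is why the last three columns agree. A quick check with hypothetical numbers:

```python
# Hypothetical ranking: with oracle top-k, the cutoff k equals the number
# of relevant documents R, so all three metrics share the same value.
relevant = {"d1", "d2", "d3", "d4"}      # R = 4 relevant documents
retrieved = ["d1", "d9", "d2", "d7"]     # top-k ranked list with k = R = 4
hits = sum(1 for d in retrieved if d in relevant)
precision_at_k = hits / len(retrieved)   # hits / k
recall_at_k = hits / len(relevant)       # hits / R
r_precision = hits / len(relevant)       # precision at rank R
print(precision_at_k, recall_at_k, r_precision)  # → 0.5 0.5 0.5
```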
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.6280 | 0.6280 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6658 | 0.6658 | |
dmntrd/autotrain-data-autotrain-gvxvw-67e2w | ---
dataset_info:
features:
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 832
num_examples: 64
- name: validation
num_bytes: 832
num_examples: 64
download_size: 1748
dataset_size: 1664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-autotrain-gvxvw-67e2w"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sachit-sankhe/Mentoring-Dataset | ---
license: apache-2.0
---
# Boost Your Technical Mentorship with OpenLLaMA 3B Fine-Tuning
**Ready to unlock expert-level guidance on your technical journey?** Explore this question-answer dataset designed for technical mentorship, with future plans to fine-tune the powerful OpenLLaMA 3B language model for even more advanced interactions.
## Overview
* **Focus:** Technical Mentorship
* **Domains:** Currently covers 7 key areas: AI, ML, Blockchain, Cybersecurity, AppDev, WebDev, DevOps
* **Content:**
* General questions a student would ask a subject matter expert.
* Mixture of generative AI-crafted questions and human-guided refinement by industry mentors.
* **Model:** Planned future fine-tuning with OpenLLaMA 3B for enhanced language understanding and response generation
## Why Choose This Dataset?
* **Gain Insights:** Get expert-level answers to your technical questions.
* **Explore Diverse Domains:** Broaden your knowledge across several in-demand tech fields.
* **Contribute to Growth:** We encourage contributions of more domains and question variety!
* **Be Part of the Future:** Stay tuned for the exciting integration of OpenLLaMA 3B for even more powerful interactions.
## Getting Started
* Access the dataset on Hugging Face Hub (https://huggingface.co/datasets/sachit-sankhe/Mentoring-Dataset).
* Learn how the dataset will be used to fine-tune OpenLLaMA 3B in the future.
* Explore the current questions and answers to learn and spark new inquiries.
## Join the Community
We're passionate about building a comprehensive resource for technical mentorship. If you have:
* Feedback on the current dataset
* Contributions for additional domains or question types
* Ideas for further development
Don't hesitate to reach out! Let's work together to build the ultimate technical mentorship experience.
**Remember, this is just the beginning. Stay tuned for further updates and exciting developments!**
## Contributing
Contributions are welcome! |
dipteshkanojia/llama-2-qe-2023-engu-da-sys-test | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 583787
num_examples: 1075
download_size: 246758
dataset_size: 583787
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
- gu
---
# Dataset Card for "llama-2-qe-2023-engu-da-sys-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aframson/medic-test | ---
license: mit
---
|
ProgramComputer/VGGFace2-HQ | ---
task_categories:
- image-classification
license: cc-by-nc-4.0
---
https://github.com/NNNNAI/VGGFace2-HQ |
ImageEditing/inpainting_1000_samples | ---
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
- name: masked_image
dtype: image
- name: edit_prompt
dtype: string
splits:
- name: train
num_bytes: 791669845.0
num_examples: 1000
download_size: 791109122
dataset_size: 791669845.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
akkasi/sem_eval_2018 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: float64
- name: label2idx
dtype: string
- name: idx2label
dtype: string
splits:
- name: train
num_bytes: 3419309
num_examples: 6838
- name: test
num_bytes: 1628220
num_examples: 3259
- name: validation
num_bytes: 442769
num_examples: 886
download_size: 907175
dataset_size: 5490298
---
# Dataset Card for "sem_eval_2018_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_andrijdavid__tinyllama-dare | ---
pretty_name: Evaluation run of andrijdavid/tinyllama-dare
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [andrijdavid/tinyllama-dare](https://huggingface.co/andrijdavid/tinyllama-dare)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_andrijdavid__tinyllama-dare\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-19T19:20:12.926605](https://huggingface.co/datasets/open-llm-leaderboard/details_andrijdavid__tinyllama-dare/blob/main/results_2024-01-19T19-20-12.926605.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.260218948497339,\n\
\ \"acc_stderr\": 0.03089367507715055,\n \"acc_norm\": 0.26040524383105657,\n\
\ \"acc_norm_stderr\": 0.031653815968800486,\n \"mc1\": 0.2558139534883721,\n\
\ \"mc1_stderr\": 0.015274176219283361,\n \"mc2\": 0.3901127619389903,\n\
\ \"mc2_stderr\": 0.014174485975506508\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.3643344709897611,\n \"acc_stderr\": 0.014063260279882412,\n\
\ \"acc_norm\": 0.3728668941979522,\n \"acc_norm_stderr\": 0.014131176760131163\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4700258912567218,\n\
\ \"acc_stderr\": 0.004980807231136748,\n \"acc_norm\": 0.6277633937462657,\n\
\ \"acc_norm_stderr\": 0.004824130528590593\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909284,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909284\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.18518518518518517,\n\
\ \"acc_stderr\": 0.033556772163131424,\n \"acc_norm\": 0.18518518518518517,\n\
\ \"acc_norm_stderr\": 0.033556772163131424\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123387,\n\
\ \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123387\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.24,\n\
\ \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.24,\n \
\ \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2679245283018868,\n \"acc_stderr\": 0.027257260322494845,\n\
\ \"acc_norm\": 0.2679245283018868,\n \"acc_norm_stderr\": 0.027257260322494845\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.034765901043041336,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.034765901043041336\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\"\
: 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.20809248554913296,\n\
\ \"acc_stderr\": 0.030952890217749895,\n \"acc_norm\": 0.20809248554913296,\n\
\ \"acc_norm_stderr\": 0.030952890217749895\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237655,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237655\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.251063829787234,\n \"acc_stderr\": 0.028346963777162452,\n\
\ \"acc_norm\": 0.251063829787234,\n \"acc_norm_stderr\": 0.028346963777162452\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21052631578947367,\n\
\ \"acc_stderr\": 0.0383515395439942,\n \"acc_norm\": 0.21052631578947367,\n\
\ \"acc_norm_stderr\": 0.0383515395439942\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.23448275862068965,\n \"acc_stderr\": 0.035306258743465914,\n\
\ \"acc_norm\": 0.23448275862068965,\n \"acc_norm_stderr\": 0.035306258743465914\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.25396825396825395,\n \"acc_stderr\": 0.022418042891113953,\n \"\
acc_norm\": 0.25396825396825395,\n \"acc_norm_stderr\": 0.022418042891113953\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1984126984126984,\n\
\ \"acc_stderr\": 0.03567016675276862,\n \"acc_norm\": 0.1984126984126984,\n\
\ \"acc_norm_stderr\": 0.03567016675276862\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.25161290322580643,\n\
\ \"acc_stderr\": 0.02468597928623997,\n \"acc_norm\": 0.25161290322580643,\n\
\ \"acc_norm_stderr\": 0.02468597928623997\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03010833071801162,\n\
\ \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03010833071801162\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.23,\n \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\"\
: 0.23,\n \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.24848484848484848,\n \"acc_stderr\": 0.03374402644139405,\n\
\ \"acc_norm\": 0.24848484848484848,\n \"acc_norm_stderr\": 0.03374402644139405\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.22727272727272727,\n \"acc_stderr\": 0.029857515673386407,\n \"\
acc_norm\": 0.22727272727272727,\n \"acc_norm_stderr\": 0.029857515673386407\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.21761658031088082,\n \"acc_stderr\": 0.029778663037752954,\n\
\ \"acc_norm\": 0.21761658031088082,\n \"acc_norm_stderr\": 0.029778663037752954\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2564102564102564,\n \"acc_stderr\": 0.022139081103971545,\n\
\ \"acc_norm\": 0.2564102564102564,\n \"acc_norm_stderr\": 0.022139081103971545\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.24444444444444444,\n \"acc_stderr\": 0.02620276653465215,\n \
\ \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.02620276653465215\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.02755361446786382,\n\
\ \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.02755361446786382\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2185430463576159,\n \"acc_stderr\": 0.03374235550425694,\n \"\
acc_norm\": 0.2185430463576159,\n \"acc_norm_stderr\": 0.03374235550425694\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.24036697247706423,\n \"acc_stderr\": 0.01832060732096407,\n \"\
acc_norm\": 0.24036697247706423,\n \"acc_norm_stderr\": 0.01832060732096407\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.3333333333333333,\n \"acc_stderr\": 0.03214952147802749,\n \"\
acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.03214952147802749\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.27450980392156865,\n \"acc_stderr\": 0.031321798030832904,\n \"\
acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.031321798030832904\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.26582278481012656,\n \"acc_stderr\": 0.028756799629658342,\n \
\ \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.028756799629658342\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.34977578475336324,\n\
\ \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.34977578475336324,\n\
\ \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.03768335959728745,\n\
\ \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.03768335959728745\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.24793388429752067,\n \"acc_stderr\": 0.039418975265163025,\n \"\
acc_norm\": 0.24793388429752067,\n \"acc_norm_stderr\": 0.039418975265163025\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.25766871165644173,\n \"acc_stderr\": 0.03436150827846917,\n\
\ \"acc_norm\": 0.25766871165644173,\n \"acc_norm_stderr\": 0.03436150827846917\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.30357142857142855,\n\
\ \"acc_stderr\": 0.04364226155841044,\n \"acc_norm\": 0.30357142857142855,\n\
\ \"acc_norm_stderr\": 0.04364226155841044\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.2524271844660194,\n \"acc_stderr\": 0.04301250399690875,\n\
\ \"acc_norm\": 0.2524271844660194,\n \"acc_norm_stderr\": 0.04301250399690875\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.029343114798094476,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.029343114798094476\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2886334610472541,\n\
\ \"acc_stderr\": 0.016203792703197804,\n \"acc_norm\": 0.2886334610472541,\n\
\ \"acc_norm_stderr\": 0.016203792703197804\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.2398843930635838,\n \"acc_stderr\": 0.02298959254312357,\n\
\ \"acc_norm\": 0.2398843930635838,\n \"acc_norm_stderr\": 0.02298959254312357\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.22681564245810057,\n\
\ \"acc_stderr\": 0.014005843570897897,\n \"acc_norm\": 0.22681564245810057,\n\
\ \"acc_norm_stderr\": 0.014005843570897897\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.24183006535947713,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.24183006535947713,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2765273311897106,\n\
\ \"acc_stderr\": 0.025403832978179615,\n \"acc_norm\": 0.2765273311897106,\n\
\ \"acc_norm_stderr\": 0.025403832978179615\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2654320987654321,\n \"acc_stderr\": 0.024569223600460845,\n\
\ \"acc_norm\": 0.2654320987654321,\n \"acc_norm_stderr\": 0.024569223600460845\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.24113475177304963,\n \"acc_stderr\": 0.025518731049537766,\n \
\ \"acc_norm\": 0.24113475177304963,\n \"acc_norm_stderr\": 0.025518731049537766\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2333767926988266,\n\
\ \"acc_stderr\": 0.010803108481179088,\n \"acc_norm\": 0.2333767926988266,\n\
\ \"acc_norm_stderr\": 0.010803108481179088\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.024562204314142314,\n\
\ \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.024562204314142314\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.2679738562091503,\n \"acc_stderr\": 0.017917974069594726,\n \
\ \"acc_norm\": 0.2679738562091503,\n \"acc_norm_stderr\": 0.017917974069594726\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2909090909090909,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.2909090909090909,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.1673469387755102,\n \"acc_stderr\": 0.023897144768914524,\n\
\ \"acc_norm\": 0.1673469387755102,\n \"acc_norm_stderr\": 0.023897144768914524\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n\
\ \"acc_stderr\": 0.030147775935409224,\n \"acc_norm\": 0.23880597014925373,\n\
\ \"acc_norm_stderr\": 0.030147775935409224\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3313253012048193,\n\
\ \"acc_stderr\": 0.03664314777288087,\n \"acc_norm\": 0.3313253012048193,\n\
\ \"acc_norm_stderr\": 0.03664314777288087\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.25146198830409355,\n \"acc_stderr\": 0.033275044238468436,\n\
\ \"acc_norm\": 0.25146198830409355,\n \"acc_norm_stderr\": 0.033275044238468436\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2558139534883721,\n\
\ \"mc1_stderr\": 0.015274176219283361,\n \"mc2\": 0.3901127619389903,\n\
\ \"mc2_stderr\": 0.014174485975506508\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.659037095501184,\n \"acc_stderr\": 0.0133226814359348\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.016679302501895376,\n \
\ \"acc_stderr\": 0.0035275958887224465\n }\n}\n```"
repo_url: https://huggingface.co/andrijdavid/tinyllama-dare
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|arc:challenge|25_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|gsm8k|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hellaswag|10_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T19-20-12.926605.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-19T19-20-12.926605.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- '**/details_harness|winogrande|5_2024-01-19T19-20-12.926605.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-19T19-20-12.926605.parquet'
- config_name: results
data_files:
- split: 2024_01_19T19_20_12.926605
path:
- results_2024-01-19T19-20-12.926605.parquet
- split: latest
path:
- results_2024-01-19T19-20-12.926605.parquet
---
# Dataset Card for Evaluation run of andrijdavid/tinyllama-dare
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [andrijdavid/tinyllama-dare](https://huggingface.co/andrijdavid/tinyllama-dare) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_andrijdavid__tinyllama-dare",
"harness_winogrande_5",
	split="latest")
```
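The details repository name used above appears to follow a simple convention: the model id with `/` replaced by `__`, prefixed with `details_` under the `open-llm-leaderboard` organization. A small helper sketching this pattern (inferred from this card; not an official leaderboard API):

```python
def details_repo(model_id: str) -> str:
    # Build the Open LLM Leaderboard details-repo id for a given model.
    # Pattern inferred from this card: "/" in the model id becomes "__",
    # and the result is prefixed with "details_".
    return "open-llm-leaderboard/details_" + model_id.replace("/", "__")

print(details_repo("andrijdavid/tinyllama-dare"))
# → open-llm-leaderboard/details_andrijdavid__tinyllama-dare
```

This can be handy when loading details for several models programmatically, though the naming scheme is an observed convention and may change.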
## Latest results

These are the [latest results from run 2024-01-19T19:20:12.926605](https://huggingface.co/datasets/open-llm-leaderboard/details_andrijdavid__tinyllama-dare/blob/main/results_2024-01-19T19-20-12.926605.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.260218948497339,
"acc_stderr": 0.03089367507715055,
"acc_norm": 0.26040524383105657,
"acc_norm_stderr": 0.031653815968800486,
"mc1": 0.2558139534883721,
"mc1_stderr": 0.015274176219283361,
"mc2": 0.3901127619389903,
"mc2_stderr": 0.014174485975506508
},
"harness|arc:challenge|25": {
"acc": 0.3643344709897611,
"acc_stderr": 0.014063260279882412,
"acc_norm": 0.3728668941979522,
"acc_norm_stderr": 0.014131176760131163
},
"harness|hellaswag|10": {
"acc": 0.4700258912567218,
"acc_stderr": 0.004980807231136748,
"acc_norm": 0.6277633937462657,
"acc_norm_stderr": 0.004824130528590593
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.18518518518518517,
"acc_stderr": 0.033556772163131424,
"acc_norm": 0.18518518518518517,
"acc_norm_stderr": 0.033556772163131424
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123387,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123387
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.24,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.24,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2679245283018868,
"acc_stderr": 0.027257260322494845,
"acc_norm": 0.2679245283018868,
"acc_norm_stderr": 0.027257260322494845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.034765901043041336,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.034765901043041336
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.20809248554913296,
"acc_stderr": 0.030952890217749895,
"acc_norm": 0.20809248554913296,
"acc_norm_stderr": 0.030952890217749895
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237655,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237655
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.251063829787234,
"acc_stderr": 0.028346963777162452,
"acc_norm": 0.251063829787234,
"acc_norm_stderr": 0.028346963777162452
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.0383515395439942,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.0383515395439942
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.23448275862068965,
"acc_stderr": 0.035306258743465914,
"acc_norm": 0.23448275862068965,
"acc_norm_stderr": 0.035306258743465914
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.25396825396825395,
"acc_stderr": 0.022418042891113953,
"acc_norm": 0.25396825396825395,
"acc_norm_stderr": 0.022418042891113953
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.1984126984126984,
"acc_stderr": 0.03567016675276862,
"acc_norm": 0.1984126984126984,
"acc_norm_stderr": 0.03567016675276862
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.25161290322580643,
"acc_stderr": 0.02468597928623997,
"acc_norm": 0.25161290322580643,
"acc_norm_stderr": 0.02468597928623997
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03010833071801162,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03010833071801162
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.23,
"acc_stderr": 0.042295258468165044,
"acc_norm": 0.23,
"acc_norm_stderr": 0.042295258468165044
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24848484848484848,
"acc_stderr": 0.03374402644139405,
"acc_norm": 0.24848484848484848,
"acc_norm_stderr": 0.03374402644139405
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.029857515673386407,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.029857515673386407
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.21761658031088082,
"acc_stderr": 0.029778663037752954,
"acc_norm": 0.21761658031088082,
"acc_norm_stderr": 0.029778663037752954
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2564102564102564,
"acc_stderr": 0.022139081103971545,
"acc_norm": 0.2564102564102564,
"acc_norm_stderr": 0.022139081103971545
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24444444444444444,
"acc_stderr": 0.02620276653465215,
"acc_norm": 0.24444444444444444,
"acc_norm_stderr": 0.02620276653465215
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.02755361446786382,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.02755361446786382
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2185430463576159,
"acc_stderr": 0.03374235550425694,
"acc_norm": 0.2185430463576159,
"acc_norm_stderr": 0.03374235550425694
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.24036697247706423,
"acc_stderr": 0.01832060732096407,
"acc_norm": 0.24036697247706423,
"acc_norm_stderr": 0.01832060732096407
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.03214952147802749,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.03214952147802749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.031321798030832904,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.031321798030832904
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.26582278481012656,
"acc_stderr": 0.028756799629658342,
"acc_norm": 0.26582278481012656,
"acc_norm_stderr": 0.028756799629658342
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.34977578475336324,
"acc_stderr": 0.03200736719484503,
"acc_norm": 0.34977578475336324,
"acc_norm_stderr": 0.03200736719484503
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.24427480916030533,
"acc_stderr": 0.03768335959728745,
"acc_norm": 0.24427480916030533,
"acc_norm_stderr": 0.03768335959728745
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.24793388429752067,
"acc_stderr": 0.039418975265163025,
"acc_norm": 0.24793388429752067,
"acc_norm_stderr": 0.039418975265163025
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.25766871165644173,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.25766871165644173,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.30357142857142855,
"acc_stderr": 0.04364226155841044,
"acc_norm": 0.30357142857142855,
"acc_norm_stderr": 0.04364226155841044
},
"harness|hendrycksTest-management|5": {
"acc": 0.2524271844660194,
"acc_stderr": 0.04301250399690875,
"acc_norm": 0.2524271844660194,
"acc_norm_stderr": 0.04301250399690875
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.029343114798094476,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.029343114798094476
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2886334610472541,
"acc_stderr": 0.016203792703197804,
"acc_norm": 0.2886334610472541,
"acc_norm_stderr": 0.016203792703197804
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2398843930635838,
"acc_stderr": 0.02298959254312357,
"acc_norm": 0.2398843930635838,
"acc_norm_stderr": 0.02298959254312357
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.22681564245810057,
"acc_stderr": 0.014005843570897897,
"acc_norm": 0.22681564245810057,
"acc_norm_stderr": 0.014005843570897897
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.24183006535947713,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.24183006535947713,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2765273311897106,
"acc_stderr": 0.025403832978179615,
"acc_norm": 0.2765273311897106,
"acc_norm_stderr": 0.025403832978179615
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2654320987654321,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.2654320987654321,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.24113475177304963,
"acc_stderr": 0.025518731049537766,
"acc_norm": 0.24113475177304963,
"acc_norm_stderr": 0.025518731049537766
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2333767926988266,
"acc_stderr": 0.010803108481179088,
"acc_norm": 0.2333767926988266,
"acc_norm_stderr": 0.010803108481179088
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.20588235294117646,
"acc_stderr": 0.024562204314142314,
"acc_norm": 0.20588235294117646,
"acc_norm_stderr": 0.024562204314142314
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2679738562091503,
"acc_stderr": 0.017917974069594726,
"acc_norm": 0.2679738562091503,
"acc_norm_stderr": 0.017917974069594726
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.2909090909090909,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.2909090909090909,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.1673469387755102,
"acc_stderr": 0.023897144768914524,
"acc_norm": 0.1673469387755102,
"acc_norm_stderr": 0.023897144768914524
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23880597014925373,
"acc_stderr": 0.030147775935409224,
"acc_norm": 0.23880597014925373,
"acc_norm_stderr": 0.030147775935409224
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3313253012048193,
"acc_stderr": 0.03664314777288087,
"acc_norm": 0.3313253012048193,
"acc_norm_stderr": 0.03664314777288087
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.25146198830409355,
"acc_stderr": 0.033275044238468436,
"acc_norm": 0.25146198830409355,
"acc_norm_stderr": 0.033275044238468436
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2558139534883721,
"mc1_stderr": 0.015274176219283361,
"mc2": 0.3901127619389903,
"mc2_stderr": 0.014174485975506508
},
"harness|winogrande|5": {
"acc": 0.659037095501184,
"acc_stderr": 0.0133226814359348
},
"harness|gsm8k|5": {
"acc": 0.016679302501895376,
"acc_stderr": 0.0035275958887224465
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
andersonbcdefg/wikipedia_en_tokenized | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: targets
sequence: int64
splits:
- name: train
num_bytes: 56050416752
num_examples: 9110926
download_size: 19330142269
dataset_size: 56050416752
---
# Dataset Card for "wikipedia_en_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sablo/dolly_curated | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 11515591
num_examples: 13952
- name: test
num_bytes: 573809
num_examples: 735
download_size: 7032039
dataset_size: 12089400
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dolly 15k Curated
## Dataset Details
### Dataset Description
A filtered and curated version of https://huggingface.co/datasets/databricks/databricks-dolly-15k, saved in the HF Chat format. The result is a high-quality dataset for SFT.
- **Created by:** [dctanner](https://huggingface.co/dctanner) and the team at [Sablo AI](https://sablo.ai)
- **License:** CC BY-SA 3.0
## Dataset Structure
We structure the dataset using the format commonly used as input to [Hugging Face Chat Templates](https://huggingface.co/docs/transformers/chat_templating). Where present, the `context` field text has been appended to the instruction in the OpenAI-style `Text: """..."""` format.
```json
[
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"}
]
```
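The context-appending step described above can be sketched as follows. This is a hypothetical helper, not the exact script used; it assumes the `instruction` and `context` field names of the source Dolly dataset:

```python
def merge_context(instruction: str, context: str) -> str:
    # Append the optional context to the instruction in the
    # OpenAI-style Text: """...""" format described above.
    if not context:
        return instruction
    return f'{instruction}\n\nText: """{context}"""'

def to_user_message(example: dict) -> dict:
    # Turn one source row into an HF Chat user message
    # (field names are assumptions for illustration).
    return {
        "role": "user",
        "content": merge_context(example["instruction"], example.get("context", "")),
    }
```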
## Dataset Creation
### Source Data
- **Source Dataset:** https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-multilingual and https://huggingface.co/datasets/databricks/databricks-dolly-15k
#### Data Collection and Processing
We started with https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-multilingual (the `en` split only), which is a manually curated version of https://huggingface.co/datasets/databricks/databricks-dolly-15k.
In addition to reformatting into the HF Chat style, we removed many duplicates based on the instruction text, which keeps the dataset diverse rather than repetitive.
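A minimal sketch of instruction-based deduplication (an assumed approach for illustration; the exact matching criteria used are not documented here):

```python
def dedupe_by_instruction(rows: list[dict]) -> list[dict]:
    # Keep only the first occurrence of each instruction text,
    # after light normalization (case and surrounding whitespace).
    seen: set[str] = set()
    kept: list[dict] = []
    for row in rows:
        key = row["instruction"].strip().lower()
        if key not in seen:
            seen.add(key)
            kept.append(row)
    return kept
```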
# License
- **License:** CC BY-SA 3.0
This dataset is usable for commercial purposes. Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
- Wikipedia (various pages) - https://www.wikipedia.org/ - Copyright © Wikipedia editors and contributors.
- Databricks (https://www.databricks.com) - Copyright © Databricks
# Contact
Created by [dctanner](https://huggingface.co/dctanner) and the team at [Sablo AI](https://sablo.ai) |
chriswmurphy/esperanto | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 722828315
num_examples: 4574372
download_size: 473036878
dataset_size: 722828315
---
# Dataset Card for "esperanto"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jessiedu314/FindSumAll | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1142199650
num_examples: 83254
- name: validation
num_bytes: 142621982
num_examples: 10405
- name: test
num_bytes: 142826827
num_examples: 10405
download_size: 635119558
dataset_size: 1427648459
---
# Dataset Card for "FindSumAll"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Undi95__Llamix2-MLewd-4x13B | ---
pretty_name: Evaluation run of Undi95/Llamix2-MLewd-4x13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Undi95/Llamix2-MLewd-4x13B](https://huggingface.co/Undi95/Llamix2-MLewd-4x13B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Undi95__Llamix2-MLewd-4x13B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-24T22:02:51.116526](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Llamix2-MLewd-4x13B/blob/main/results_2023-12-24T22-02-51.116526.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5658778116094036,\n\
\ \"acc_stderr\": 0.033655629521003254,\n \"acc_norm\": 0.5685687034689173,\n\
\ \"acc_norm_stderr\": 0.0343395673955478,\n \"mc1\": 0.36107711138310894,\n\
\ \"mc1_stderr\": 0.016814312844836886,\n \"mc2\": 0.5034696577826705,\n\
\ \"mc2_stderr\": 0.015794631306390153\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5827645051194539,\n \"acc_stderr\": 0.014409825518403079,\n\
\ \"acc_norm\": 0.6100682593856656,\n \"acc_norm_stderr\": 0.01425295984889289\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6445927106154152,\n\
\ \"acc_stderr\": 0.004776583530909569,\n \"acc_norm\": 0.8317068313085043,\n\
\ \"acc_norm_stderr\": 0.003733618111043529\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5111111111111111,\n\
\ \"acc_stderr\": 0.04318275491977976,\n \"acc_norm\": 0.5111111111111111,\n\
\ \"acc_norm_stderr\": 0.04318275491977976\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5197368421052632,\n \"acc_stderr\": 0.040657710025626036,\n\
\ \"acc_norm\": 0.5197368421052632,\n \"acc_norm_stderr\": 0.040657710025626036\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n\
\ \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6113207547169811,\n \"acc_stderr\": 0.030000485448675986,\n\
\ \"acc_norm\": 0.6113207547169811,\n \"acc_norm_stderr\": 0.030000485448675986\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5902777777777778,\n\
\ \"acc_stderr\": 0.04112490974670788,\n \"acc_norm\": 0.5902777777777778,\n\
\ \"acc_norm_stderr\": 0.04112490974670788\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.43,\n\
\ \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \
\ \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5260115606936416,\n\
\ \"acc_stderr\": 0.03807301726504513,\n \"acc_norm\": 0.5260115606936416,\n\
\ \"acc_norm_stderr\": 0.03807301726504513\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.04336432707993179,\n\
\ \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.04336432707993179\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n\
\ \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4723404255319149,\n \"acc_stderr\": 0.03263597118409769,\n\
\ \"acc_norm\": 0.4723404255319149,\n \"acc_norm_stderr\": 0.03263597118409769\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.32456140350877194,\n\
\ \"acc_stderr\": 0.04404556157374768,\n \"acc_norm\": 0.32456140350877194,\n\
\ \"acc_norm_stderr\": 0.04404556157374768\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.47586206896551725,\n \"acc_stderr\": 0.041618085035015295,\n\
\ \"acc_norm\": 0.47586206896551725,\n \"acc_norm_stderr\": 0.041618085035015295\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3439153439153439,\n \"acc_stderr\": 0.024464426625596437,\n \"\
acc_norm\": 0.3439153439153439,\n \"acc_norm_stderr\": 0.024464426625596437\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n\
\ \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n\
\ \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6548387096774193,\n\
\ \"acc_stderr\": 0.02704574657353433,\n \"acc_norm\": 0.6548387096774193,\n\
\ \"acc_norm_stderr\": 0.02704574657353433\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4187192118226601,\n \"acc_stderr\": 0.03471192860518468,\n\
\ \"acc_norm\": 0.4187192118226601,\n \"acc_norm_stderr\": 0.03471192860518468\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\"\
: 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6848484848484848,\n \"acc_stderr\": 0.036277305750224094,\n\
\ \"acc_norm\": 0.6848484848484848,\n \"acc_norm_stderr\": 0.036277305750224094\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7323232323232324,\n \"acc_stderr\": 0.03154449888270285,\n \"\
acc_norm\": 0.7323232323232324,\n \"acc_norm_stderr\": 0.03154449888270285\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8082901554404145,\n \"acc_stderr\": 0.028408953626245258,\n\
\ \"acc_norm\": 0.8082901554404145,\n \"acc_norm_stderr\": 0.028408953626245258\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5256410256410257,\n \"acc_stderr\": 0.025317649726448656,\n\
\ \"acc_norm\": 0.5256410256410257,\n \"acc_norm_stderr\": 0.025317649726448656\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3111111111111111,\n \"acc_stderr\": 0.02822644674968352,\n \
\ \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.02822644674968352\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.03196876989195778,\n \
\ \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.03196876989195778\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31125827814569534,\n \"acc_stderr\": 0.03780445850526733,\n \"\
acc_norm\": 0.31125827814569534,\n \"acc_norm_stderr\": 0.03780445850526733\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7431192660550459,\n \"acc_stderr\": 0.018732492928342462,\n \"\
acc_norm\": 0.7431192660550459,\n \"acc_norm_stderr\": 0.018732492928342462\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.375,\n \"acc_stderr\": 0.033016908987210894,\n \"acc_norm\": 0.375,\n\
\ \"acc_norm_stderr\": 0.033016908987210894\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.7598039215686274,\n \"acc_stderr\": 0.02998373305591361,\n\
\ \"acc_norm\": 0.7598039215686274,\n \"acc_norm_stderr\": 0.02998373305591361\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \
\ \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n\
\ \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516303,\n \"\
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516303\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.043300437496507416,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.043300437496507416\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7116564417177914,\n \"acc_stderr\": 0.035590395316173425,\n\
\ \"acc_norm\": 0.7116564417177914,\n \"acc_norm_stderr\": 0.035590395316173425\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.04547960999764376,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.04547960999764376\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6601941747572816,\n \"acc_stderr\": 0.04689765937278134,\n\
\ \"acc_norm\": 0.6601941747572816,\n \"acc_norm_stderr\": 0.04689765937278134\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8076923076923077,\n\
\ \"acc_stderr\": 0.025819233256483706,\n \"acc_norm\": 0.8076923076923077,\n\
\ \"acc_norm_stderr\": 0.025819233256483706\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7624521072796935,\n\
\ \"acc_stderr\": 0.015218733046150193,\n \"acc_norm\": 0.7624521072796935,\n\
\ \"acc_norm_stderr\": 0.015218733046150193\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6560693641618497,\n \"acc_stderr\": 0.025574123786546665,\n\
\ \"acc_norm\": 0.6560693641618497,\n \"acc_norm_stderr\": 0.025574123786546665\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.464804469273743,\n\
\ \"acc_stderr\": 0.01668102093107665,\n \"acc_norm\": 0.464804469273743,\n\
\ \"acc_norm_stderr\": 0.01668102093107665\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.630718954248366,\n \"acc_stderr\": 0.02763417668960266,\n\
\ \"acc_norm\": 0.630718954248366,\n \"acc_norm_stderr\": 0.02763417668960266\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6430868167202572,\n\
\ \"acc_stderr\": 0.027210420375934023,\n \"acc_norm\": 0.6430868167202572,\n\
\ \"acc_norm_stderr\": 0.027210420375934023\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6419753086419753,\n \"acc_stderr\": 0.026675611926037103,\n\
\ \"acc_norm\": 0.6419753086419753,\n \"acc_norm_stderr\": 0.026675611926037103\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.43617021276595747,\n \"acc_stderr\": 0.029583452036284062,\n \
\ \"acc_norm\": 0.43617021276595747,\n \"acc_norm_stderr\": 0.029583452036284062\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4335071707953064,\n\
\ \"acc_stderr\": 0.012656810383983965,\n \"acc_norm\": 0.4335071707953064,\n\
\ \"acc_norm_stderr\": 0.012656810383983965\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5330882352941176,\n \"acc_stderr\": 0.030306257722468314,\n\
\ \"acc_norm\": 0.5330882352941176,\n \"acc_norm_stderr\": 0.030306257722468314\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5833333333333334,\n \"acc_stderr\": 0.01994491413687358,\n \
\ \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.01994491413687358\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.0469237132203465,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.0469237132203465\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6285714285714286,\n \"acc_stderr\": 0.03093285879278986,\n\
\ \"acc_norm\": 0.6285714285714286,\n \"acc_norm_stderr\": 0.03093285879278986\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7313432835820896,\n\
\ \"acc_stderr\": 0.03134328358208954,\n \"acc_norm\": 0.7313432835820896,\n\
\ \"acc_norm_stderr\": 0.03134328358208954\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n\
\ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n\
\ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.031581495393387324,\n\
\ \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.031581495393387324\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.36107711138310894,\n\
\ \"mc1_stderr\": 0.016814312844836886,\n \"mc2\": 0.5034696577826705,\n\
\ \"mc2_stderr\": 0.015794631306390153\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7537490134175217,\n \"acc_stderr\": 0.012108365307437526\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4336618650492798,\n \
\ \"acc_stderr\": 0.013650728047064681\n }\n}\n```"
repo_url: https://huggingface.co/Undi95/Llamix2-MLewd-4x13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|arc:challenge|25_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|gsm8k|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hellaswag|10_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-24T22-02-51.116526.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-24T22-02-51.116526.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- '**/details_harness|winogrande|5_2023-12-24T22-02-51.116526.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-24T22-02-51.116526.parquet'
- config_name: results
data_files:
- split: 2023_12_24T22_02_51.116526
path:
- results_2023-12-24T22-02-51.116526.parquet
- split: latest
path:
- results_2023-12-24T22-02-51.116526.parquet
---
# Dataset Card for Evaluation run of Undi95/Llamix2-MLewd-4x13B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Undi95/Llamix2-MLewd-4x13B](https://huggingface.co/Undi95/Llamix2-MLewd-4x13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__Llamix2-MLewd-4x13B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-24T22:02:51.116526](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Llamix2-MLewd-4x13B/blob/main/results_2023-12-24T22-02-51.116526.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5658778116094036,
"acc_stderr": 0.033655629521003254,
"acc_norm": 0.5685687034689173,
"acc_norm_stderr": 0.0343395673955478,
"mc1": 0.36107711138310894,
"mc1_stderr": 0.016814312844836886,
"mc2": 0.5034696577826705,
"mc2_stderr": 0.015794631306390153
},
"harness|arc:challenge|25": {
"acc": 0.5827645051194539,
"acc_stderr": 0.014409825518403079,
"acc_norm": 0.6100682593856656,
"acc_norm_stderr": 0.01425295984889289
},
"harness|hellaswag|10": {
"acc": 0.6445927106154152,
"acc_stderr": 0.004776583530909569,
"acc_norm": 0.8317068313085043,
"acc_norm_stderr": 0.003733618111043529
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5111111111111111,
"acc_stderr": 0.04318275491977976,
"acc_norm": 0.5111111111111111,
"acc_norm_stderr": 0.04318275491977976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5197368421052632,
"acc_stderr": 0.040657710025626036,
"acc_norm": 0.5197368421052632,
"acc_norm_stderr": 0.040657710025626036
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6113207547169811,
"acc_stderr": 0.030000485448675986,
"acc_norm": 0.6113207547169811,
"acc_norm_stderr": 0.030000485448675986
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5902777777777778,
"acc_stderr": 0.04112490974670788,
"acc_norm": 0.5902777777777778,
"acc_norm_stderr": 0.04112490974670788
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5260115606936416,
"acc_stderr": 0.03807301726504513,
"acc_norm": 0.5260115606936416,
"acc_norm_stderr": 0.03807301726504513
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.04336432707993179,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.04336432707993179
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4723404255319149,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.4723404255319149,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.32456140350877194,
"acc_stderr": 0.04404556157374768,
"acc_norm": 0.32456140350877194,
"acc_norm_stderr": 0.04404556157374768
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.47586206896551725,
"acc_stderr": 0.041618085035015295,
"acc_norm": 0.47586206896551725,
"acc_norm_stderr": 0.041618085035015295
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3439153439153439,
"acc_stderr": 0.024464426625596437,
"acc_norm": 0.3439153439153439,
"acc_norm_stderr": 0.024464426625596437
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6548387096774193,
"acc_stderr": 0.02704574657353433,
"acc_norm": 0.6548387096774193,
"acc_norm_stderr": 0.02704574657353433
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4187192118226601,
"acc_stderr": 0.03471192860518468,
"acc_norm": 0.4187192118226601,
"acc_norm_stderr": 0.03471192860518468
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6848484848484848,
"acc_stderr": 0.036277305750224094,
"acc_norm": 0.6848484848484848,
"acc_norm_stderr": 0.036277305750224094
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7323232323232324,
"acc_stderr": 0.03154449888270285,
"acc_norm": 0.7323232323232324,
"acc_norm_stderr": 0.03154449888270285
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8082901554404145,
"acc_stderr": 0.028408953626245258,
"acc_norm": 0.8082901554404145,
"acc_norm_stderr": 0.028408953626245258
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5256410256410257,
"acc_stderr": 0.025317649726448656,
"acc_norm": 0.5256410256410257,
"acc_norm_stderr": 0.025317649726448656
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3111111111111111,
"acc_stderr": 0.02822644674968352,
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.02822644674968352
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5882352941176471,
"acc_stderr": 0.03196876989195778,
"acc_norm": 0.5882352941176471,
"acc_norm_stderr": 0.03196876989195778
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31125827814569534,
"acc_stderr": 0.03780445850526733,
"acc_norm": 0.31125827814569534,
"acc_norm_stderr": 0.03780445850526733
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7431192660550459,
"acc_stderr": 0.018732492928342462,
"acc_norm": 0.7431192660550459,
"acc_norm_stderr": 0.018732492928342462
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.375,
"acc_stderr": 0.033016908987210894,
"acc_norm": 0.375,
"acc_norm_stderr": 0.033016908987210894
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7598039215686274,
"acc_stderr": 0.02998373305591361,
"acc_norm": 0.7598039215686274,
"acc_norm_stderr": 0.02998373305591361
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.648854961832061,
"acc_stderr": 0.04186445163013751,
"acc_norm": 0.648854961832061,
"acc_norm_stderr": 0.04186445163013751
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.03941897526516303,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.03941897526516303
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.043300437496507416,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.043300437496507416
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7116564417177914,
"acc_stderr": 0.035590395316173425,
"acc_norm": 0.7116564417177914,
"acc_norm_stderr": 0.035590395316173425
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04547960999764376,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04547960999764376
},
"harness|hendrycksTest-management|5": {
"acc": 0.6601941747572816,
"acc_stderr": 0.04689765937278134,
"acc_norm": 0.6601941747572816,
"acc_norm_stderr": 0.04689765937278134
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8076923076923077,
"acc_stderr": 0.025819233256483706,
"acc_norm": 0.8076923076923077,
"acc_norm_stderr": 0.025819233256483706
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7624521072796935,
"acc_stderr": 0.015218733046150193,
"acc_norm": 0.7624521072796935,
"acc_norm_stderr": 0.015218733046150193
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6560693641618497,
"acc_stderr": 0.025574123786546665,
"acc_norm": 0.6560693641618497,
"acc_norm_stderr": 0.025574123786546665
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.464804469273743,
"acc_stderr": 0.01668102093107665,
"acc_norm": 0.464804469273743,
"acc_norm_stderr": 0.01668102093107665
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.630718954248366,
"acc_stderr": 0.02763417668960266,
"acc_norm": 0.630718954248366,
"acc_norm_stderr": 0.02763417668960266
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6430868167202572,
"acc_stderr": 0.027210420375934023,
"acc_norm": 0.6430868167202572,
"acc_norm_stderr": 0.027210420375934023
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6419753086419753,
"acc_stderr": 0.026675611926037103,
"acc_norm": 0.6419753086419753,
"acc_norm_stderr": 0.026675611926037103
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.43617021276595747,
"acc_stderr": 0.029583452036284062,
"acc_norm": 0.43617021276595747,
"acc_norm_stderr": 0.029583452036284062
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4335071707953064,
"acc_stderr": 0.012656810383983965,
"acc_norm": 0.4335071707953064,
"acc_norm_stderr": 0.012656810383983965
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5330882352941176,
"acc_stderr": 0.030306257722468314,
"acc_norm": 0.5330882352941176,
"acc_norm_stderr": 0.030306257722468314
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.01994491413687358,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.01994491413687358
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6,
"acc_stderr": 0.0469237132203465,
"acc_norm": 0.6,
"acc_norm_stderr": 0.0469237132203465
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6285714285714286,
"acc_stderr": 0.03093285879278986,
"acc_norm": 0.6285714285714286,
"acc_norm_stderr": 0.03093285879278986
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7313432835820896,
"acc_stderr": 0.03134328358208954,
"acc_norm": 0.7313432835820896,
"acc_norm_stderr": 0.03134328358208954
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.783625730994152,
"acc_stderr": 0.031581495393387324,
"acc_norm": 0.783625730994152,
"acc_norm_stderr": 0.031581495393387324
},
"harness|truthfulqa:mc|0": {
"mc1": 0.36107711138310894,
"mc1_stderr": 0.016814312844836886,
"mc2": 0.5034696577826705,
"mc2_stderr": 0.015794631306390153
},
"harness|winogrande|5": {
"acc": 0.7537490134175217,
"acc_stderr": 0.012108365307437526
},
"harness|gsm8k|5": {
"acc": 0.4336618650492798,
"acc_stderr": 0.013650728047064681
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
multi-train/hotpotqa-train-multikilt_1107 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 88502871
num_examples: 68659
download_size: 50639711
dataset_size: 88502871
---
# Dataset Card for "hotpotqa-train-multikilt_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713054391 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 11478
num_examples: 26
download_size: 8710
dataset_size: 11478
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713054391"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Silly-Machine/TuPyE-Dataset | ---
license: cc-by-4.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- pt
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- crowdsourced
task_categories:
- text-classification
task_ids: []
pretty_name: TuPy-Dataset
language_bcp47:
- pt-BR
tags:
- hate-speech-detection
configs:
- config_name: multilabel
data_files:
- split: train
path: multilabel/multilabel_train.csv
- split: test
path: multilabel/multilabel_test.csv
- config_name: binary
data_files:
- split: train
path: binary/binary_train.csv
- split: test
path: binary/binary_test.csv
---
# Portuguese Hate Speech Expanded Dataset (TuPyE)
TuPyE, an enhanced iteration of TuPy, comprises 43,668 annotated documents selected
for hate speech detection across diverse social network contexts.
This augmented dataset adds supplementary annotations and merges datasets sourced from
[Fortuna et al. (2019)](https://aclanthology.org/W19-3510/),
[Leite et al. (2020)](https://arxiv.org/abs/2010.04543),
and [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972),
complemented by an infusion of 10,000 original documents from the [TuPy-Dataset](https://huggingface.co/datasets/Silly-Machine/TuPy-Dataset).
In light of the limited availability of annotated data in Portuguese relative to English,
TuPyE is committed to expanding and enhancing existing datasets.
This augmentation is intended to facilitate the development of advanced hate speech detection models using machine learning (ML)
and natural language processing (NLP) techniques.
This repository is organized as follows:
```sh
root.
├── binary : binary dataset (including training and testing split)
├── multilabel : multilabel dataset (including training and testing split)
└── README.md : documentation and card metadata
```
We highly recommend reading the associated research paper [TuPy-E: detecting hate speech in Brazilian Portuguese social media with a novel dataset and comprehensive analysis of models](https://arxiv.org/abs/2312.17704) to gain
comprehensive insights into the advancements integrated into this extended dataset.
## Security measures
To safeguard user identity and uphold the integrity of this dataset, all user mentions have been anonymized as "@user," and any references to external websites have been omitted.
## Annotation and voting process
In the pursuit of advancing the field of automatic hate speech detection in Portuguese, our team undertook the meticulous task of creating a comprehensive database.
This endeavor involved the integration of labeled document sets from seminal studies in the domain, specifically those conducted by Fortuna et al. (2019),
Leite et al. (2020), and Vargas et al. (2022). To ensure the highest degree of consistency and compatibility within our dataset,
we adhered to stringent guidelines for text integration, detailed as follows:
1. **Fortuna et al. (2019)**: This study presented a dataset of 5,670 tweets, each annotated by three independent evaluators to ascertain the presence or absence of hate speech. In our integration process, we adopted a simple majority-voting mechanism to classify each document, ensuring a consistent approach to hate speech identification across the dataset.
2. **Leite et al. (2020)**: The dataset from this research encompassed 21,000 tweets, annotated by 129 volunteers. Each tweet was reviewed by three different assessors. The study identified six categories of toxic speech, namely: (i) homophobia, (ii) racism, (iii) xenophobia, (iv) offensive language, (v) obscene language, and (vi) misogyny. In aligning with our operational definition of hate speech, we chose to exclude texts that solely fell under the categories of offensive and/or obscene language. Consistent with our methodology, a straightforward majority-voting process was utilized for the classification of these texts.
3. **Vargas et al. (2022)**: This research involved a compilation of 7,000 comments sourced from Instagram, each labeled by a trio of annotators. These data had already been subjected to a simple majority-voting classification, thereby obviating the need for us to apply additional text classification protocols.
Through the application of these rigorous integration guidelines, we have succeeded in establishing a robust, unified database that stands as a valuable resource for the development and refinement of automatic hate speech detection systems in the Portuguese language.
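The simple majority-voting rule used throughout the integration above can be sketched in a few lines (an illustration only; it assumes one label per annotator and, as in the source studies, three annotators per document):

```python
from collections import Counter

def majority_vote(labels):
    """Return the label assigned by the majority of annotators."""
    return Counter(labels).most_common(1)[0][0]

# Three annotators per document, as in Fortuna et al. and Leite et al.
print(majority_vote([1, 1, 0]))  # → 1
print(majority_vote([0, 0, 1]))  # → 0
```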
## Data structure
A data point comprises the tweet text (a string) along with thirteen categories; each category is assigned
a value of 0 when aggressive or hateful content is absent and 1 when it is present. These values represent the annotators'
consensus on the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism,
religious intolerance, misogyny, xenophobia, and other content. An illustration from the multilabel TuPyE dataset is depicted below:
```python
{
source:"twitter",
text: "e tem pobre de direita imbecil que ainda defendia a manutenção da política de preços atrelada ao dólar link",
researcher:"leite et al", year:2020,
aggressive: 1, hate: 1, ageism: 0, aporophobia: 1, body shame: 0, capacitism: 0, lgbtphobia: 0, political: 1, racism : 0,
religious intolerance : 0, misogyny : 0, xenophobia : 0, other : 0
}
```
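Given a record in the multilabel schema above, the hate categories that fired can be listed with a small helper (a sketch: the field names are taken from the schema shown, but `active_categories` is a hypothetical helper, not part of the dataset):

```python
# Category fields of the multilabel schema shown above (hate subtypes only).
HATE_CATEGORIES = [
    "ageism", "aporophobia", "body shame", "capacitism", "lgbtphobia",
    "political", "racism", "religious intolerance", "misogyny",
    "xenophobia", "other",
]

def active_categories(record: dict) -> list:
    """Return the names of the hate categories set to 1 in this record."""
    return [c for c in HATE_CATEGORIES if record.get(c) == 1]

# Mirrors the example record above.
record = {
    "aggressive": 1, "hate": 1, "ageism": 0, "aporophobia": 1, "body shame": 0,
    "capacitism": 0, "lgbtphobia": 0, "political": 1, "racism": 0,
    "religious intolerance": 0, "misogyny": 0, "xenophobia": 0, "other": 0,
}
print(active_categories(record))  # → ['aporophobia', 'political']
```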
# Dataset content
Table 1 delineates the number of annotated documents in TuPyE, categorized by source study.
#### Table 1 - TuPyE composition
| Source | Count | Platform |
|----------------------|--------|-----------|
| Leite et al. | 21,000 | Twitter |
| TuPy | 10,000 | Twitter |
| Vargas et al. | 7,000 | Instagram |
| Fortuna et al. | 5,668 | Twitter |
Table 2 provides a detailed breakdown of the dataset by the occurrence of aggressive speech and of hate speech within the documents.
#### Table 2 - Count of non-aggressive and aggressive documents
| Label | Count |
|----------------------|--------|
| Non-aggressive | 31,121 |
| Aggressive - Not hate| 3,180 |
| Aggressive - Hate | 9,367 |
| Total | 43,668 |
Table 3 delineates the data volume for each distinct category of hate speech.
#### Table 3 - Hate categories count
| Label | Count |
|--------------------------|-------|
| Ageism | 57 |
| Aporophobia | 66 |
| Body shame | 285 |
| Capacitism | 99 |
| LGBTphobia | 805 |
| Political | 1,149 |
| Racism | 290 |
| Religious intolerance | 108 |
| Misogyny | 1,675 |
| Xenophobia | 357 |
| Other | 4,476 |
| Total | 9,367 |
# Acknowledgements
The TuPy-E project grew out of Felipe Oliveira's thesis and the work of several collaborators. It is financed by the Federal University of Rio de Janeiro ([UFRJ](https://ufrj.br/)) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering ([COPPE](https://coppe.ufrj.br/)).
# References
[1] P. Fortuna, J. Rocha Da Silva, J. Soler-Company, L. Wanner, and S. Nunes, “A Hierarchically-Labeled Portuguese Hate Speech Dataset,” 2019. [Online]. Available: https://github.com/t-davidson/hate-s
[2] J. A. Leite, D. F. Silva, K. Bontcheva, and C. Scarton, “Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis,” Oct. 2020, [Online]. Available: http://arxiv.org/abs/2010.04543
[3] F. Vargas, I. Carvalho, F. Góes, T. A. S. Pardo, and F. Benevenuto, “HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection,” 2022. [Online]. Available: https://aclanthology.org/2022.lrec-1.777/
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_119 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1367955616.0
num_examples: 268648
download_size: 1397215885
dataset_size: 1367955616.0
---
# Dataset Card for "chunk_119"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Felladrin/ChatML-databricks-dolly-15k | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
[databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) in ChatML format.
Python code used for conversion:
```python
from datasets import load_dataset
import pandas
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
pretrained_model_name_or_path="Felladrin/Llama-160M-Chat-v1"
)
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
# Merge the instruction and optional context into a single user turn,
# then render the two-turn conversation with the tokenizer's chat template.
def format(columns):
instruction = columns["instruction"].strip()
context = columns["context"].strip()
response = columns["response"].strip()
if context:
user_message = f"{instruction}\n\nContext:\n{context}"
else:
user_message = instruction
messages = [
{
"role": "user",
"content": user_message,
},
{
"role": "assistant",
"content": response,
},
]
return tokenizer.apply_chat_template(messages, tokenize=False)
pandas.DataFrame({"text": [format(columns) for columns in dataset]}).to_parquet("train.parquet", index=False)
```
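For reference, a ChatML-style template wraps each turn between `<|im_start|>` and `<|im_end|>` markers. The hand-rolled sketch below illustrates the shape of the converted text without depending on the tokenizer; the exact template of the model above may differ in details such as trailing newlines or generation prompts.

```python
def to_chatml(messages):
    """Render a list of {'role', 'content'} dicts in ChatML-style markup."""
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

text = to_chatml([
    {"role": "user", "content": "What is Dolly?"},
    {"role": "assistant", "content": "An open instruction-tuned dataset."},
])
print(text)
```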
|
fathyshalab/reklambox-balanced | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 561839
num_examples: 1102
- name: test
num_bytes: 140041
num_examples: 276
download_size: 0
dataset_size: 701880
---
# Dataset Card for "reklambox-balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
phanvancongthanh/pubchem_enamine_dedup | ---
dataset_info:
features:
- name: standardized_smiles
dtype: string
splits:
- name: train
num_bytes: 47338232
num_examples: 906545
download_size: 24899243
dataset_size: 47338232
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pubchem_enamine_dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lancelot53/bengali_ai_ipa | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: ipa
dtype: string
- name: row_id_column_name
dtype: int64
splits:
- name: train
num_bytes: 6974634
num_examples: 21999
- name: test
num_bytes: 5861099
num_examples: 27228
download_size: 6174391
dataset_size: 12835733
---
# Dataset Card for "bengali_ai_ipa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pattern90/sidewalk-imagery | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 3138386.0
num_examples: 10
download_size: 3139599
dataset_size: 3138386.0
---
# Dataset Card for "sidewalk-imagery"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
benayas/massive_llm_v0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: locale
dtype: string
- name: partition
dtype: string
- name: scenario
dtype:
class_label:
names:
'0': social
'1': transport
'2': calendar
'3': play
'4': news
'5': datetime
'6': recommendation
'7': email
'8': iot
'9': general
'10': audio
'11': lists
'12': qa
'13': cooking
'14': takeaway
'15': music
'16': alarm
'17': weather
- name: intent
dtype:
class_label:
names:
'0': datetime_query
'1': iot_hue_lightchange
'2': transport_ticket
'3': takeaway_query
'4': qa_stock
'5': general_greet
'6': recommendation_events
'7': music_dislikeness
'8': iot_wemo_off
'9': cooking_recipe
'10': qa_currency
'11': transport_traffic
'12': general_quirky
'13': weather_query
'14': audio_volume_up
'15': email_addcontact
'16': takeaway_order
'17': email_querycontact
'18': iot_hue_lightup
'19': recommendation_locations
'20': play_audiobook
'21': lists_createoradd
'22': news_query
'23': alarm_query
'24': iot_wemo_on
'25': general_joke
'26': qa_definition
'27': social_query
'28': music_settings
'29': audio_volume_other
'30': calendar_remove
'31': iot_hue_lightdim
'32': calendar_query
'33': email_sendemail
'34': iot_cleaning
'35': audio_volume_down
'36': play_radio
'37': cooking_query
'38': datetime_convert
'39': qa_maths
'40': iot_hue_lightoff
'41': iot_hue_lighton
'42': transport_query
'43': music_likeness
'44': email_query
'45': play_music
'46': audio_volume_mute
'47': social_post
'48': alarm_set
'49': qa_factoid
'50': calendar_set
'51': play_game
'52': alarm_remove
'53': lists_remove
'54': transport_taxi
'55': recommendation_movies
'56': iot_coffee
'57': music_query
'58': play_podcasts
'59': lists_query
- name: utt
dtype: string
- name: annot_utt
dtype: string
- name: worker_id
dtype: string
- name: slot_method
sequence:
- name: slot
dtype: string
- name: method
dtype: string
- name: judgments
sequence:
- name: worker_id
dtype: string
- name: intent_score
dtype: int8
- name: slots_score
dtype: int8
- name: grammar_score
dtype: int8
- name: spelling_score
dtype: int8
- name: language_identification
dtype: string
- name: category
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6371399
num_examples: 11514
- name: validation
num_bytes: 1119231
num_examples: 2033
- name: test
num_bytes: 1636424
num_examples: 2974
download_size: 1813395
dataset_size: 9127054
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
lucadiliello/raceqa | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: key
dtype: string
- name: labels
list:
- name: end
sequence: int64
- name: start
sequence: int64
splits:
- name: test
num_bytes: 1272251
num_examples: 674
download_size: 747482
dataset_size: 1272251
---
# Dataset Card for "raceqa"
Split taken from the MRQA 2019 Shared Task, formatted and filtered for Question Answering. For the original dataset, have a look [here](https://huggingface.co/datasets/mrqa). |
gagan3012/arabic-sts-pairwise | ---
dataset_info:
features:
- name: labels
sequence: int64
- name: sent1
sequence: string
- name: sent2
sequence: string
splits:
- name: train
num_bytes: 227137
num_examples: 1
- name: validation
num_bytes: 63521
num_examples: 1
- name: test
num_bytes: 33531
num_examples: 1
download_size: 182982
dataset_size: 324189
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Tamnemtf/VietNamese_lang | ---
license: llama2
language:
- vi
task_categories:
- text-generation
- text2text-generation
- text-classification
--- |
open-llm-leaderboard/details_mvpmaster__Einstein-4D-MoE-2x7b-test | ---
pretty_name: Evaluation run of mvpmaster/Einstein-4D-MoE-2x7b-test
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mvpmaster/Einstein-4D-MoE-2x7b-test](https://huggingface.co/mvpmaster/Einstein-4D-MoE-2x7b-test)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mvpmaster__Einstein-4D-MoE-2x7b-test\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-21T22:01:43.937024](https://huggingface.co/datasets/open-llm-leaderboard/details_mvpmaster__Einstein-4D-MoE-2x7b-test/blob/main/results_2024-03-21T22-01-43.937024.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6580323184365453,\n\
\ \"acc_stderr\": 0.031863507695258274,\n \"acc_norm\": 0.6584232173968556,\n\
\ \"acc_norm_stderr\": 0.03251626866674865,\n \"mc1\": 0.4565483476132191,\n\
\ \"mc1_stderr\": 0.01743728095318369,\n \"mc2\": 0.6228612855037876,\n\
\ \"mc2_stderr\": 0.01520495704402633\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6638225255972696,\n \"acc_stderr\": 0.013804855026205761,\n\
\ \"acc_norm\": 0.697098976109215,\n \"acc_norm_stderr\": 0.013428241573185349\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6809400517825135,\n\
\ \"acc_stderr\": 0.004651597209993092,\n \"acc_norm\": 0.865166301533559,\n\
\ \"acc_norm_stderr\": 0.003408478333768278\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7236842105263158,\n \"acc_stderr\": 0.03639057569952929,\n\
\ \"acc_norm\": 0.7236842105263158,\n \"acc_norm_stderr\": 0.03639057569952929\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\"\
: 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n\
\ \"acc_stderr\": 0.035676037996391706,\n \"acc_norm\": 0.6763005780346821,\n\
\ \"acc_norm_stderr\": 0.035676037996391706\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n\
\ \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n\
\ \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.041227371113703316,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.041227371113703316\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4312169312169312,\n \"acc_stderr\": 0.025506481698138215,\n \"\
acc_norm\": 0.4312169312169312,\n \"acc_norm_stderr\": 0.025506481698138215\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7870967741935484,\n \"acc_stderr\": 0.023287665127268545,\n \"\
acc_norm\": 0.7870967741935484,\n \"acc_norm_stderr\": 0.023287665127268545\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n \"\
acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.0315841532404771,\n\
\ \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.0315841532404771\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"\
acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.676923076923077,\n \"acc_stderr\": 0.023710888501970572,\n \
\ \"acc_norm\": 0.676923076923077,\n \"acc_norm_stderr\": 0.023710888501970572\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.02882088466625326,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.02882088466625326\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342856,\n\
\ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342856\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8513761467889909,\n \"acc_stderr\": 0.015251253773660834,\n \"\
acc_norm\": 0.8513761467889909,\n \"acc_norm_stderr\": 0.015251253773660834\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8676470588235294,\n \"acc_stderr\": 0.023784297520918856,\n \"\
acc_norm\": 0.8676470588235294,\n \"acc_norm_stderr\": 0.023784297520918856\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233494,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233494\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7130044843049327,\n\
\ \"acc_stderr\": 0.030360379710291954,\n \"acc_norm\": 0.7130044843049327,\n\
\ \"acc_norm_stderr\": 0.030360379710291954\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n\
\ \"acc_stderr\": 0.03755265865037181,\n \"acc_norm\": 0.8148148148148148,\n\
\ \"acc_norm_stderr\": 0.03755265865037181\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n\
\ \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n\
\ \"acc_stderr\": 0.02023714900899093,\n \"acc_norm\": 0.8931623931623932,\n\
\ \"acc_norm_stderr\": 0.02023714900899093\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8403575989782887,\n\
\ \"acc_stderr\": 0.01309793451326301,\n \"acc_norm\": 0.8403575989782887,\n\
\ \"acc_norm_stderr\": 0.01309793451326301\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7427745664739884,\n \"acc_stderr\": 0.023532925431044287,\n\
\ \"acc_norm\": 0.7427745664739884,\n \"acc_norm_stderr\": 0.023532925431044287\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.46033519553072627,\n\
\ \"acc_stderr\": 0.01666979959211203,\n \"acc_norm\": 0.46033519553072627,\n\
\ \"acc_norm_stderr\": 0.01666979959211203\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242557,\n\
\ \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242557\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n\
\ \"acc_stderr\": 0.02540383297817961,\n \"acc_norm\": 0.7234726688102894,\n\
\ \"acc_norm_stderr\": 0.02540383297817961\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7530864197530864,\n \"acc_stderr\": 0.023993501709042117,\n\
\ \"acc_norm\": 0.7530864197530864,\n \"acc_norm_stderr\": 0.023993501709042117\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5035460992907801,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.5035460992907801,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4595827900912647,\n\
\ \"acc_stderr\": 0.012728446067669968,\n \"acc_norm\": 0.4595827900912647,\n\
\ \"acc_norm_stderr\": 0.012728446067669968\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7022058823529411,\n \"acc_stderr\": 0.027778298701545443,\n\
\ \"acc_norm\": 0.7022058823529411,\n \"acc_norm_stderr\": 0.027778298701545443\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6748366013071896,\n \"acc_stderr\": 0.018950886770806315,\n \
\ \"acc_norm\": 0.6748366013071896,\n \"acc_norm_stderr\": 0.018950886770806315\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291293,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291293\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n\
\ \"acc_stderr\": 0.025196929874827072,\n \"acc_norm\": 0.8507462686567164,\n\
\ \"acc_norm_stderr\": 0.025196929874827072\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4565483476132191,\n\
\ \"mc1_stderr\": 0.01743728095318369,\n \"mc2\": 0.6228612855037876,\n\
\ \"mc2_stderr\": 0.01520495704402633\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8145224940805051,\n \"acc_stderr\": 0.010923965303140505\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6959818043972706,\n \
\ \"acc_stderr\": 0.012670420440198673\n }\n}\n```"
repo_url: https://huggingface.co/mvpmaster/Einstein-4D-MoE-2x7b-test
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|arc:challenge|25_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|gsm8k|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hellaswag|10_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T22-01-43.937024.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T22-01-43.937024.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- '**/details_harness|winogrande|5_2024-03-21T22-01-43.937024.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-21T22-01-43.937024.parquet'
- config_name: results
data_files:
- split: 2024_03_21T22_01_43.937024
path:
- results_2024-03-21T22-01-43.937024.parquet
- split: latest
path:
- results_2024-03-21T22-01-43.937024.parquet
---
# Dataset Card for Evaluation run of mvpmaster/Einstein-4D-MoE-2x7b-test
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mvpmaster/Einstein-4D-MoE-2x7b-test](https://huggingface.co/mvpmaster/Einstein-4D-MoE-2x7b-test) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mvpmaster__Einstein-4D-MoE-2x7b-test",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-21T22:01:43.937024](https://huggingface.co/datasets/open-llm-leaderboard/details_mvpmaster__Einstein-4D-MoE-2x7b-test/blob/main/results_2024-03-21T22-01-43.937024.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6580323184365453,
"acc_stderr": 0.031863507695258274,
"acc_norm": 0.6584232173968556,
"acc_norm_stderr": 0.03251626866674865,
"mc1": 0.4565483476132191,
"mc1_stderr": 0.01743728095318369,
"mc2": 0.6228612855037876,
"mc2_stderr": 0.01520495704402633
},
"harness|arc:challenge|25": {
"acc": 0.6638225255972696,
"acc_stderr": 0.013804855026205761,
"acc_norm": 0.697098976109215,
"acc_norm_stderr": 0.013428241573185349
},
"harness|hellaswag|10": {
"acc": 0.6809400517825135,
"acc_stderr": 0.004651597209993092,
"acc_norm": 0.865166301533559,
"acc_norm_stderr": 0.003408478333768278
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7236842105263158,
"acc_stderr": 0.03639057569952929,
"acc_norm": 0.7236842105263158,
"acc_norm_stderr": 0.03639057569952929
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569525,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569525
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.035676037996391706,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.035676037996391706
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.041227371113703316,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.041227371113703316
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4312169312169312,
"acc_stderr": 0.025506481698138215,
"acc_norm": 0.4312169312169312,
"acc_norm_stderr": 0.025506481698138215
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7870967741935484,
"acc_stderr": 0.023287665127268545,
"acc_norm": 0.7870967741935484,
"acc_norm_stderr": 0.023287665127268545
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4729064039408867,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.4729064039408867,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.0315841532404771,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.0315841532404771
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.676923076923077,
"acc_stderr": 0.023710888501970572,
"acc_norm": 0.676923076923077,
"acc_norm_stderr": 0.023710888501970572
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.02882088466625326,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.02882088466625326
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7016806722689075,
"acc_stderr": 0.029719142876342856,
"acc_norm": 0.7016806722689075,
"acc_norm_stderr": 0.029719142876342856
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8513761467889909,
"acc_stderr": 0.015251253773660834,
"acc_norm": 0.8513761467889909,
"acc_norm_stderr": 0.015251253773660834
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8676470588235294,
"acc_stderr": 0.023784297520918856,
"acc_norm": 0.8676470588235294,
"acc_norm_stderr": 0.023784297520918856
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233494,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233494
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7130044843049327,
"acc_stderr": 0.030360379710291954,
"acc_norm": 0.7130044843049327,
"acc_norm_stderr": 0.030360379710291954
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037181,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037181
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.03408997886857529,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.03408997886857529
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8931623931623932,
"acc_stderr": 0.02023714900899093,
"acc_norm": 0.8931623931623932,
"acc_norm_stderr": 0.02023714900899093
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8403575989782887,
"acc_stderr": 0.01309793451326301,
"acc_norm": 0.8403575989782887,
"acc_norm_stderr": 0.01309793451326301
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7427745664739884,
"acc_stderr": 0.023532925431044287,
"acc_norm": 0.7427745664739884,
"acc_norm_stderr": 0.023532925431044287
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.46033519553072627,
"acc_stderr": 0.01666979959211203,
"acc_norm": 0.46033519553072627,
"acc_norm_stderr": 0.01666979959211203
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242557,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242557
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.02540383297817961,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.02540383297817961
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.023993501709042117,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.023993501709042117
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5035460992907801,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.5035460992907801,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4595827900912647,
"acc_stderr": 0.012728446067669968,
"acc_norm": 0.4595827900912647,
"acc_norm_stderr": 0.012728446067669968
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7022058823529411,
"acc_stderr": 0.027778298701545443,
"acc_norm": 0.7022058823529411,
"acc_norm_stderr": 0.027778298701545443
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.018950886770806315,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.018950886770806315
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.028795185574291293,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.028795185574291293
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.025196929874827072,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.025196929874827072
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4565483476132191,
"mc1_stderr": 0.01743728095318369,
"mc2": 0.6228612855037876,
"mc2_stderr": 0.01520495704402633
},
"harness|winogrande|5": {
"acc": 0.8145224940805051,
"acc_stderr": 0.010923965303140505
},
"harness|gsm8k|5": {
"acc": 0.6959818043972706,
"acc_stderr": 0.012670420440198673
}
}
```
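Beyond reading the JSON by eye, the per-subject MMLU scores above can be aggregated with a few lines of Python. This is a minimal sketch: the `results` dictionary below is a small hand-copied subset of the payload above (not fetched from the Hub), and the key-prefix check mirrors the `harness|hendrycksTest-...` naming used throughout this card.

```python
# Average the per-subject MMLU ("hendrycksTest") accuracies from a
# results payload shaped like the JSON above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.38},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6296296296296297},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.7236842105263158},
    "harness|arc:challenge|25": {"acc": 0.6638225255972696},  # not MMLU, skipped
}

mmlu_accs = [
    task["acc"]
    for name, task in results.items()
    if name.startswith("harness|hendrycksTest-")
]
mmlu_mean = sum(mmlu_accs) / len(mmlu_accs)
print(f"{len(mmlu_accs)} MMLU subjects, mean acc {mmlu_mean:.4f}")
```

The same filter applied to the full payload reproduces the leaderboard-style MMLU aggregate, since every MMLU subject shares the `harness|hendrycksTest-` prefix.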
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-19b-prototype
---
pretty_name: Evaluation run of The-Face-Of-Goonery/Huginn-19b-prototype
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [The-Face-Of-Goonery/Huginn-19b-prototype](https://huggingface.co/The-Face-Of-Goonery/Huginn-19b-prototype)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-19b-prototype\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-27T22:16:21.455804](https://huggingface.co/datasets/open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-19b-prototype/blob/main/results_2023-10-27T22-16-21.455804.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each one in the results and the \"latest\"\
\ split of\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.055264261744966445,\n\
\ \"em_stderr\": 0.0023400062101028673,\n \"f1\": 0.1135434144295301,\n\
\ \"f1_stderr\": 0.0025693901510907753,\n \"acc\": 0.4039910888938488,\n\
\ \"acc_stderr\": 0.008790747649701043\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.055264261744966445,\n \"em_stderr\": 0.0023400062101028673,\n\
\ \"f1\": 0.1135434144295301,\n \"f1_stderr\": 0.0025693901510907753\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04397270659590599,\n \
\ \"acc_stderr\": 0.005647666449126459\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7640094711917916,\n \"acc_stderr\": 0.011933828850275626\n\
\ }\n}\n```"
repo_url: https://huggingface.co/The-Face-Of-Goonery/Huginn-19b-prototype
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|arc:challenge|25_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_27T22_16_21.455804
path:
- '**/details_harness|drop|3_2023-10-27T22-16-21.455804.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-27T22-16-21.455804.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_27T22_16_21.455804
path:
- '**/details_harness|gsm8k|5_2023-10-27T22-16-21.455804.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-27T22-16-21.455804.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hellaswag|10_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T13-00-00.797867.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T13-00-00.797867.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T13-00-00.797867.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_27T22_16_21.455804
path:
- '**/details_harness|winogrande|5_2023-10-27T22-16-21.455804.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-27T22-16-21.455804.parquet'
- config_name: results
data_files:
- split: 2023_10_10T13_00_00.797867
path:
- results_2023-10-10T13-00-00.797867.parquet
- split: 2023_10_27T22_16_21.455804
path:
- results_2023-10-27T22-16-21.455804.parquet
- split: latest
path:
- results_2023-10-27T22-16-21.455804.parquet
---
# Dataset Card for Evaluation run of The-Face-Of-Goonery/Huginn-19b-prototype
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/The-Face-Of-Goonery/Huginn-19b-prototype
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [The-Face-Of-Goonery/Huginn-19b-prototype](https://huggingface.co/The-Face-Of-Goonery/Huginn-19b-prototype) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-19b-prototype",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-27T22:16:21.455804](https://huggingface.co/datasets/open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-19b-prototype/blob/main/results_2023-10-27T22-16-21.455804.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each can be found in the "results" configuration and in the "latest" split of its own eval):
```python
{
"all": {
"em": 0.055264261744966445,
"em_stderr": 0.0023400062101028673,
"f1": 0.1135434144295301,
"f1_stderr": 0.0025693901510907753,
"acc": 0.4039910888938488,
"acc_stderr": 0.008790747649701043
},
"harness|drop|3": {
"em": 0.055264261744966445,
"em_stderr": 0.0023400062101028673,
"f1": 0.1135434144295301,
"f1_stderr": 0.0025693901510907753
},
"harness|gsm8k|5": {
"acc": 0.04397270659590599,
"acc_stderr": 0.005647666449126459
},
"harness|winogrande|5": {
"acc": 0.7640094711917916,
"acc_stderr": 0.011933828850275626
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
pchristm/CompMix | ---
license: cc-by-4.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- complex
- question answering
- complexQA
- QA
- heterogeneous sources
pretty_name: CompMix
size_categories:
- 1K<n<10K
splits:
- name: train
num_examples: 4966
- name: validation
num_examples: 1680
- name: test
num_examples: 2764
---
# Dataset Card for CompMix
## Dataset Description
- **Homepage:** [CompMix Website](https://qa.mpi-inf.mpg.de/compmix)
- **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de)
### Dataset Summary
CompMix collates the completed versions of the conversational questions in the [ConvMix dataset](https://convinse.mpi-inf.mpg.de), which are provided directly by crowdworkers from Amazon Mechanical Turk (AMT). Questions in CompMix exhibit complex phenomena such as multiple entities, relations, temporal conditions, comparisons, and aggregations. The dataset is aimed at evaluating QA methods that operate over a mixture of heterogeneous input sources (KB, text, tables, infoboxes). It has 9,410 questions, split into train (4,966 questions), dev (1,680), and test (2,764) sets. All answers in CompMix are grounded in the KB, except for dates (which are normalized) and other literals such as names.
Further details will be provided in a dedicated write-up soon.
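As with other Hub datasets, the splits listed above can be loaded with the `datasets` library. The sketch below is an illustration, not part of the official release: the repository id `pchristm/CompMix` is assumed from this card's location, and the split names and sizes follow this card's header.

```python
def load_compmix(split: str = "train"):
    """Load one split of CompMix from the Hugging Face Hub.

    The repository id below is an assumption based on this card's location.
    """
    from datasets import load_dataset  # deferred import: requires `datasets` installed
    return load_dataset("pchristm/CompMix", split=split)


# Split sizes as reported in this card's header.
EXPECTED_SIZES = {"train": 4966, "validation": 1680, "test": 2764}
assert sum(EXPECTED_SIZES.values()) == 9410  # total questions stated in the summary
```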
### Dataset Creation
CompMix collates the completed versions of the conversational questions in ConvMix, that are provided directly by the crowdworkers.
The ConvMix benchmark, on which CompMix is based, was created by real humans. We tried to ensure that the collected data is as natural as possible. Master crowdworkers on Amazon Mechanical Turk (AMT) selected an entity of interest in a specific domain, and then started issuing conversational questions about this entity, potentially drifting to other topics of interest throughout the course of the conversation. By letting users choose the entities themselves, we aimed to ensure that they are more interested in the topics the conversations are based on. After writing a question, users were asked to find the answer in either Wikidata, Wikipedia text, a Wikipedia table, or a Wikipedia infobox, whichever they found more natural for the specific question at hand. Since Wikidata requires some basic understanding of knowledge bases, we provided video guidelines that illustrated, following an example conversation, how Wikidata can be used for detecting answers. For each conversational question, which might be incomplete, the crowdworker provided a completed question that is intent-explicit and can be answered without the conversational context. These completed questions constitute the CompMix dataset. For each question, we also provide the source in which the user found the answer, as well as the question entities.
|
argilla/self-instruct-eval | ---
dataset_info:
features:
- name: text
dtype: 'null'
- name: inputs
struct:
- name: input
dtype: string
- name: response
dtype: string
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: vectors
struct:
- name: completion
sequence: float64
- name: prompt
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: 'null'
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
dtype: 'null'
splits:
- name: train
num_bytes: 1037904569
num_examples: 82612
download_size: 834389885
dataset_size: 1037904569
---
# Dataset Card for "self-instruct-eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LishaNM/Mistral_dataset | ---
license: apache-2.0
task_categories:
- text-generation
--- |
Nikutka/L1_poleval_korpus_wzorcowy_test | ---
dataset_info:
features:
- name: content
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1963
num_examples: 25
download_size: 2784
dataset_size: 1963
---
# Dataset Card for "L1_poleval_korpus_wzorcowy_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_qqp_serial_verb_give | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 12208
num_examples: 65
- name: test
num_bytes: 120104
num_examples: 656
- name: train
num_bytes: 99639
num_examples: 538
download_size: 136302
dataset_size: 231951
---
# Dataset Card for "MULTI_VALUE_qqp_serial_verb_give"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qwedsacf/cortex_test | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 22184352
num_examples: 10000
download_size: 10501828
dataset_size: 22184352
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jacobbuckman2/abc | ---
license: afl-3.0
---
|
JasiekKaczmarczyk/giant-midi-quantized | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart_bin
sequence: int8
length: 128
- name: duration_bin
sequence: int8
length: 128
- name: velocity_bin
sequence: int8
length: 128
splits:
- name: train
num_bytes: 168083130
num_examples: 238919
- name: validation
num_bytes: 20721368
num_examples: 29453
- name: test
num_bytes: 20062265
num_examples: 28531
download_size: 77193117
dataset_size: 208866763
---
# Dataset Card for "giant-midi-quantized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1712993395 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 9045
num_examples: 20
download_size: 8176
dataset_size: 9045
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1712993395"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |