| datasetId | card |
|---|---|
enoahjr/twitter_dataset_1713219998 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 384399
num_examples: 1166
download_size: 204184
dataset_size: 384399
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
novus677/nlp-xsum-test-tinyllama | ---
dataset_info:
features:
- name: summary
dtype: string
- name: prompt
dtype: string
splits:
- name: test
num_bytes: 27311507
num_examples: 11334
download_size: 17007395
dataset_size: 27311507
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
mcanoglu/big-dataset-test-defect-cwe-grouping | ---
license: mit
---
|
letao670982/usecase5_fake_ques_ans | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 699529
num_examples: 1689
- name: vaild
num_bytes: 87550
num_examples: 211
- name: test
num_bytes: 88694
num_examples: 212
download_size: 333507
dataset_size: 875773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: vaild
path: data/vaild-*
- split: test
path: data/test-*
---
|
FASOXO/COUI | ---
license: openrail
---
|
mokshaannigeri/CS482_Dagster_Dataset | ---
dataset_info:
features:
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: housing_median_age
dtype: float64
- name: total_rooms
dtype: float64
- name: total_bedrooms
dtype: float64
- name: population
dtype: float64
- name: households
dtype: float64
- name: median_income
dtype: float64
- name: median_house_value
dtype: float64
- name: ocean_proximity
dtype: string
splits:
- name: train
num_bytes: 1737680
num_examples: 20640
download_size: 824144
dataset_size: 1737680
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
size_categories:
- 10K<n<100K
--- |
copenlu/spanex | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- 'rationale-extraction'
- reasoning
- nli
- fact-checking
- explainability
pretty_name: spanex
size_categories:
- 1K<n<10K
configs:
- config_name: snli_extended
data_files:
- split: test
path: snli_extended.jsonl
- config_name: fever_extended
data_files:
- split: test
path: fever_extended.jsonl
- config_name: snli
data_files:
- split: test
path: snli.jsonl
- config_name: fever
data_files:
- split: test
path: fever.jsonl
---
SpanEx consists of 7071 instances annotated for span interactions.
It is the first dataset of human phrase-level interaction explanations with explicit labels for the interaction types.
Moreover, each instance is annotated by three annotators, which opens new avenues for studying human explanation agreement, an understudied area in the explainability literature.
Our study reveals that while human annotators often agree on span interactions, they also offer complementary reasons, collectively providing a comprehensive set of explanations for a prediction.
We collect span-interaction explanations for NLI on the SNLI dataset and for fact-checking (FC) on the FEVER dataset.
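Because each instance carries annotations from three annotators, agreement between annotators can be quantified directly. Below is a minimal sketch using span-level Jaccard overlap; the representation of spans as `(start, end, label)` tuples is an illustrative assumption, not the dataset's actual schema.

```python
def jaccard_agreement(spans_a, spans_b):
    """Jaccard overlap between two annotators' span sets."""
    a, b = set(spans_a), set(spans_b)
    if not a and not b:
        return 1.0  # both annotators marked nothing: perfect agreement
    return len(a & b) / len(a | b)

# hypothetical annotations from two of the three annotators
ann1 = [(0, 4, "synonym"), (10, 15, "contradiction")]
ann2 = [(0, 4, "synonym"), (20, 25, "hypernym")]
print(jaccard_agreement(ann1, ann2))  # 0.3333333333333333
```

Averaging this measure over all annotator pairs gives a simple dataset-level agreement figure.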
Please cite the following paper if you use this dataset:
```
@inproceedings{choudhury-etal-2023-explaining,
title = "Explaining Interactions Between Text Spans",
author = "Choudhury, Sagnik and
Atanasova, Pepa and
Augenstein, Isabelle",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.783",
doi = "10.18653/v1/2023.emnlp-main.783",
pages = "12709--12730",
abstract = "Reasoning over spans of tokens from different parts of the input is essential for natural language understanding (NLU) tasks such as fact-checking (FC), machine reading comprehension (MRC) or natural language inference (NLI). However, existing highlight-based explanations primarily focus on identifying individual important features or interactions only between adjacent tokens or tuples of tokens. Most notably, there is a lack of annotations capturing the human decision-making process with respect to the necessary interactions for informed decision-making in such tasks. To bridge this gap, we introduce SpanEx, a multi-annotator dataset of human span interaction explanations for two NLU tasks: NLI and FC. We then investigate the decision-making processes of multiple fine-tuned large language models in terms of the employed connections between spans in separate parts of the input and compare them to the human reasoning processes. Finally, we present a novel community detection based unsupervised method to extract such interaction explanations. We make the code and the dataset available on [Github](https://github.com/copenlu/spanex). The dataset is also available on [Huggingface datasets](https://huggingface.co/datasets/copenlu/spanex).",
}
``` |
aisyahhrazak/crawl-utusansarawak | ---
language:
- ms
---
About
- Data scraped from http://www.utusansarawak.com.my/ |
CyberHarem/thompson_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of thompson/トンプソン/汤姆森 (Girls' Frontline)
This dataset covers thompson/トンプソン/汤姆森 (Girls' Frontline) and contains 102 images with their tags.
The core tags of this character are `short_hair, red_eyes, breasts, large_breasts, sunglasses, bangs, headphones, glasses`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 102 | 120.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/thompson_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 102 | 70.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/thompson_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 243 | 147.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/thompson_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 102 | 108.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/thompson_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 243 | 206.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/thompson_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/thompson_girlsfrontline',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from it.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | 1girl, looking_at_viewer, solo, headphones_around_neck, smile, pants, shirt, submachine_gun, white_background, simple_background, fedora, navel, red_necktie, sleeveless, holding_gun, mouth_hold, bare_shoulders, yellow_gloves, magazine_(weapon) |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | headphones_around_neck | smile | pants | shirt | submachine_gun | white_background | simple_background | fedora | navel | red_necktie | sleeveless | holding_gun | mouth_hold | bare_shoulders | yellow_gloves | magazine_(weapon) |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:-------------------------|:--------|:--------|:--------|:-----------------|:-------------------|:--------------------|:---------|:--------|:--------------|:-------------|:--------------|:-------------|:-----------------|:----------------|:--------------------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
jordiclive/OATS-ABSA | ---
dataset_info:
features:
- name: comment
dtype: string
- name: quad
sequence:
sequence: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 2111953
num_examples: 3987
- name: test
num_bytes: 266209
num_examples: 500
- name: validation
num_bytes: 88525
num_examples: 170
download_size: 1136999
dataset_size: 2466687
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# OATS Dataset
## Description
The OATS (Opinion Aspect Target Sentiment) dataset is a comprehensive collection designed for the Aspect Sentiment Quad Prediction (ASQP) or Aspect-Category-Opinion-Sentiment (ACOS) task. This dataset aims to facilitate research in aspect-based sentiment analysis by providing detailed opinion quadruples extracted from review texts. Additionally, for each review, we offer tuples summarizing the dominant sentiment polarity toward each aspect category discussed.
The dataset covers three distinct domains: Amazon FineFood reviews, Coursera course reviews, and TripAdvisor Hotel reviews, offering a broad spectrum for analysis across different types of services and products.
## Structure
The dataset is structured into two primary components:
- **Opinion Quadruples**: Detailed annotations at the level of individual opinions, including the aspect, the sentiment target, and the corresponding sentiment.
- **Review-Level Tuples**: Aggregate information at the review level, indicating the overall sentiment polarity for each aspect category mentioned.
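The nested `quad` feature declared in the YAML above (a sequence of string sequences) can be illustrated with a small, purely hypothetical row; the element order inside each quadruple (target, aspect category, opinion phrase, polarity) is an assumption for illustration, not documented by the card.

```python
# Hypothetical row mirroring the card's features: comment, quad, dataset.
# The order of elements inside each quadruple is assumed, not documented here.
example = {
    "comment": "The course videos were clear but the quizzes felt rushed.",
    "quad": [
        ["videos", "content#quality", "clear", "positive"],
        ["quizzes", "assessment#pacing", "rushed", "negative"],
    ],
    "dataset": "coursera",
}

def polarity_counts(row):
    """Count sentiment polarities across one review's opinion quadruples."""
    counts = {}
    for quad in row["quad"]:
        polarity = quad[-1]
        counts[polarity] = counts.get(polarity, 0) + 1
    return counts

print(polarity_counts(example))  # {'positive': 1, 'negative': 1}
```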
## Domains
- Amazon FineFood Reviews
- Coursera Course Reviews
- TripAdvisor Hotel Reviews
Each domain is annotated from scratch, ensuring high-quality data for nuanced sentiment analysis tasks.
## Citation
If you use the OATS dataset in your research, please cite the original authors:
```
@misc{chebolu2023oats,
title={OATS: Opinion Aspect Target Sentiment Quadruple Extraction Dataset for Aspect-Based Sentiment Analysis},
author={Siva Uday Sampreeth Chebolu and Franck Dernoncourt and Nedim Lipka and Thamar Solorio},
year={2023},
eprint={2309.13297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage
This dataset has been curated for easy access and integration into existing NLP pipelines. You can load it with Hugging Face's `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("jordiclive/OATS-ABSA")
```
|
open-llm-leaderboard/details_lqtrung1998__Codellama-7b-hf-ReFT-Rerank-GSM8k | ---
pretty_name: Evaluation run of lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k](https://huggingface.co/lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lqtrung1998__Codellama-7b-hf-ReFT-Rerank-GSM8k\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-05T00:37:45.743048](https://huggingface.co/datasets/open-llm-leaderboard/details_lqtrung1998__Codellama-7b-hf-ReFT-Rerank-GSM8k/blob/main/results_2024-03-05T00-37-45.743048.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2468037154694787,\n\
\ \"acc_stderr\": 0.030588940762331273,\n \"acc_norm\": 0.2474659083336445,\n\
\ \"acc_norm_stderr\": 0.031401393639476693,\n \"mc1\": 0.2460220318237454,\n\
\ \"mc1_stderr\": 0.015077219200662578,\n \"mc2\": 0.4997242001927576,\n\
\ \"mc2_stderr\": 0.01687751410270577\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.24232081911262798,\n \"acc_stderr\": 0.012521593295800116,\n\
\ \"acc_norm\": 0.29266211604095566,\n \"acc_norm_stderr\": 0.01329591610361941\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2555267874925314,\n\
\ \"acc_stderr\": 0.0043526552636823385,\n \"acc_norm\": 0.2613025293766182,\n\
\ \"acc_norm_stderr\": 0.004384465219070756\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3037037037037037,\n\
\ \"acc_stderr\": 0.039725528847851375,\n \"acc_norm\": 0.3037037037037037,\n\
\ \"acc_norm_stderr\": 0.039725528847851375\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.24342105263157895,\n \"acc_stderr\": 0.034923496688842384,\n\
\ \"acc_norm\": 0.24342105263157895,\n \"acc_norm_stderr\": 0.034923496688842384\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.26,\n\
\ \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n \
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.21132075471698114,\n \"acc_stderr\": 0.02512576648482784,\n\
\ \"acc_norm\": 0.21132075471698114,\n \"acc_norm_stderr\": 0.02512576648482784\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2152777777777778,\n\
\ \"acc_stderr\": 0.03437079344106134,\n \"acc_norm\": 0.2152777777777778,\n\
\ \"acc_norm_stderr\": 0.03437079344106134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322674,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322674\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\"\
: 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322674,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322674\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2658959537572254,\n\
\ \"acc_stderr\": 0.03368762932259431,\n \"acc_norm\": 0.2658959537572254,\n\
\ \"acc_norm_stderr\": 0.03368762932259431\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.04220773659171452,\n\
\ \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.04220773659171452\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \"acc_norm\": 0.22,\n\
\ \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.20851063829787234,\n \"acc_stderr\": 0.026556982117838725,\n\
\ \"acc_norm\": 0.20851063829787234,\n \"acc_norm_stderr\": 0.026556982117838725\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21929824561403508,\n\
\ \"acc_stderr\": 0.03892431106518753,\n \"acc_norm\": 0.21929824561403508,\n\
\ \"acc_norm_stderr\": 0.03892431106518753\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.296551724137931,\n \"acc_stderr\": 0.03806142687309993,\n\
\ \"acc_norm\": 0.296551724137931,\n \"acc_norm_stderr\": 0.03806142687309993\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.23809523809523808,\n \"acc_stderr\": 0.021935878081184756,\n \"\
acc_norm\": 0.23809523809523808,\n \"acc_norm_stderr\": 0.021935878081184756\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.037184890068181146,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.037184890068181146\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.22258064516129034,\n\
\ \"acc_stderr\": 0.023664216671642518,\n \"acc_norm\": 0.22258064516129034,\n\
\ \"acc_norm_stderr\": 0.023664216671642518\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.22167487684729065,\n \"acc_stderr\": 0.0292255758924896,\n\
\ \"acc_norm\": 0.22167487684729065,\n \"acc_norm_stderr\": 0.0292255758924896\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \"acc_norm\"\
: 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.3151515151515151,\n \"acc_stderr\": 0.0362773057502241,\n\
\ \"acc_norm\": 0.3151515151515151,\n \"acc_norm_stderr\": 0.0362773057502241\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.2828282828282828,\n \"acc_stderr\": 0.03208779558786753,\n \"\
acc_norm\": 0.2828282828282828,\n \"acc_norm_stderr\": 0.03208779558786753\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.27979274611398963,\n \"acc_stderr\": 0.03239637046735703,\n\
\ \"acc_norm\": 0.27979274611398963,\n \"acc_norm_stderr\": 0.03239637046735703\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2205128205128205,\n \"acc_stderr\": 0.02102067268082791,\n \
\ \"acc_norm\": 0.2205128205128205,\n \"acc_norm_stderr\": 0.02102067268082791\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2,\n \"acc_stderr\": 0.024388430433987654,\n \"acc_norm\"\
: 0.2,\n \"acc_norm_stderr\": 0.024388430433987654\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\"\
: {\n \"acc\": 0.25630252100840334,\n \"acc_stderr\": 0.02835962087053395,\n\
\ \"acc_norm\": 0.25630252100840334,\n \"acc_norm_stderr\": 0.02835962087053395\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.25165562913907286,\n \"acc_stderr\": 0.035433042343899844,\n \"\
acc_norm\": 0.25165562913907286,\n \"acc_norm_stderr\": 0.035433042343899844\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.21651376146788992,\n \"acc_stderr\": 0.01765871059444315,\n \"\
acc_norm\": 0.21651376146788992,\n \"acc_norm_stderr\": 0.01765871059444315\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.19444444444444445,\n \"acc_stderr\": 0.026991454502036716,\n \"\
acc_norm\": 0.19444444444444445,\n \"acc_norm_stderr\": 0.026991454502036716\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.2549019607843137,\n \"acc_stderr\": 0.030587591351604246,\n \"\
acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.030587591351604246\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.25738396624472576,\n \"acc_stderr\": 0.02845882099146031,\n \
\ \"acc_norm\": 0.25738396624472576,\n \"acc_norm_stderr\": 0.02845882099146031\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.25112107623318386,\n\
\ \"acc_stderr\": 0.02910522083322462,\n \"acc_norm\": 0.25112107623318386,\n\
\ \"acc_norm_stderr\": 0.02910522083322462\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.22137404580152673,\n \"acc_stderr\": 0.0364129708131373,\n\
\ \"acc_norm\": 0.22137404580152673,\n \"acc_norm_stderr\": 0.0364129708131373\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2644628099173554,\n \"acc_stderr\": 0.040261875275912073,\n \"\
acc_norm\": 0.2644628099173554,\n \"acc_norm_stderr\": 0.040261875275912073\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.28703703703703703,\n\
\ \"acc_stderr\": 0.043733130409147614,\n \"acc_norm\": 0.28703703703703703,\n\
\ \"acc_norm_stderr\": 0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.294478527607362,\n \"acc_stderr\": 0.03581165790474082,\n\
\ \"acc_norm\": 0.294478527607362,\n \"acc_norm_stderr\": 0.03581165790474082\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.20535714285714285,\n\
\ \"acc_stderr\": 0.038342410214190714,\n \"acc_norm\": 0.20535714285714285,\n\
\ \"acc_norm_stderr\": 0.038342410214190714\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.03760178006026621,\n\
\ \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.03760178006026621\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2606837606837607,\n\
\ \"acc_stderr\": 0.028760348956523414,\n \"acc_norm\": 0.2606837606837607,\n\
\ \"acc_norm_stderr\": 0.028760348956523414\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909284,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909284\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.27586206896551724,\n\
\ \"acc_stderr\": 0.01598281477469563,\n \"acc_norm\": 0.27586206896551724,\n\
\ \"acc_norm_stderr\": 0.01598281477469563\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n\
\ \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2324022346368715,\n\
\ \"acc_stderr\": 0.014125968754673401,\n \"acc_norm\": 0.2324022346368715,\n\
\ \"acc_norm_stderr\": 0.014125968754673401\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.24836601307189543,\n \"acc_stderr\": 0.024739981355113592,\n\
\ \"acc_norm\": 0.24836601307189543,\n \"acc_norm_stderr\": 0.024739981355113592\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.3022508038585209,\n\
\ \"acc_stderr\": 0.026082700695399655,\n \"acc_norm\": 0.3022508038585209,\n\
\ \"acc_norm_stderr\": 0.026082700695399655\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.23765432098765432,\n \"acc_stderr\": 0.023683591837008553,\n\
\ \"acc_norm\": 0.23765432098765432,\n \"acc_norm_stderr\": 0.023683591837008553\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2624113475177305,\n \"acc_stderr\": 0.026244920349842993,\n \
\ \"acc_norm\": 0.2624113475177305,\n \"acc_norm_stderr\": 0.026244920349842993\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2607561929595828,\n\
\ \"acc_stderr\": 0.011213471559602338,\n \"acc_norm\": 0.2607561929595828,\n\
\ \"acc_norm_stderr\": 0.011213471559602338\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.17647058823529413,\n \"acc_stderr\": 0.023157468308559342,\n\
\ \"acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.023157468308559342\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.21895424836601307,\n \"acc_stderr\": 0.016729937565537527,\n \
\ \"acc_norm\": 0.21895424836601307,\n \"acc_norm_stderr\": 0.016729937565537527\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n\
\ \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n\
\ \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.1673469387755102,\n \"acc_stderr\": 0.023897144768914524,\n\
\ \"acc_norm\": 0.1673469387755102,\n \"acc_norm_stderr\": 0.023897144768914524\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.26865671641791045,\n\
\ \"acc_stderr\": 0.03134328358208954,\n \"acc_norm\": 0.26865671641791045,\n\
\ \"acc_norm_stderr\": 0.03134328358208954\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.22289156626506024,\n\
\ \"acc_stderr\": 0.032400048255946876,\n \"acc_norm\": 0.22289156626506024,\n\
\ \"acc_norm_stderr\": 0.032400048255946876\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.03377310252209195,\n\
\ \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.03377310252209195\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2460220318237454,\n\
\ \"mc1_stderr\": 0.015077219200662578,\n \"mc2\": 0.4997242001927576,\n\
\ \"mc2_stderr\": 0.01687751410270577\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.510655090765588,\n \"acc_stderr\": 0.014049294536290396\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|arc:challenge|25_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|gsm8k|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hellaswag|10_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-05T00-37-45.743048.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-05T00-37-45.743048.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- '**/details_harness|winogrande|5_2024-03-05T00-37-45.743048.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-05T00-37-45.743048.parquet'
- config_name: results
data_files:
- split: 2024_03_05T00_37_45.743048
path:
- results_2024-03-05T00-37-45.743048.parquet
- split: latest
path:
- results_2024-03-05T00-37-45.743048.parquet
---
# Dataset Card for Evaluation run of lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k](https://huggingface.co/lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
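As a quick illustration of the split-naming convention above, the run timestamp appears to map to the split name by replacing the `-` and `:` separators with underscores (this is an inference from the split names listed in this card's configs, not a documented guarantee):

```python
# Sketch: derive a config split name from a run timestamp.
# Assumption: "-" and ":" are replaced by "_", as seen in this card's splits.
run_timestamp = "2024-03-05T00:37:45.743048"
split_name = run_timestamp.replace("-", "_").replace(":", "_")
print(split_name)  # 2024_03_05T00_37_45.743048
```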
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lqtrung1998__Codellama-7b-hf-ReFT-Rerank-GSM8k",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-05T00:37:45.743048](https://huggingface.co/datasets/open-llm-leaderboard/details_lqtrung1998__Codellama-7b-hf-ReFT-Rerank-GSM8k/blob/main/results_2024-03-05T00-37-45.743048.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2468037154694787,
"acc_stderr": 0.030588940762331273,
"acc_norm": 0.2474659083336445,
"acc_norm_stderr": 0.031401393639476693,
"mc1": 0.2460220318237454,
"mc1_stderr": 0.015077219200662578,
"mc2": 0.4997242001927576,
"mc2_stderr": 0.01687751410270577
},
"harness|arc:challenge|25": {
"acc": 0.24232081911262798,
"acc_stderr": 0.012521593295800116,
"acc_norm": 0.29266211604095566,
"acc_norm_stderr": 0.01329591610361941
},
"harness|hellaswag|10": {
"acc": 0.2555267874925314,
"acc_stderr": 0.0043526552636823385,
"acc_norm": 0.2613025293766182,
"acc_norm_stderr": 0.004384465219070756
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.039725528847851375,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.039725528847851375
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.24342105263157895,
"acc_stderr": 0.034923496688842384,
"acc_norm": 0.24342105263157895,
"acc_norm_stderr": 0.034923496688842384
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21132075471698114,
"acc_stderr": 0.02512576648482784,
"acc_norm": 0.21132075471698114,
"acc_norm_stderr": 0.02512576648482784
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2152777777777778,
"acc_stderr": 0.03437079344106134,
"acc_norm": 0.2152777777777778,
"acc_norm_stderr": 0.03437079344106134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322674,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322674
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322674,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322674
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2658959537572254,
"acc_stderr": 0.03368762932259431,
"acc_norm": 0.2658959537572254,
"acc_norm_stderr": 0.03368762932259431
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.04220773659171452,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.04220773659171452
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.20851063829787234,
"acc_stderr": 0.026556982117838725,
"acc_norm": 0.20851063829787234,
"acc_norm_stderr": 0.026556982117838725
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.21929824561403508,
"acc_stderr": 0.03892431106518753,
"acc_norm": 0.21929824561403508,
"acc_norm_stderr": 0.03892431106518753
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.296551724137931,
"acc_stderr": 0.03806142687309993,
"acc_norm": 0.296551724137931,
"acc_norm_stderr": 0.03806142687309993
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.23809523809523808,
"acc_stderr": 0.021935878081184756,
"acc_norm": 0.23809523809523808,
"acc_norm_stderr": 0.021935878081184756
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.037184890068181146,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.037184890068181146
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.22258064516129034,
"acc_stderr": 0.023664216671642518,
"acc_norm": 0.22258064516129034,
"acc_norm_stderr": 0.023664216671642518
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.22167487684729065,
"acc_stderr": 0.0292255758924896,
"acc_norm": 0.22167487684729065,
"acc_norm_stderr": 0.0292255758924896
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.3151515151515151,
"acc_stderr": 0.0362773057502241,
"acc_norm": 0.3151515151515151,
"acc_norm_stderr": 0.0362773057502241
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.2828282828282828,
"acc_stderr": 0.03208779558786753,
"acc_norm": 0.2828282828282828,
"acc_norm_stderr": 0.03208779558786753
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.27979274611398963,
"acc_stderr": 0.03239637046735703,
"acc_norm": 0.27979274611398963,
"acc_norm_stderr": 0.03239637046735703
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2205128205128205,
"acc_stderr": 0.02102067268082791,
"acc_norm": 0.2205128205128205,
"acc_norm_stderr": 0.02102067268082791
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2,
"acc_stderr": 0.024388430433987654,
"acc_norm": 0.2,
"acc_norm_stderr": 0.024388430433987654
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.25630252100840334,
"acc_stderr": 0.02835962087053395,
"acc_norm": 0.25630252100840334,
"acc_norm_stderr": 0.02835962087053395
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.25165562913907286,
"acc_stderr": 0.035433042343899844,
"acc_norm": 0.25165562913907286,
"acc_norm_stderr": 0.035433042343899844
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.21651376146788992,
"acc_stderr": 0.01765871059444315,
"acc_norm": 0.21651376146788992,
"acc_norm_stderr": 0.01765871059444315
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.19444444444444445,
"acc_stderr": 0.026991454502036716,
"acc_norm": 0.19444444444444445,
"acc_norm_stderr": 0.026991454502036716
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.030587591351604246,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.030587591351604246
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.25738396624472576,
"acc_stderr": 0.02845882099146031,
"acc_norm": 0.25738396624472576,
"acc_norm_stderr": 0.02845882099146031
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.25112107623318386,
"acc_stderr": 0.02910522083322462,
"acc_norm": 0.25112107623318386,
"acc_norm_stderr": 0.02910522083322462
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.22137404580152673,
"acc_stderr": 0.0364129708131373,
"acc_norm": 0.22137404580152673,
"acc_norm_stderr": 0.0364129708131373
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2644628099173554,
"acc_stderr": 0.040261875275912073,
"acc_norm": 0.2644628099173554,
"acc_norm_stderr": 0.040261875275912073
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.28703703703703703,
"acc_stderr": 0.043733130409147614,
"acc_norm": 0.28703703703703703,
"acc_norm_stderr": 0.043733130409147614
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.294478527607362,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.294478527607362,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.20535714285714285,
"acc_stderr": 0.038342410214190714,
"acc_norm": 0.20535714285714285,
"acc_norm_stderr": 0.038342410214190714
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2606837606837607,
"acc_stderr": 0.028760348956523414,
"acc_norm": 0.2606837606837607,
"acc_norm_stderr": 0.028760348956523414
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.27586206896551724,
"acc_stderr": 0.01598281477469563,
"acc_norm": 0.27586206896551724,
"acc_norm_stderr": 0.01598281477469563
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2324022346368715,
"acc_stderr": 0.014125968754673401,
"acc_norm": 0.2324022346368715,
"acc_norm_stderr": 0.014125968754673401
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.24836601307189543,
"acc_stderr": 0.024739981355113592,
"acc_norm": 0.24836601307189543,
"acc_norm_stderr": 0.024739981355113592
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.3022508038585209,
"acc_stderr": 0.026082700695399655,
"acc_norm": 0.3022508038585209,
"acc_norm_stderr": 0.026082700695399655
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.23765432098765432,
"acc_stderr": 0.023683591837008553,
"acc_norm": 0.23765432098765432,
"acc_norm_stderr": 0.023683591837008553
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2624113475177305,
"acc_stderr": 0.026244920349842993,
"acc_norm": 0.2624113475177305,
"acc_norm_stderr": 0.026244920349842993
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2607561929595828,
"acc_stderr": 0.011213471559602338,
"acc_norm": 0.2607561929595828,
"acc_norm_stderr": 0.011213471559602338
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.17647058823529413,
"acc_stderr": 0.023157468308559342,
"acc_norm": 0.17647058823529413,
"acc_norm_stderr": 0.023157468308559342
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.21895424836601307,
"acc_stderr": 0.016729937565537527,
"acc_norm": 0.21895424836601307,
"acc_norm_stderr": 0.016729937565537527
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.1673469387755102,
"acc_stderr": 0.023897144768914524,
"acc_norm": 0.1673469387755102,
"acc_norm_stderr": 0.023897144768914524
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.26865671641791045,
"acc_stderr": 0.03134328358208954,
"acc_norm": 0.26865671641791045,
"acc_norm_stderr": 0.03134328358208954
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-virology|5": {
"acc": 0.22289156626506024,
"acc_stderr": 0.032400048255946876,
"acc_norm": 0.22289156626506024,
"acc_norm_stderr": 0.032400048255946876
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.03377310252209195,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.03377310252209195
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2460220318237454,
"mc1_stderr": 0.015077219200662578,
"mc2": 0.4997242001927576,
"mc2_stderr": 0.01687751410270577
},
"harness|winogrande|5": {
"acc": 0.510655090765588,
"acc_stderr": 0.014049294536290396
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
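The per-task `acc_stderr` values in the results above appear to follow the finite-sample binomial standard error, sqrt(p·(1−p)/(n−1)), which is what lm-evaluation-harness's sample-mean stderr reduces to for 0/1 scored items. A quick sanity check, assuming the econometrics subset has n = 114 questions (so acc = 25/114):

```python
import math

def binomial_stderr(p: float, n: int) -> float:
    """Sample standard error of a proportion over n 0/1-scored items."""
    return math.sqrt(p * (1 - p) / (n - 1))

# Econometrics: acc = 0.21929824561403508 with (assumed) n = 114 questions.
stderr = binomial_stderr(25 / 114, 114)
# Should reproduce the reported acc_stderr of ~0.03892431.
assert abs(stderr - 0.03892431106518753) < 1e-6
```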
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
automated-research-group/llama2_7b_chat-siqa | ---
dataset_info:
features:
- name: id
dtype: string
- name: request
dtype: string
- name: response
dtype: string
- name: input_perplexity
dtype: float64
- name: input_likelihood
dtype: float64
- name: output_perplexity
dtype: float64
- name: output_likelihood
dtype: float64
splits:
- name: train
num_bytes: 973655
num_examples: 1935
download_size: 337947
dataset_size: 973655
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Introvert696/klavakoka | ---
license: openrail
---
|
SDbiaseval/jobs-sd-1.4 | ---
dataset_info:
features:
- name: adjective
dtype: string
- name: profession
dtype: string
- name: 'no'
dtype: int32
- name: image_path
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 1161828556.5
num_examples: 31500
download_size: 1167871729
dataset_size: 1161828556.5
---
# Dataset Card for "dataset-v-1.4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sounakray1997/CoNLLU_WikiNEuRal | ---
license: apache-2.0
---
|
SocialGrep/ten-million-reddit-answers | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for ten-million-reddit-answers
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers)
### Dataset Summary
This corpus contains ten million question-answer pairs, labeled with their Reddit scores and pre-packaged with the results of a basic sentiment predictor.
The data was procured from /r/AskReddit using [SocialGrep](https://socialgrep.com/?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers).
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Because the two types differ in structure, they are stored in two separate files, even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
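The base-36 IDs described above can be decoded to plain integers with Python's built-in `int`; a minimal round-trip sketch (the sample ID below is made up for illustration):

```python
def decode_reddit_id(id36: str) -> int:
    """Decode a base-36 Reddit ID (e.g. the 'id' field) into an integer."""
    return int(id36, 36)

def encode_reddit_id(n: int) -> str:
    """Encode an integer back into the lowercase base-36 form Reddit uses."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 36)
        out.append(digits[r])
    return "".join(reversed(out))

# Round-trip on a made-up comment ID:
sample = "h3k9xz"
assert encode_reddit_id(decode_reddit_id(sample)) == sample
```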
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] |
Hack90/ncbi_genbank_part_76 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 31427190646
num_examples: 832959
download_size: 13887863083
dataset_size: 31427190646
---
# Dataset Card for "ncbi_genbank_part_76"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kenhktsui/openwebtext_quality_score_v1 | ---
dataset_info:
features:
- name: text
dtype: string
- name: quality_score_v1
dtype: float64
splits:
- name: train
num_bytes: 39833601840
num_examples: 8013769
download_size: 24272267533
dataset_size: 39833601840
license: cc0-1.0
task_categories:
- text-generation
language:
- en
---
# Dataset Card for "openwebtext_quality_score_v1"
This dataset adds quality score v1 to [Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sefgsefg/ID_test | ---
language:
- zh
--- |
Simonk97/DIEUNHI | ---
license: openrail
---
|
IceMasterT/BTC-Data-1Hour-2018-2023 | ---
license: mit
task_categories:
- token-classification
- text-classification
language:
- en
tags:
- finance
pretty_name: Bitcoin Data 1 Hour 2018-2023
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
zolak/twitter_dataset_79_1713054821 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2656620
num_examples: 6568
download_size: 1319377
dataset_size: 2656620
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tasksource/mmlu | ---
license: apache-2.0
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
- closed-domain-qa
language:
- en
tags:
- multi-task
- multitask
- mmlu
- hendrycks_test
pretty_name: mmlu
---
MMLU (`hendrycks_test` on Hugging Face) without the auxiliary train split. It is much lighter (7 MB vs. 162 MB) and faster than the original implementation, in which the auxiliary train split is loaded (and duplicated!) by default for every config, making it quite heavy.
We use this version in [tasksource](https://huggingface.co/tasksource).
Reference to original dataset:
Measuring Massive Multitask Language Understanding - https://github.com/hendrycks/test
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
``` |
CyberHarem/harukawa_fuki_lycorisrecoil | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Harukawa Fuki
This is the dataset of Harukawa Fuki, containing 64 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 64 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 127 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 151 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 64 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 64 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 64 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 127 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 127 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 105 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 151 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 151 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
open-llm-leaderboard/details_FelixChao__Scorpio-7B | ---
pretty_name: Evaluation run of FelixChao/Scorpio-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [FelixChao/Scorpio-7B](https://huggingface.co/FelixChao/Scorpio-7B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_FelixChao__Scorpio-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-14T06:06:41.057848](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__Scorpio-7B/blob/main/results_2024-02-14T06-06-41.057848.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.652720861598439,\n\
\ \"acc_stderr\": 0.03206963180616428,\n \"acc_norm\": 0.6521299642533669,\n\
\ \"acc_norm_stderr\": 0.03274061096321565,\n \"mc1\": 0.5593635250917993,\n\
\ \"mc1_stderr\": 0.017379697555437446,\n \"mc2\": 0.7250768130559095,\n\
\ \"mc2_stderr\": 0.014525579582282969\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6868600682593856,\n \"acc_stderr\": 0.013552671543623494,\n\
\ \"acc_norm\": 0.7133105802047781,\n \"acc_norm_stderr\": 0.013214986329274783\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6979685321649074,\n\
\ \"acc_stderr\": 0.004582004744713377,\n \"acc_norm\": 0.8849830711013742,\n\
\ \"acc_norm_stderr\": 0.0031839033919416975\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.027943219989337128,\n\
\ \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.027943219989337128\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n\
\ \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n\
\ \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n\
\ \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n\
\ \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42857142857142855,\n \"acc_stderr\": 0.02548718714785938,\n \"\
acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.02548718714785938\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n\
\ \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n\
\ \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7806451612903226,\n \"acc_stderr\": 0.023540799358723295,\n \"\
acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.023540799358723295\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"\
acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.03158415324047711,\n\
\ \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.03158415324047711\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267042,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267042\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768766,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768766\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6794871794871795,\n \"acc_stderr\": 0.02366129639396428,\n \
\ \"acc_norm\": 0.6794871794871795,\n \"acc_norm_stderr\": 0.02366129639396428\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028593,\n \
\ \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028593\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.39072847682119205,\n \"acc_stderr\": 0.039837983066598075,\n \"\
acc_norm\": 0.39072847682119205,\n \"acc_norm_stderr\": 0.039837983066598075\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8293577981651377,\n \"acc_stderr\": 0.01612927102509986,\n \"\
acc_norm\": 0.8293577981651377,\n \"acc_norm_stderr\": 0.01612927102509986\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49074074074074076,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"\
acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8016877637130801,\n \"acc_stderr\": 0.025955020841621115,\n \
\ \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.025955020841621115\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n\
\ \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n\
\ \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8275862068965517,\n\
\ \"acc_stderr\": 0.013507943909371802,\n \"acc_norm\": 0.8275862068965517,\n\
\ \"acc_norm_stderr\": 0.013507943909371802\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.02370309952525818,\n\
\ \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.02370309952525818\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4324022346368715,\n\
\ \"acc_stderr\": 0.01656897123354861,\n \"acc_norm\": 0.4324022346368715,\n\
\ \"acc_norm_stderr\": 0.01656897123354861\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818733,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818733\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n\
\ \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n\
\ \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135107,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135107\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \
\ \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46088657105606257,\n\
\ \"acc_stderr\": 0.012731102790504512,\n \"acc_norm\": 0.46088657105606257,\n\
\ \"acc_norm_stderr\": 0.012731102790504512\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.02841820861940676,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.02841820861940676\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6813725490196079,\n \"acc_stderr\": 0.01885008469646872,\n \
\ \"acc_norm\": 0.6813725490196079,\n \"acc_norm_stderr\": 0.01885008469646872\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128445,\n\
\ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128445\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.025538433368578334,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.025538433368578334\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n\
\ \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n\
\ \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5593635250917993,\n\
\ \"mc1_stderr\": 0.017379697555437446,\n \"mc2\": 0.7250768130559095,\n\
\ \"mc2_stderr\": 0.014525579582282969\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.835043409629045,\n \"acc_stderr\": 0.010430917468237431\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7187263078089462,\n \
\ \"acc_stderr\": 0.012384789310940244\n }\n}\n```"
repo_url: https://huggingface.co/FelixChao/Scorpio-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|arc:challenge|25_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|gsm8k|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hellaswag|10_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-14T06-06-41.057848.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-14T06-06-41.057848.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- '**/details_harness|winogrande|5_2024-02-14T06-06-41.057848.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-14T06-06-41.057848.parquet'
- config_name: results
data_files:
- split: 2024_02_14T06_06_41.057848
path:
- results_2024-02-14T06-06-41.057848.parquet
- split: latest
path:
- results_2024-02-14T06-06-41.057848.parquet
---
# Dataset Card for Evaluation run of FelixChao/Scorpio-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [FelixChao/Scorpio-7B](https://huggingface.co/FelixChao/Scorpio-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_FelixChao__Scorpio-7B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-14T06:06:41.057848](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__Scorpio-7B/blob/main/results_2024-02-14T06-06-41.057848.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.652720861598439,
"acc_stderr": 0.03206963180616428,
"acc_norm": 0.6521299642533669,
"acc_norm_stderr": 0.03274061096321565,
"mc1": 0.5593635250917993,
"mc1_stderr": 0.017379697555437446,
"mc2": 0.7250768130559095,
"mc2_stderr": 0.014525579582282969
},
"harness|arc:challenge|25": {
"acc": 0.6868600682593856,
"acc_stderr": 0.013552671543623494,
"acc_norm": 0.7133105802047781,
"acc_norm_stderr": 0.013214986329274783
},
"harness|hellaswag|10": {
"acc": 0.6979685321649074,
"acc_stderr": 0.004582004744713377,
"acc_norm": 0.8849830711013742,
"acc_norm_stderr": 0.0031839033919416975
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.027943219989337128,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.027943219989337128
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.02548718714785938,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.02548718714785938
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723295,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723295
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.03158415324047711,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.03158415324047711
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267042,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267042
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768766,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768766
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6794871794871795,
"acc_stderr": 0.02366129639396428,
"acc_norm": 0.6794871794871795,
"acc_norm_stderr": 0.02366129639396428
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028593,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.39072847682119205,
"acc_stderr": 0.039837983066598075,
"acc_norm": 0.39072847682119205,
"acc_norm_stderr": 0.039837983066598075
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8293577981651377,
"acc_stderr": 0.01612927102509986,
"acc_norm": 0.8293577981651377,
"acc_norm_stderr": 0.01612927102509986
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.025955020841621115,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.025955020841621115
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973646,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973646
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8275862068965517,
"acc_stderr": 0.013507943909371802,
"acc_norm": 0.8275862068965517,
"acc_norm_stderr": 0.013507943909371802
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.02370309952525818,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.02370309952525818
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4324022346368715,
"acc_stderr": 0.01656897123354861,
"acc_norm": 0.4324022346368715,
"acc_norm_stderr": 0.01656897123354861
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.024477222856135107,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.024477222856135107
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46088657105606257,
"acc_stderr": 0.012731102790504512,
"acc_norm": 0.46088657105606257,
"acc_norm_stderr": 0.012731102790504512
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.02841820861940676,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.02841820861940676
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6813725490196079,
"acc_stderr": 0.01885008469646872,
"acc_norm": 0.6813725490196079,
"acc_norm_stderr": 0.01885008469646872
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128445,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128445
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578334,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578334
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5593635250917993,
"mc1_stderr": 0.017379697555437446,
"mc2": 0.7250768130559095,
"mc2_stderr": 0.014525579582282969
},
"harness|winogrande|5": {
"acc": 0.835043409629045,
"acc_stderr": 0.010430917468237431
},
"harness|gsm8k|5": {
"acc": 0.7187263078089462,
"acc_stderr": 0.012384789310940244
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ChirathD/dpt-caption-dataset-version-2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3150308.0
num_examples: 5
download_size: 3152834
dataset_size: 3150308.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
avsolatorio/mteb-amazon_counterfactual-avs_triplets | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
- name: label_text
dtype: string
- name: idx
dtype: int64
- name: query_idx
dtype: int64
- name: positive_idx
dtype: int64
- name: negative_idx
dtype: int64
splits:
- name: train
num_bytes: 677319
num_examples: 4018
download_size: 386346
dataset_size: 677319
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# MTEB Amazon Counterfactual Triplets Dataset
This dataset was used in the paper GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning. Refer to https://arxiv.org/abs/2402.16829 for details.
The code for generating the data is available at https://github.com/avsolatorio/GISTEmbed/blob/main/scripts/create_classification_dataset.py.
## Citation
```
@article{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
journal={arXiv preprint arXiv:2402.16829},
year={2024},
  url={https://arxiv.org/abs/2402.16829},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
Victin/Poppy_PtBr | ---
license: unknown
---
|
xiaoqia/p | ---
license: afl-3.0
---
|
rogeriosss/jv14 | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_aboros98__merlin1.3 | ---
pretty_name: Evaluation run of aboros98/merlin1.3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [aboros98/merlin1.3](https://huggingface.co/aboros98/merlin1.3) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aboros98__merlin1.3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-15T01:26:49.217337](https://huggingface.co/datasets/open-llm-leaderboard/details_aboros98__merlin1.3/blob/main/results_2024-03-15T01-26-49.217337.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.578062943034698,\n\
\ \"acc_stderr\": 0.03374196476906128,\n \"acc_norm\": 0.5799459687976225,\n\
\ \"acc_norm_stderr\": 0.03443622361311283,\n \"mc1\": 0.31456548347613217,\n\
\ \"mc1_stderr\": 0.01625524199317918,\n \"mc2\": 0.4676638816154199,\n\
\ \"mc2_stderr\": 0.015177166274149752\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5802047781569966,\n \"acc_stderr\": 0.014422181226303026,\n\
\ \"acc_norm\": 0.5998293515358362,\n \"acc_norm_stderr\": 0.014317197787809184\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5683130850428202,\n\
\ \"acc_stderr\": 0.004942990623131118,\n \"acc_norm\": 0.7519418442541327,\n\
\ \"acc_norm_stderr\": 0.004310031044459142\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4888888888888889,\n\
\ \"acc_stderr\": 0.04318275491977976,\n \"acc_norm\": 0.4888888888888889,\n\
\ \"acc_norm_stderr\": 0.04318275491977976\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5789473684210527,\n \"acc_stderr\": 0.040179012759817494,\n\
\ \"acc_norm\": 0.5789473684210527,\n \"acc_norm_stderr\": 0.040179012759817494\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5886792452830188,\n \"acc_stderr\": 0.030285009259009794,\n\
\ \"acc_norm\": 0.5886792452830188,\n \"acc_norm_stderr\": 0.030285009259009794\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6458333333333334,\n\
\ \"acc_stderr\": 0.039994111357535424,\n \"acc_norm\": 0.6458333333333334,\n\
\ \"acc_norm_stderr\": 0.039994111357535424\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.44,\n\
\ \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5838150289017341,\n\
\ \"acc_stderr\": 0.03758517775404947,\n \"acc_norm\": 0.5838150289017341,\n\
\ \"acc_norm_stderr\": 0.03758517775404947\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.04724007352383888,\n\
\ \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.04724007352383888\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.03267862331014063,\n\
\ \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.03267862331014063\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n\
\ \"acc_stderr\": 0.045144961328736334,\n \"acc_norm\": 0.35964912280701755,\n\
\ \"acc_norm_stderr\": 0.045144961328736334\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4523809523809524,\n \"acc_stderr\": 0.025634258115554955,\n \"\
acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.025634258115554955\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n\
\ \"acc_stderr\": 0.04403438954768177,\n \"acc_norm\": 0.4126984126984127,\n\
\ \"acc_norm_stderr\": 0.04403438954768177\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7290322580645161,\n\
\ \"acc_stderr\": 0.025284416114900156,\n \"acc_norm\": 0.7290322580645161,\n\
\ \"acc_norm_stderr\": 0.025284416114900156\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.035158955511656986,\n\
\ \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\"\
: 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.03756335775187898,\n\
\ \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.03756335775187898\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198906,\n \"\
acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198906\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.772020725388601,\n \"acc_stderr\": 0.030276909945178267,\n\
\ \"acc_norm\": 0.772020725388601,\n \"acc_norm_stderr\": 0.030276909945178267\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5717948717948718,\n \"acc_stderr\": 0.025088301454694834,\n\
\ \"acc_norm\": 0.5717948717948718,\n \"acc_norm_stderr\": 0.025088301454694834\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3148148148148148,\n \"acc_stderr\": 0.028317533496066468,\n \
\ \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.028317533496066468\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566548,\n\
\ \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566548\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3973509933774834,\n \"acc_stderr\": 0.0399552400768168,\n \"acc_norm\"\
: 0.3973509933774834,\n \"acc_norm_stderr\": 0.0399552400768168\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7743119266055046,\n\
\ \"acc_stderr\": 0.017923087667803064,\n \"acc_norm\": 0.7743119266055046,\n\
\ \"acc_norm_stderr\": 0.017923087667803064\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.49537037037037035,\n \"acc_stderr\": 0.03409825519163572,\n\
\ \"acc_norm\": 0.49537037037037035,\n \"acc_norm_stderr\": 0.03409825519163572\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6764705882352942,\n \"acc_stderr\": 0.03283472056108561,\n \"\
acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.03283472056108561\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \
\ \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6188340807174888,\n\
\ \"acc_stderr\": 0.03259625118416827,\n \"acc_norm\": 0.6188340807174888,\n\
\ \"acc_norm_stderr\": 0.03259625118416827\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6564885496183206,\n \"acc_stderr\": 0.041649760719448786,\n\
\ \"acc_norm\": 0.6564885496183206,\n \"acc_norm_stderr\": 0.041649760719448786\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7520661157024794,\n \"acc_stderr\": 0.039418975265163025,\n \"\
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.039418975265163025\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664743,\n\
\ \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664743\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8162393162393162,\n\
\ \"acc_stderr\": 0.02537213967172293,\n \"acc_norm\": 0.8162393162393162,\n\
\ \"acc_norm_stderr\": 0.02537213967172293\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.59,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6781609195402298,\n\
\ \"acc_stderr\": 0.016706381415057897,\n \"acc_norm\": 0.6781609195402298,\n\
\ \"acc_norm_stderr\": 0.016706381415057897\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6560693641618497,\n \"acc_stderr\": 0.02557412378654668,\n\
\ \"acc_norm\": 0.6560693641618497,\n \"acc_norm_stderr\": 0.02557412378654668\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2748603351955307,\n\
\ \"acc_stderr\": 0.014931316703220503,\n \"acc_norm\": 0.2748603351955307,\n\
\ \"acc_norm_stderr\": 0.014931316703220503\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.027914055510468,\n\
\ \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.027914055510468\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6237942122186495,\n\
\ \"acc_stderr\": 0.02751392568354943,\n \"acc_norm\": 0.6237942122186495,\n\
\ \"acc_norm_stderr\": 0.02751392568354943\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6234567901234568,\n \"acc_stderr\": 0.026959344518747784,\n\
\ \"acc_norm\": 0.6234567901234568,\n \"acc_norm_stderr\": 0.026959344518747784\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.475177304964539,\n \"acc_stderr\": 0.02979071924382972,\n \
\ \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.02979071924382972\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4230769230769231,\n\
\ \"acc_stderr\": 0.012618204066588387,\n \"acc_norm\": 0.4230769230769231,\n\
\ \"acc_norm_stderr\": 0.012618204066588387\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.46691176470588236,\n \"acc_stderr\": 0.030306257722468317,\n\
\ \"acc_norm\": 0.46691176470588236,\n \"acc_norm_stderr\": 0.030306257722468317\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5424836601307189,\n \"acc_stderr\": 0.020154685712590888,\n \
\ \"acc_norm\": 0.5424836601307189,\n \"acc_norm_stderr\": 0.020154685712590888\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910508,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910508\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.028920583220675592,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.028920583220675592\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8009950248756219,\n\
\ \"acc_stderr\": 0.028231365092758406,\n \"acc_norm\": 0.8009950248756219,\n\
\ \"acc_norm_stderr\": 0.028231365092758406\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4759036144578313,\n\
\ \"acc_stderr\": 0.038879718495972646,\n \"acc_norm\": 0.4759036144578313,\n\
\ \"acc_norm_stderr\": 0.038879718495972646\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.03565079670708311,\n\
\ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.03565079670708311\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31456548347613217,\n\
\ \"mc1_stderr\": 0.01625524199317918,\n \"mc2\": 0.4676638816154199,\n\
\ \"mc2_stderr\": 0.015177166274149752\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7592738752959748,\n \"acc_stderr\": 0.012015559212224176\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4890068233510235,\n \
\ \"acc_stderr\": 0.013769155509690907\n }\n}\n```"
repo_url: https://huggingface.co/aboros98/merlin1.3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|arc:challenge|25_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|gsm8k|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hellaswag|10_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-15T01-26-49.217337.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-15T01-26-49.217337.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- '**/details_harness|winogrande|5_2024-03-15T01-26-49.217337.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-15T01-26-49.217337.parquet'
- config_name: results
data_files:
- split: 2024_03_15T01_26_49.217337
path:
- results_2024-03-15T01-26-49.217337.parquet
- split: latest
path:
- results_2024-03-15T01-26-49.217337.parquet
---
# Dataset Card for Evaluation run of aboros98/merlin1.3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [aboros98/merlin1.3](https://huggingface.co/aboros98/merlin1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_aboros98__merlin1.3",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-15T01:26:49.217337](https://huggingface.co/datasets/open-llm-leaderboard/details_aboros98__merlin1.3/blob/main/results_2024-03-15T01-26-49.217337.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find the results for each eval in its timestamped and "latest" splits):
```python
{
"all": {
"acc": 0.578062943034698,
"acc_stderr": 0.03374196476906128,
"acc_norm": 0.5799459687976225,
"acc_norm_stderr": 0.03443622361311283,
"mc1": 0.31456548347613217,
"mc1_stderr": 0.01625524199317918,
"mc2": 0.4676638816154199,
"mc2_stderr": 0.015177166274149752
},
"harness|arc:challenge|25": {
"acc": 0.5802047781569966,
"acc_stderr": 0.014422181226303026,
"acc_norm": 0.5998293515358362,
"acc_norm_stderr": 0.014317197787809184
},
"harness|hellaswag|10": {
"acc": 0.5683130850428202,
"acc_stderr": 0.004942990623131118,
"acc_norm": 0.7519418442541327,
"acc_norm_stderr": 0.004310031044459142
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.24,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4888888888888889,
"acc_stderr": 0.04318275491977976,
"acc_norm": 0.4888888888888889,
"acc_norm_stderr": 0.04318275491977976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5789473684210527,
"acc_stderr": 0.040179012759817494,
"acc_norm": 0.5789473684210527,
"acc_norm_stderr": 0.040179012759817494
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5886792452830188,
"acc_stderr": 0.030285009259009794,
"acc_norm": 0.5886792452830188,
"acc_norm_stderr": 0.030285009259009794
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6458333333333334,
"acc_stderr": 0.039994111357535424,
"acc_norm": 0.6458333333333334,
"acc_norm_stderr": 0.039994111357535424
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5838150289017341,
"acc_stderr": 0.03758517775404947,
"acc_norm": 0.5838150289017341,
"acc_norm_stderr": 0.03758517775404947
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.04724007352383888,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.04724007352383888
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.03267862331014063,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.03267862331014063
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.35964912280701755,
"acc_stderr": 0.045144961328736334,
"acc_norm": 0.35964912280701755,
"acc_norm_stderr": 0.045144961328736334
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.025634258115554955,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.025634258115554955
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.04403438954768177,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.04403438954768177
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7290322580645161,
"acc_stderr": 0.025284416114900156,
"acc_norm": 0.7290322580645161,
"acc_norm_stderr": 0.025284416114900156
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4827586206896552,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.4827586206896552,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.03756335775187898,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.03756335775187898
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.030313710538198906,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.030313710538198906
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.772020725388601,
"acc_stderr": 0.030276909945178267,
"acc_norm": 0.772020725388601,
"acc_norm_stderr": 0.030276909945178267
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5717948717948718,
"acc_stderr": 0.025088301454694834,
"acc_norm": 0.5717948717948718,
"acc_norm_stderr": 0.025088301454694834
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.028317533496066468,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.028317533496066468
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6512605042016807,
"acc_stderr": 0.030956636328566548,
"acc_norm": 0.6512605042016807,
"acc_norm_stderr": 0.030956636328566548
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3973509933774834,
"acc_stderr": 0.0399552400768168,
"acc_norm": 0.3973509933774834,
"acc_norm_stderr": 0.0399552400768168
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7743119266055046,
"acc_stderr": 0.017923087667803064,
"acc_norm": 0.7743119266055046,
"acc_norm_stderr": 0.017923087667803064
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49537037037037035,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.49537037037037035,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.03283472056108561,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.03283472056108561
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7510548523206751,
"acc_stderr": 0.028146970599422644,
"acc_norm": 0.7510548523206751,
"acc_norm_stderr": 0.028146970599422644
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6188340807174888,
"acc_stderr": 0.03259625118416827,
"acc_norm": 0.6188340807174888,
"acc_norm_stderr": 0.03259625118416827
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6564885496183206,
"acc_stderr": 0.041649760719448786,
"acc_norm": 0.6564885496183206,
"acc_norm_stderr": 0.041649760719448786
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.039418975265163025,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.039418975265163025
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664743,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664743
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8162393162393162,
"acc_stderr": 0.02537213967172293,
"acc_norm": 0.8162393162393162,
"acc_norm_stderr": 0.02537213967172293
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6781609195402298,
"acc_stderr": 0.016706381415057897,
"acc_norm": 0.6781609195402298,
"acc_norm_stderr": 0.016706381415057897
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6560693641618497,
"acc_stderr": 0.02557412378654668,
"acc_norm": 0.6560693641618497,
"acc_norm_stderr": 0.02557412378654668
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2748603351955307,
"acc_stderr": 0.014931316703220503,
"acc_norm": 0.2748603351955307,
"acc_norm_stderr": 0.014931316703220503
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.027914055510468,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.027914055510468
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6237942122186495,
"acc_stderr": 0.02751392568354943,
"acc_norm": 0.6237942122186495,
"acc_norm_stderr": 0.02751392568354943
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6234567901234568,
"acc_stderr": 0.026959344518747784,
"acc_norm": 0.6234567901234568,
"acc_norm_stderr": 0.026959344518747784
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.02979071924382972,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.02979071924382972
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4230769230769231,
"acc_stderr": 0.012618204066588387,
"acc_norm": 0.4230769230769231,
"acc_norm_stderr": 0.012618204066588387
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.46691176470588236,
"acc_stderr": 0.030306257722468317,
"acc_norm": 0.46691176470588236,
"acc_norm_stderr": 0.030306257722468317
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5424836601307189,
"acc_stderr": 0.020154685712590888,
"acc_norm": 0.5424836601307189,
"acc_norm_stderr": 0.020154685712590888
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910508,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910508
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.028920583220675592,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.028920583220675592
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8009950248756219,
"acc_stderr": 0.028231365092758406,
"acc_norm": 0.8009950248756219,
"acc_norm_stderr": 0.028231365092758406
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4759036144578313,
"acc_stderr": 0.038879718495972646,
"acc_norm": 0.4759036144578313,
"acc_norm_stderr": 0.038879718495972646
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.03565079670708311,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.03565079670708311
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31456548347613217,
"mc1_stderr": 0.01625524199317918,
"mc2": 0.4676638816154199,
"mc2_stderr": 0.015177166274149752
},
"harness|winogrande|5": {
"acc": 0.7592738752959748,
"acc_stderr": 0.012015559212224176
},
"harness|gsm8k|5": {
"acc": 0.4890068233510235,
"acc_stderr": 0.013769155509690907
}
}
```
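Per-task metrics like those above are plain nested dicts once loaded, so they can be post-processed with ordinary Python. As an illustrative sketch (the dict below is a small hand-copied excerpt of the results shown above, not a live download), here is how the MMLU ("hendrycksTest") accuracies might be filtered out and averaged:

```python
# Hand-copied excerpt of the results JSON above; in practice the full dict
# would come from the "results" config of this dataset.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.24},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.4888888888888889},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.5789473684210527},
    "harness|winogrande|5": {"acc": 0.7592738752959748},
}

# Keep only the MMLU sub-tasks, identified by their task-name prefix.
mmlu = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest-")
}
mmlu_avg = sum(mmlu.values()) / len(mmlu)
print(f"{len(mmlu)} MMLU tasks, mean acc = {mmlu_avg:.4f}")
```

The same pattern works for `acc_norm`, `mc1`/`mc2`, or any other metric key present in the per-task dicts.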
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
sorenmulli/angry-tweets-mini | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 177993
num_examples: 1024
- name: validation
num_bytes: 46172
num_examples: 256
- name: test
num_bytes: 355926
num_examples: 2048
download_size: 385803
dataset_size: 580091
---
# Dataset Card for "angry-tweets-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jannikseus/restaurant-reviews | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: labels
sequence:
class_label:
names:
'0': B
'1': I
'2': O
- name: sentence
sequence: string
splits:
- name: train
num_bytes: 455495
num_examples: 1850
- name: validation
num_bytes: 38845
num_examples: 150
- name: test
num_bytes: 170202
num_examples: 676
download_size: 150506
dataset_size: 664542
---
# Dataset Card for "restaurant-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Andalf/test-books | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 1950648.0
num_examples: 238
- name: test
num_bytes: 221292.0
num_examples: 27
download_size: 1101698
dataset_size: 2171940.0
---
# Dataset Card for "test-books"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Metahunter/ddpm-butterflies-128 | ---
license: cc-by-nc-sa-4.0
---
|
BangorAI/exl2-wiki-calibration-set-cy | ---
license: cc-by-sa-3.0
---
### Exl2 Calibration Data
A selection from the Welsh [Cofnod y Cynulliad](https://huggingface.co/datasets/techiaith/cofnodycynulliad_en-cy) (the Record of Proceedings of the Welsh Assembly) for use in the ExLlama 2 calibration step when converting models to the exl2 format.
markhanslip/markhanslip_phd_saxophone_data | ---
pretty_name: "Mark Hanslip PhD Saxophone Data"
tags:
- tenor saxophone
- improvisation
- audio
license: "cc-by-nc-nd-4.0"
task_categories:
- text-to-audio
- audio-classification
---
# A collection of recordings of private practice sessions on tenor saxophone, recorded in Leeds, UK in 2019 as part of my PhD project.
## These recordings were initially created without a specific use-case in mind. They have since been used for the following tasks:
- Audio Classification ("Timbral" and "Melodic Improv")
- Generative Modelling with SampleRNN ("Tone Rows", "Scales and Arpeggios" and "Melodic Improv" worked best)
- Timbre Transfer with RAVE (iil trained a RAVE model on the "Timbral" set)
- Symbolic Modelling with modified Char-RNN (a composite of "Tone Rows", "Scales and Arpeggios" and "Melodic Improv" worked best) |
vargr/yt_main_image_dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: videoId
dtype: string
- name: imagePath
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 16042609970.48
num_examples: 114680
download_size: 949694879
dataset_size: 16042609970.48
---
# Dataset Card for "yt_main_image_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Bazsalanszky/alpaca-gpt4-hun | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
task_categories:
- text-generation
- conversational
- question-answering
language:
- hu
size_categories:
- 10K<n<100K
license: cc-by-nc-4.0
tags:
- gpt4
- alpaca
- instruction-finetuning
---
# Dataset Card for "alpaca-gpt4"
This dataset contains *Hungarian* (translated from English) instruction-following data generated by GPT-4 using Alpaca prompts, for fine-tuning LLMs. Original dataset: https://huggingface.co/datasets/vicgalle/alpaca-gpt4
The dataset was originally shared in this repository: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. This is just a wrapper for compatibility with Hugging Face's datasets library.
## Dataset Description
- **Homepage:** https://instruction-tuning-with-gpt-4.github.io
- **Repository:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
## Dataset structure
It contains 52K instruction-following examples generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
- `text`: `str`, all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
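The `text` field described above can be reconstructed from the other three fields. The sketch below mirrors the Alpaca prompt visible in the card's own examples; the no-input variant and the function itself are an illustrative reconstruction, not the actual script used to build the dataset:

```python
def build_text(instruction: str, input_: str, output: str) -> str:
    """Recreate the Alpaca-style `text` field from the other three fields.

    The with-input template matches the example instances in this card; the
    no-input template follows the standard Alpaca format (an assumption).
    """
    if input_:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_}\n\n"
            f"### Response:\n{output}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{output}"
    )

text = build_text("Identify the odd one out.",
                  "Twitter, Instagram, Telegram",
                  "Telegram")
```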
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and greater length. Here is an example:
#### Example from Alpaca-GPT4:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'The odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nThe odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.'}
```
#### Same example from original Alpaca:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'Telegram',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nTelegram'}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). |
ohsumed | ---
pretty_name: Ohsumed
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: null
dataset_info:
features:
- name: seq_id
dtype: int64
- name: medline_ui
dtype: int64
- name: mesh_terms
dtype: string
- name: title
dtype: string
- name: publication_type
dtype: string
- name: abstract
dtype: string
- name: author
dtype: string
- name: source
dtype: string
config_name: ohsumed
splits:
- name: train
num_bytes: 60117860
num_examples: 54709
- name: test
num_bytes: 338533901
num_examples: 293855
download_size: 139454017
dataset_size: 398651761
---
# Dataset Card for ohsumed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://davis.wpi.edu/xmdv/datasets/ohsumed.html
- **Repository:** https://trec.nist.gov/data/filtering/t9.filtering.tar.gz
- **Paper:** https://link.springer.com/chapter/10.1007/978-1-4471-2099-5_20
- **Leaderboard:**
- **Point of Contact:** [William Hersh](mailto:hersh@OHSU.EDU) [Aakash Gupta](mailto:aakashg80@gmail.com)
### Dataset Summary
The OHSUMED test collection is a set of 348,566 references from
MEDLINE, the on-line medical information database, consisting of
titles and/or abstracts from 270 medical journals over a five-year
period (1987-1991). The available fields are title, abstract, MeSH
indexing terms, author, source, and publication type. The National
Library of Medicine has agreed to make the MEDLINE references in the
test database available for experimentation, restricted to the
following conditions:
1. The data will not be used in any non-experimental clinical,
library, or other setting.
2. Any human users of the data will explicitly be told that the data
is incomplete and out-of-date.
Please check this [readme](https://trec.nist.gov/data/filtering/README.t9.filtering) for more details
### Supported Tasks and Leaderboards
[Text Classification](https://paperswithcode.com/sota/text-classification-on-ohsumed)
### Languages
The text is primarily in English. The BCP 47 code is `en`
## Dataset Structure
### Data Instances
```
{'seq_id': 7770,
'medline_ui': 87120420,
'mesh_terms': 'Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH; Carotid Arteries; Case Report; Female; Human; Jugular Veins; Male; Methods; Middle Age; Neck/*BS; Vertebral Artery.',
'title': 'Arteriovenous fistulas of the large vessels of the neck: nonsurgical percutaneous occlusion.',
'publication_type': 'JOURNAL ARTICLE.',
'abstract': 'We describe the nonsurgical treatment of arteriovenous fistulas of the large vessels in the neck using three different means of endovascular occlusion of these large lesions, which are surgically difficult to approach and treat.',
'author': 'Vitek JJ; Keller FS.',
'source': 'South Med J 8705; 80(2):196-200'}
```
### Data Fields
Here are the field definitions:
- seq_id: sequential identifier
(important note: documents should be processed in this order)
- medline_ui: MEDLINE identifier (UI)
(<DOCNO> used for relevance judgements)
- mesh_terms: Human-assigned MeSH terms (MH)
- title: Title (TI)
- publication_type : Publication type (PT)
- abstract: Abstract (AB)
- author: Author (AU)
- source: Source (SO)
Note: some abstracts are truncated at 250 words and some references
have no abstracts at all (titles only). We do not have access to the
full text of the documents.
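The `mesh_terms` field packs multiple MeSH headings (with `*` major-topic markers and `/XX` subheading qualifiers) into one semicolon-delimited string. A minimal sketch of splitting it into individual terms, with the delimiter and trailing period inferred from the example instance above rather than from any official spec:

```python
def split_mesh_terms(mesh_terms: str) -> list[str]:
    """Split the semicolon-delimited `mesh_terms` field into clean terms.

    Markers like '*' (major topic) and '/CO', '/TH' (subheading qualifiers)
    are kept as-is; the trailing period on the field is dropped.
    """
    return [t.strip() for t in mesh_terms.rstrip(".").split(";") if t.strip()]

# Field value taken from the data instance shown earlier in this card.
terms = split_mesh_terms(
    "Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH; Carotid Arteries; "
    "Case Report; Female; Human; Jugular Veins; Male; Methods; Middle Age; "
    "Neck/*BS; Vertebral Artery."
)
```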
### Data Splits
The dataset is split into train and test: the training split contains abstracts from 1987, while the test split contains abstracts from 1988-91.
Total number of files:
Train: 54710
Test: 348567
## Dataset Creation
### Curation Rationale
The OHSUMED document collection was obtained by William Hersh
(hersh@OHSU.EDU) and colleagues for the experiments described in the
papers below. [Check citation](#citation-information)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The test collection was built as part of a study assessing the use of
MEDLINE by physicians in a clinical setting (Hersh and Hickam, above).
Novice physicians using MEDLINE generated 106 queries. Only a subset
of these queries were used in the TREC-9 Filtering Track. Before
they searched, they were asked to provide a statement of information
about their patient as well as their information need.
The data was collected by William Hersh & colleagues
### Annotations
#### Annotation process
The existing OHSUMED topics describe actual information needs, but the
relevance judgements probably do not have the same coverage provided
by the TREC pooling process. The MeSH terms do not directly represent
information needs, rather they are controlled indexing terms. However,
the assessment should be more or less complete and there are a lot of
them, so this provides an unusual opportunity to work with a very
large topic sample.
The topic statements are provided in the standard TREC format
#### Who are the annotators?
Each query was replicated by four searchers, two physicians
experienced in searching and two medical librarians. The results were
assessed for relevance by a different group of physicians, using a
three point scale: definitely, possibly, or not relevant. The list of
documents explicitly judged to be not relevant is not provided here.
Over 10% of the query-document pairs were judged in duplicate to
assess inter-observer reliability. For evaluation, all documents
judged here as either possibly or definitely relevant were
considered relevant. TREC-9 systems were allowed to distinguish
between these two categories during the learning process if desired.
### Personal and Sensitive Information
No PII data is present in the train, test or query files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[Aakash Gupta](mailto:aakashg80@gmail.com)
*Th!nkEvolve Consulting* and Researcher at CoronaWhy
### Licensing Information
CC BY-NC 4.0
### Citation Information
Hersh WR, Buckley C, Leone TJ, Hickam DH, OHSUMED: An interactive
retrieval evaluation and new large test collection for research,
Proceedings of the 17th Annual ACM SIGIR Conference, 1994, 192-201.
Hersh WR, Hickam DH, Use of a multi-application computer workstation
in a clinical setting, Bulletin of the Medical Library Association,
1994, 82: 382-389.
### Contributions
Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset. |
TalTechNLP/LongSumEt | ---
annotations_creators:
- machine-generated
language:
- et
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
pretty_name: LongSumEt
dataset_info:
features:
- name: text
dtype: string
- name: long_summary
dtype: string
- name: short_summary
dtype: string
- name: bulletpoints
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 85384791
num_examples: 8656
- name: test
num_bytes: 4819298
num_examples: 481
- name: validation
num_bytes: 4715166
num_examples: 481
download_size: 61950277
dataset_size: 94919255
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for "ERRnews"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/10_3_23_Harm.pdf
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
LongSumEt is an Estonian-language long summarization dataset built from pages filtered from the CulturaX dataset. Each example consists of the page text together with a machine-generated short summary, long summary, and bulletpoints.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Estonian
## Dataset Structure
### Data Instances
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
|train|test|valid|
|:----|:----|:----|
|8656|481|481|
### BibTeX entry and citation info
```bibtex
@article{henryabstractive,
title={Abstractive Summarization of Broadcast News Stories for {Estonian}},
author={Henry, H{\"a}rm and Tanel, Alum{\"a}e},
journal={Baltic J. Modern Computing},
volume={10},
number={3},
pages={511-524},
year={2022}
}
```
|
dreamerdeo/multispider | ---
license: cc
language:
- en
- fr
- de
- vi
- zh
- ja
- es
size_categories:
- 10K<n<100K
---
# MultiSpider: Towards Benchmarking Multilingual Text-to-SQL Semantic Parsing
In this work, we present MultiSpider, a multilingual text-to-SQL dataset which covers seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese).
Find more details on [paper](https://arxiv.org/pdf/2212.13492.pdf) and [code](https://github.com/longxudou/multispider).
Please be aware that the MultiSpider dataset is available in two versions: `with_English_value` and `with_original_value`. Our reported results are based on the `with_English_value` version to circumvent any discrepancies between the entities in the questions and the values in the database.
The `with_original_value` version is a byproduct of the dataset creation process, which may be of interest for more in-depth research on this localized dataset.
`with_English_value`: Führen Sie die Namen der Sängerinnen und Sänger auf, deren Staatsbürgerschaft nicht „France“ lautet.
`with_original_value`: Führen Sie die Namen der Sängerinnen und Sänger auf, deren Staatsbürgerschaft nicht "Frankreich" lautet.
## Results
| Model | EN | DE | ES | FR | JA | ZH | VI |
| ------ | ----: | ----: | ----: | ----: | ----: | ----: | ----: |
| Paper Report | 68.8 | 64.8 | 67.4 | 65.3 | 60.2 | 66.1 | 67.1 |
| Released Model | 69.5 | 65.1 | 68.1 | 66.7 | 60.9 | 67.4 | 69.1 |
## Copyright
Except where stated explicitly otherwise, the copyright to the source code is licensed under the Creative Commons - Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/.
Any commercial use (whether for the benefit of third parties or internally in production) requires an explicit license. |
2A2I/H4_no_robots | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 16496867
num_examples: 9500
- name: test
num_bytes: 887460
num_examples: 500
download_size: 11045465
dataset_size: 17384327
task_categories:
- text-generation
language:
- ar
pretty_name: لا روبوتات
license: cc-by-nc-4.0
---
### Dataset Card for "No Robots" 🙅♂️🤖
#### Summary
"No Robots" is a dataset consisting of 10,000 instructions and demonstrations created by professional annotators; this Arabic version was produced using the Google Cloud Platform Translation API. The dataset can be used for supervised fine-tuning (SFT) to train language models to follow instructions more accurately. The "No Robots" dataset was created based on the dataset described in OpenAI's [InstructGPT](https://huggingface.co/papers/2203.02155) paper, and includes the following categories:
| Category | Count |
|-------------------|------:|
| Creation | 4560 |
| Open Questions | 1240 |
| Brainstorming | 1120 |
| Chatting | 850 |
| Rewriting | 660 |
| Summarization | 420 |
| Programming | 350 |
| Classification | 350 |
| Closed Questions | 260 |
| Extraction | 190 |
#### Languages
This dataset is available in Arabic only. The original version in **English** can be found at [this link](https://huggingface.co/datasets/HuggingFaceH4/no_robots), and the **Turkish** version at [this link](https://huggingface.co/datasets/merve/tr-h4-norobots).
#### Data Fields
The columns are as follows:
* `prompt`: the instruction that the model should follow.
* `prompt_id`: a unique identifier.
* `messages`: a list of dictionaries, each describing a message (key: `content`) and who sent it (key: `role`).
* `category`: the task category (left untranslated).
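For SFT, the `messages` list is typically flattened into a single training string. The sketch below uses an illustrative `<|role|>` template (not tied to any particular model's chat format), and the sample record is invented for demonstration:

```python
sample_messages = [  # illustrative record following the card's schema
    {"role": "user", "content": "Summarize this article."},
    {"role": "assistant", "content": "Here is a short summary."},
]

def to_chat_text(messages):
    """Flatten a `messages` list into one role-tagged training string."""
    return "\n".join(f"<|{m['role']}|> {m['content']}" for m in messages)

print(to_chat_text(sample_messages))
# <|user|> Summarize this article.
# <|assistant|> Here is a short summary.
```

In practice the role tags would come from the tokenizer's chat template rather than being hard-coded.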
#### Splits
| | train | test |
|------------------|------:|-----:|
| No Robots | 9500 | 500 |
#### License
The dataset is available under the [(CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.
#### Citation Information
```
@misc{no_robots,
author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf},
title = {No Robots},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}}
}
```
|
Intuit-GenSRF/AnikaBasu-CyberbullyingDataset-es | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: __index_level_0__
dtype: int64
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 1407598
num_examples: 2955
download_size: 0
dataset_size: 1407598
---
# Dataset Card for "AnikaBasu-CyberbullyingDataset-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Francesco/circuit-voltages | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': circuit-voltages
'1': GND
'2': IDC
'3': IDC_I
'4': R
'5': VDC
'6': VDC_I
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: circuit-voltages
tags:
- rf100
---
# Dataset Card for circuit-voltages
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/circuit-voltages
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
circuit-voltages
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
    'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
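As noted above, `bbox` follows the COCO `[x, y, width, height]` convention; a minimal helper (illustrative, not part of the dataset tooling) converts a box to corner coordinates:

```python
def coco_to_corners(bbox):
    """Convert a COCO-format [x, y, width, height] box to [x1, y1, x2, y2]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First object from the sample instance above
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```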
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/circuit-voltages
### Citation Information
```
@misc{ circuit-voltages,
title = { circuit voltages Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/circuit-voltages } },
url = { https://universe.roboflow.com/object-detection/circuit-voltages },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
Noise144/ticket_classification_IT_EN | ---
task_categories:
- text-classification
- fill-mask
- feature-extraction
- sentence-similarity
language:
- it
- en
tags:
- classification
- text
- ticket
- it_ticket
- italian
- english
pretty_name: data_IT_ticket_EN_IT_classification
size_categories:
- n<1K
---
# Dataset Description
This dataset was used to fine-tune xlm-roberta-base for IT ticket classification; the resulting model is available here:
https://huggingface.co/Noise144/xlm-roberta-base-finetuned-IT-EN-classification-ticket
The dataset contains a total of 1,800 ticket texts.
The 18 classes are grouped under the following macro-categories:
* Database: 3 classes
* Hardware: 6 classes
* Inquiry/help: 2 classes
* Network: 5 classes
* Software: 2 classes

The data was generated using GPT 3.5-turbo and does not contain any sensitive data.
A total of 100 examples were generated for each class, all different from each other and not translated: 50 in Italian and 50 in English.
Nasnoma/dataset | ---
license: mit
---
|
davebulaval/CSMD | ---
license:
- cc-by-4.0
multilinguality:
- monolingual
- aligned
task_categories:
- text-classification
- text2text-generation
source_datasets:
- original
- >-
extended|other-turkcorpus,other-asset,other-questeval,other-simplicity_da,other-simp_da
language:
- en
tags:
- simplification-evaluation
- meaning-evaluation
pretty_name: CSMD
size_categories:
- 1K<n<10K
dataset_info:
- config_name: meaning
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: label
dtype: float64
splits:
- name: train
num_bytes: 251558
num_examples: 853
- name: dev
num_bytes: 27794
num_examples: 95
- name: test
num_bytes: 117686
num_examples: 407
download_size: 397038
dataset_size: 1355
- config_name: meaning_with_data_augmentation
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: label
dtype: float64
splits:
- name: train
num_bytes: 1242540
num_examples: 4267
- name: dev
num_bytes: 134726
num_examples: 475
- name: test
num_bytes: 592052
num_examples: 2033
download_size: 1969318
dataset_size: 6775
- config_name: meaning_holdout_identical
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: label
dtype: float64
splits:
- name: test
num_bytes: 89866
num_examples: 359
download_size: 89866
dataset_size: 359
- config_name: meaning_holdout_unrelated
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: label
dtype: float64
splits:
- name: test
num_bytes: 247835
num_examples: 359
download_size: 247835
dataset_size: 359
config_names:
- meaning
- meaning_with_data_augmentation
- meaning_holdout_identical
- meaning_holdout_unrelated
viewer: true
configs:
- config_name: meaning
data_files:
- split: train
path: "train.tsv"
- split: dev
path: "dev.tsv"
- split: test
path: "test.tsv"
- config_name: meaning_with_data_augmentation
data_files:
- split: train
path: "train_da.tsv"
- split: dev
path: "dev_da.tsv"
- split: test
path: "test_da.tsv"
- config_name: meaning_holdout_identical
data_files:
- split: test
path: "identical.tsv"
- config_name: meaning_holdout_unrelated
data_files:
- split: test
path: "unrelated.tsv"
---
# Dataset Card for "Continuous Scale Meaning Dataset" (CSMD)
CSMD was created for [MeaningBERT: Assessing Meaning Preservation Between Sentences](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full).
It contains 1,355 English text simplification meaning preservation annotations. Meaning preservation measures how well the meaning of the output text corresponds to the meaning of the source ([Saggion, 2017](https://link.springer.com/book/10.1007/978-3-031-02166-4)).
The annotations were taken from the following four datasets:
- [ASSET](https://aclanthology.org/2020.acl-main.424/)
- [QuestEVal](https://arxiv.org/abs/2104.07560),
- [SimpDa_2022](https://aclanthology.org/2023.acl-long.905.pdf) and,
- [Simplicity-DA](https://direct.mit.edu/coli/article/47/4/861/106930/The-Un-Suitability-of-Automatic-Evaluation-Metrics).
It contains a data augmentation subset of 1,355 identical sentence triplets and 1,355 unrelated sentence triplets (See the "Sanity Checks" section (3.3.) in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
It also contains two holdout subsets of 359 identical sentence triplets and 359 unrelated sentence triplets (See the "MeaningBERT" section (3.4.) in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
## Dataset Structure
### Data Instances
- `meaning` configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label).
- `meaning_with_data_augmentation` configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label) along with 1,355 data augmentation triplets (Document, Document, 100) and 1,355 data augmentation triplets (Document, Unrelated Document, 0) (See the sanity checks in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
- `meaning_holdout_identical` configuration: an instance consists of 359 meaning holdout preservation identical triplets (Document, Document, 1) based on the ASSET Simplification dataset.
- `meaning_holdout_unrelated` configuration: an instance consists of 359 meaning holdout preservation unrelated triplets (Document, Unrelated Document, 0) based on the ASSET Simplification dataset.
### About the Data Augmentation
#### Unrelated Sentence
We have changed the data augmentation approach for the unrelated sentences. Instead of generating noisy sentences using an LLM, for each of the 1,355 sentences we sample a sentence from the unlabeled portion of ASSET (not included in the holdout nor in the labeled sentences). We compute the Rouge-1, Rouge-2, Rouge-L and BLEU scores to validate that the sentences are unrelated in terms of vocabulary; namely, each metric score must be below 0.20 (or 20 for BLEU) for all pairs. If a pair achieves a higher value, we select another sentence from ASSET to create a pair and reapply the test until the pair scores below 0.20/20.
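The resampling loop described above can be sketched as follows. This is not code from the MeaningBERT repository: `rouge1_f` is a simplified unigram-overlap stand-in for the ROUGE/BLEU checks actually used, and the function names are illustrative:

```python
import random

def rouge1_f(a: str, b: str) -> float:
    """Unigram-overlap F-score: a simplified stand-in for ROUGE-1."""
    ta, tb = a.lower().split(), b.lower().split()
    overlap = sum(min(ta.count(w), tb.count(w)) for w in set(ta))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(tb), overlap / len(ta)
    return 2 * precision * recall / (precision + recall)

def sample_unrelated(sentence: str, pool: list, threshold: float = 0.20, seed: int = 42) -> str:
    """Resample from `pool` until the candidate's overlap with `sentence`
    falls below `threshold` (assumes such a candidate exists in the pool)."""
    rng = random.Random(seed)
    while True:
        candidate = rng.choice(pool)
        if rouge1_f(sentence, candidate) < threshold:
            return candidate
```

The real pipeline applies the same reject-and-resample logic with proper ROUGE-1/2/L and BLEU implementations.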
#### Commutative Property
Since meaning preservation is a commutative function, i.e., Meaning(Sent_a, Sent_b) = Meaning(Sent_b, Sent_a), we also include the commutative version of the triplet in the data augmentation version of the dataset for sentences that are not identical.
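The commutative expansion amounts to adding a swapped copy of each non-identical pair; a minimal illustrative sketch (names are not from the dataset code):

```python
def augment(original: str, simplification: str, label: float):
    """Return the triplet plus its commuted twin when the sentences differ,
    since Meaning(a, b) == Meaning(b, a)."""
    triplets = [(original, simplification, label)]
    if original != simplification:
        triplets.append((simplification, original, label))  # commuted copy
    return triplets
```

Identical pairs contribute a single row, which keeps the augmented dataset free of exact duplicates.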
### Data Fields
- `original`: an original sentence from the source datasets.
- `simplification`: a simplification of the original obtained by an automated system or a human.
- `label`: a meaning preservation rating between 0 and 100.
### Data Splits
The split statistics of CSMD are given below.
| | Train | Dev | Test | Total |
| ------ | ------ | ------ | ---- | ----- |
| Meaning | 853 | 95 | 407 | 1,355 |
| Meaning With Data Augmentation | 2,560 | 285 | 1,220 | 4,065 |
| Meaning Holdout Identical | NA | NA | 359 | 359 |
| Meaning Holdout Unrelated | NA | NA | 359 | 359 |
All the splits are randomly split using a 60-10-30 split with the seed `42`.
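A split of that shape can be reproduced along the following lines; this is a sketch under the stated 60-10-30 ratio and seed, and the authors' exact procedure may differ:

```python
import random

def split_60_10_30(rows, seed=42):
    """Shuffle and cut a list of rows into 60/10/30 train/dev/test portions."""
    rng = random.Random(seed)
    rows = list(rows)
    rng.shuffle(rows)
    n = len(rows)
    n_train, n_dev = int(0.6 * n), int(0.1 * n)
    return rows[:n_train], rows[n_train:n_train + n_dev], rows[n_train + n_dev:]
```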
# Citation Information
```
@ARTICLE{10.3389/frai.2023.1223924,
AUTHOR={Beauchemin, David and Saggion, Horacio and Khoury, Richard},
TITLE={{MeaningBERT: Assessing Meaning Preservation Between Sentences}},
JOURNAL={Frontiers in Artificial Intelligence},
VOLUME={6},
YEAR={2023},
URL={https://www.frontiersin.org/articles/10.3389/frai.2023.1223924},
DOI={10.3389/frai.2023.1223924},
ISSN={2624-8212},
}
``` |
GGital/CAI_ENG_NEW_01 | ---
library_name: peft
base_model: decapoda-research/llama-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
dudier/rc49-180-360 | ---
dataset_info:
features:
- name: image
dtype: image
- name: angle
dtype: float64
splits:
- name: train
num_bytes: 141319271.713
num_examples: 44149
- name: test
num_bytes: 156898605.824
num_examples: 44051
download_size: 230098947
dataset_size: 298217877.537
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
knkarthick/samsum | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: samsum-corpus
pretty_name: SAMSum Corpus
tags:
- conversations-summarization
---
# Dataset Card for SAMSum Corpus
## Dataset Description
### Links
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** https://arxiv.org/abs/1911.12237v2
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Languages
English
## Dataset Structure
### Data Instances
The SAMSum dataset is made of 16,369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
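Each `dialogue` string packs the turns as CRLF-separated `Speaker: utterance` lines; a small sketch (not part of the dataset tooling) splitting the sample above into turns:

```python
sample = {
    "id": "13818513",
    "summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
    "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)",
}

def turns(dialogue: str):
    """Split a CRLF-separated dialogue into (speaker, utterance) pairs."""
    pairs = []
    for line in dialogue.split("\r\n"):
        speaker, _, utterance = line.partition(": ")  # split on the first ": " only
        pairs.append((speaker, utterance))
    return pairs

print(turns(sample["dialogue"])[0])  # ('Amanda', 'I baked cookies. Do you want some?')
```

Splitting on the first `": "` keeps colons inside utterances (e.g. emoticons) intact.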
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 14732
- val: 818
- test: 819
## Dataset Creation
### Curation Rationale
In paper:
In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
### Annotation process
In paper:
Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
## Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
## Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
## Contributions |
frank098/temp | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for temp
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.cfg`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("frank098/temp")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("frank098/temp")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| prompt | Prompt | TextField | True | False |
| answer1 | Answer1 | TextField | True | False |
| answer2 | Answer2 | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| relevant | Which of the two responses most accurately completes the instruction? | LabelQuestion | True | N/A | N/A |
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"answer1": "To determine if a prefix is reachable, BGP performs the following initial check:\n\n1. It checks if there is an existing route for the prefix in its routing table.\n2. If there is no existing route, it checks if the prefix is within its own IP address range.\n3. If the prefix is within the IP address range, BGP considers it reachable and advertises it to its neighboring routers.\n4. If the prefix is not within the IP address range, BGP considers it unreachable and does not advertise it to its neighboring routers.\n\nNote: This answer assumes that the user is familiar with BGP (Border Gateway Protocol) and may need additional explanation for clarification.\u003c/s\u003e",
"answer2": "set forwarding-options sampling family inet filter s1 term 1 then {\n destination {\n ip;\n }\n next-hop;\n protocol;\n port;\n loss-priority low;\n output-node 192.168.10.100;\n flow-server 192.168.10.100 8888;\n sampling-action random-detect 0.1;\n}\n\nThis Junos command sets up IPv4 traffic sampling on the router with instance s1 and sends the sampled data to the flow-server with the IP address 192.168.10.100 using port number 8888. The sampling action is set to random-detect with a drop probability of 0.1.\n\nNote: Make sure to replace the IP addresses and port numbers with the appropriate values for your network setup.\n\nPlease let me know if you need any further assistance.\u003c/s\u003e",
"prompt": "\nYou, as a helpful AI assistant, will provide a detailed, step-by-step answer to the user\u0027s question.\n\nUSER: Create a Junos command that configures the router to enable IPv4 traffic sampling with instance s1, and directs the sampled data to be sent to the flow-server with the IP address 198.51.100.2 on port 2055.\n\nASSISTANT: \n"
},
"metadata": null,
"responses": [
{
"status": "submitted",
"user_id": "c8f4058e-7b43-43b2-ae9d-4dc8da3fc138",
"values": {
"relevant": {
"value": "answer1"
}
}
}
]
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"answer1": "To determine if a prefix is reachable, BGP performs the following initial check:\n\n1. It checks if there is an existing route for the prefix in its routing table.\n2. If there is no existing route, it checks if the prefix is within its own IP address range.\n3. If the prefix is within the IP address range, BGP considers it reachable and advertises it to its neighboring routers.\n4. If the prefix is not within the IP address range, BGP considers it unreachable and does not advertise it to its neighboring routers.\n\nNote: This answer assumes that the user is familiar with BGP (Border Gateway Protocol) and may need additional explanation for clarification.\u003c/s\u003e",
"answer2": "set forwarding-options sampling family inet filter s1 term 1 then {\n destination {\n ip;\n }\n next-hop;\n protocol;\n port;\n loss-priority low;\n output-node 192.168.10.100;\n flow-server 192.168.10.100 8888;\n sampling-action random-detect 0.1;\n}\n\nThis Junos command sets up IPv4 traffic sampling on the router with instance s1 and sends the sampled data to the flow-server with the IP address 192.168.10.100 using port number 8888. The sampling action is set to random-detect with a drop probability of 0.1.\n\nNote: Make sure to replace the IP addresses and port numbers with the appropriate values for your network setup.\n\nPlease let me know if you need any further assistance.\u003c/s\u003e",
"external_id": null,
"metadata": null,
"prompt": "\nYou, as a helpful AI assistant, will provide a detailed, step-by-step answer to the user\u0027s question.\n\nUSER: Create a Junos command that configures the router to enable IPv4 traffic sampling with instance s1, and directs the sampled data to be sent to the flow-server with the IP address 198.51.100.2 on port 2055.\n\nASSISTANT: \n",
"relevant": {
"status": [
"submitted"
],
"user_id": [
"c8f4058e-7b43-43b2-ae9d-4dc8da3fc138"
],
"value": [
"answer1"
]
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **prompt** is of type `TextField`.
* **answer1** is of type `TextField`.
* **answer2** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
* **relevant** is of type `LabelQuestion`.
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
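Given the flattened `datasets` layout shown in the example record above, the answer an annotator preferred can be recovered by indexing the record with the label stored under `relevant`. A minimal sketch (the inline record is a stand-in for a real row; field names follow the example):

```python
# Sketch: recover the preferred answer from a record in the flattened
# HuggingFace `datasets` layout (field names taken from the example record).
record = {
    "prompt": "...",
    "answer1": "first candidate response",
    "answer2": "second candidate response",
    "relevant": {"status": ["submitted"], "user_id": ["c8f4..."], "value": ["answer1"]},
}

def chosen_answer(rec: dict) -> str:
    """Return the answer text selected by the (first) annotator response."""
    label = rec["relevant"]["value"][0]  # e.g. "answer1"
    return rec[label]

print(chosen_answer(record))  # first candidate response
```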
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Please, read the question carefully and try to answer it as accurately as possible.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
hassansh/multirc_n_shot | ---
dataset_info:
features:
- name: question
dtype: int64
- name: input
dtype: string
- name: target_str
dtype: string
- name: target
dtype: int64
splits:
- name: 0_shot
num_bytes: 7974385
num_examples: 4848
- name: 1_shot
num_bytes: 13783669
num_examples: 4848
- name: 2_shot
num_bytes: 19739595
num_examples: 4848
- name: 3_shot
num_bytes: 25535284
num_examples: 4848
- name: 4_shot
num_bytes: 31295175
num_examples: 4848
- name: 5_shot
num_bytes: 37173889
num_examples: 4848
download_size: 15251419
dataset_size: 135501997
configs:
- config_name: 0_shot
data_files:
- split: test
path: data/0_shot-*
- config_name: 1_shot
data_files:
- split: test
path: data/1_shot-*
- config_name: 2_shot
data_files:
- split: test
path: data/2_shot-*
- config_name: 3_shot
data_files:
- split: test
path: data/3_shot-*
- config_name: 4_shot
data_files:
- split: test
path: data/4_shot-*
- config_name: 5_shot
data_files:
- split: test
path: data/5_shot-*
- config_name: default
data_files:
- split: 0_shot
path: data/0_shot-*
- split: 1_shot
path: data/1_shot-*
- split: 2_shot
path: data/2_shot-*
- split: 3_shot
path: data/3_shot-*
- split: 4_shot
path: data/4_shot-*
- split: 5_shot
path: data/5_shot-*
---
|
CyberHarem/miriel_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of miriel (Fire Emblem)
This is the dataset of miriel (Fire Emblem), containing 89 images and their tags.
The core tags of this character are `glasses, short_hair, red_hair, hat, witch_hat, breasts, large_breasts, brown_hair, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 89 | 78.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miriel_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 89 | 48.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miriel_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 176 | 92.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miriel_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 89 | 71.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miriel_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 176 | 128.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miriel_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/miriel_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, hetero, solo_focus, 1boy, nipples, uncensored, pussy, sex, nude, blush, spread_legs, clitoris, cum, large_penis, navel, vaginal |
| 1 | 44 |  |  |  |  |  | 1girl, solo, book, cape, brown_eyes, simple_background, bridal_gauntlets |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hetero | solo_focus | 1boy | nipples | uncensored | pussy | sex | nude | blush | spread_legs | clitoris | cum | large_penis | navel | vaginal | solo | book | cape | brown_eyes | simple_background | bridal_gauntlets |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-------------|:-------|:----------|:-------------|:--------|:------|:-------|:--------|:--------------|:-----------|:------|:--------------|:--------|:----------|:-------|:-------|:-------|:-------------|:--------------------|:-------------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 1 | 44 |  |  |  |  |  | X | | | | | | | | | | | | | | | | X | X | X | X | X | X |
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_67 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1208484360.0
num_examples: 237330
download_size: 1232010668
dataset_size: 1208484360.0
---
# Dataset Card for "chunk_67"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
phi0108/noun-phrase-es | ---
language:
- es
size_categories:
- 100K<n<1M
--- |
MinGue/eval_vl | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: dataset
dtype: string
- name: file_name
dtype: string
- name: query
dtype: string
- name: passages
sequence: string
- name: passage_count
dtype: int64
- name: answer_gpt
dtype: string
splits:
- name: train
num_bytes: 1344796
num_examples: 600
download_size: 521467
dataset_size: 1344796
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eval_vl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
othorizedshogun/poems_dataset | ---
dataset_info:
features:
- name: poem
dtype: string
- name: form
dtype: string
- name: topic
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 38858200.040046886
num_examples: 18428
- name: test
num_bytes: 4318514.959953116
num_examples: 2048
download_size: 28597631
dataset_size: 43176715.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
jtatman/tarot_dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: card_name
dtype: string
- name: card_reading
dtype: string
splits:
- name: train
num_bytes: 242566744.32
num_examples: 1120
download_size: 211822946
dataset_size: 242566744.32
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tarot_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GiovanniHD/Naomi | ---
license: openrail
---
|
ed001/ds-coder-instruct-v2 | ---
task_categories:
- text-generation
- conversational
- text2text-generation
language:
- en
tags:
- code
- machine learning
- deep learning
- data science
pretty_name: Data Science Coder
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: ds_coder.jsonl
license: cc-by-nc-sa-4.0
---
# Dataset Card for DS Coder Instruct v2 Dataset
Changes from v1:
* Added WizardLM evol data science samples
* Removed R samples from v2
<!-- Provide a quick summary of the dataset. -->
DS Coder is a dataset for instruction fine-tuning of language models. It is a specialized dataset focusing only on
data science (e.g. plotting, data wrangling, machine learning models, deep learning, and numerical computations). The dataset contains code examples in Python only (R samples were removed in v2).
The goal of this dataset is to enable creation of small-scale, specialized language model assistants for data science projects.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
DS Coder instruct dataset contains *(input, instruction, output)* triplets. Instruction provides a task in the data science domain and output contains the code to solve the task.
Where available, it also contains *text* field holding Alpaca style input. Metadata, such as the programming language *(lang)* and topics *(topics)* are provided.
*topics* lists the concepts used in the code (e.g. ML, neural networks, plotting, etc.). This is determined based on which kinds of libraries the code uses. This field can be used
to obtain a subset of the data for specific tasks, such as data visualization.
Additionally, the original data source is provided under the *dataset* field.
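As a sketch of such subsetting (the rows and topic labels below are illustrative; real records come from loading the dataset with `datasets`, and the exact *topics* values should be checked against the data):

```python
# Illustrative rows mimicking the (instruction, topics) layout described above.
rows = [
    {"instruction": "Plot a histogram of ages", "topics": ["plotting"]},
    {"instruction": "Train a random forest", "topics": ["ML"]},
    {"instruction": "Visualize a confusion matrix", "topics": ["ML", "plotting"]},
]

# Keep only samples whose topics include plotting, e.g. for a visualization assistant.
plotting_subset = [r for r in rows if "plotting" in r["topics"]]
print(len(plotting_subset))  # 2
```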
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
DS Coder is filtered and preprocessed from a collection of publicly available datasets on HuggingFace. All the sources are listed below with their corresponding links.
- **nickrosh/Evol-Instruct-Code-80k-v1:** https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1
- **TokenBender/code_instructions_122k_alpaca_style:** https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style
- **theblackcat102/evol-codealpaca-v1:** https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1
- **ise-uiuc/Magicoder-OSS-Instruct-75K:** https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K
- **WizardLM/WizardLM_evol_instruct_V2_196k:** https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k
Please make sure to cite the above-mentioned sources when using this dataset. You should visit these pages and look for specific usage instructions, if any.
## Dataset Creation
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
DS Coder was created by filtering and processing existing public datasets of *(instruction, code)* pairs. Source data was filtered to keep only code related to data science
applications. The filtering was done using regex to gather code that uses popular data science libraries (eg. Matplotlib, Sklearn, PyTorch, etc.) in Python and R.
Then, the data is further processed to filter out samples with very long or very short code. Code outputs with lots of comments and a low amount of code were filtered out.
Additionally, samples with very long and very short instructions were also removed.
After filtering, exact deduplication based on output code and input instruction was performed. After this process, roughly *16K* samples remain.
A more specific description of the dataset processing is provided below.
### Filtering
The first step of the filtering process is to gather all samples from source datasets that have code related to a data science application. To do so, regex filtering was
applied to the *code* and *instruction* to filter out such samples. Regex filters mainly look for imports and usage of popular data science libraries, such as Pandas or PyTorch.
Data science code in Python as well as R are gathered.
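A minimal sketch of such a regex gate follows. The library shortlist and pattern here are assumptions for illustration; the actual filter lives in the Ea0011/wrangler repo linked below.

```python
import re

# Hypothetical shortlist of popular data science libraries to match on.
DS_LIBS = ["pandas", "numpy", "sklearn", "torch", "tensorflow", "matplotlib", "seaborn", "scipy"]
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+(?:" + "|".join(DS_LIBS) + r")\b", re.MULTILINE)

def looks_like_data_science(code: str) -> bool:
    """True if the snippet imports one of the shortlisted libraries."""
    return bool(IMPORT_RE.search(code))

print(looks_like_data_science("import pandas as pd\ndf = pd.DataFrame()"))  # True
print(looks_like_data_science("print('hello world')"))                      # False
```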
After gathering relevant code samples, further filtering based on line length, instruction length, alphanumeric ratio, and comment to code ratio are performed.
Code filtering is similar to [BigCode](https://github.com/bigcode-project/bigcode-dataset). Code filtering parameters shown below are derived from there.
This stage ensures that short, very long, and uninformative samples are removed. The script for filtering can be found in this repo
[Ea0011/wrangler](https://github.com/Ea0011/wrangler). You may use the filtering script to process additional datasets or tweak the params.
Parameters for filtering are listed below:
- **line_max**: Maximum line length allowed is 1000 characters.
- **line_mean**: Maximum mean line length allowed is 100 characters.
- **alpha_frac**: Minimum fraction of alphanumeric characters allowed is 25%.
- **min_inst_size**: Minimum instruction size in words is 5 words.
- **max_inst_size**: Maximum instruction size in words is 1000 words.
- **max_threshold_comments**: Maximum threshold for comment to code ratio is 80%.
- **min_threshold_comments**: Minimum threshold for comment to code ratio is 1%.
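The thresholds above can be sketched as a single predicate. This is an illustrative reimplementation, not the actual wrangler code; in particular, the comment heuristic here simply counts `#`-prefixed lines.

```python
def passes_filters(code: str, instruction: str) -> bool:
    """Apply the line-length, alphanumeric, instruction-size and comment-ratio thresholds."""
    lines = code.splitlines() or [""]
    lengths = [len(line) for line in lines]
    comment_ratio = sum(line.lstrip().startswith("#") for line in lines) / len(lines)
    alpha_frac = sum(ch.isalnum() for ch in code) / max(len(code), 1)
    n_words = len(instruction.split())
    return (
        max(lengths) <= 1000                   # line_max
        and sum(lengths) / len(lines) <= 100   # line_mean
        and alpha_frac >= 0.25                 # alpha_frac
        and 5 <= n_words <= 1000               # min/max_inst_size
        and 0.01 <= comment_ratio <= 0.80      # comment-to-code ratio bounds
    )

code = "# load data\nimport pandas as pd\ndf = pd.read_csv('x.csv')\nprint(df.head())"
print(passes_filters(code, "Load a CSV file into a pandas DataFrame and show its head"))  # True
print(passes_filters(code, "Load CSV"))  # False (instruction too short)
```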
## Data Analysis
This section provides some analysis of the dataset. Code lengths and the distribution of data science tasks are shown. The topic distribution shows the
distribution of concepts used in the code. Some domains, such as plotting, are underrepresented compared to others. You may use the topics column to select samples for specific tasks.
<img src="ds_dist.png" width="60%" />
<img src="inst_len_total.png" width="60%"/>
<img src="topics.png" width="60%" />
As there are data points from several data sources, it is also worth showing distributions across samples from different datasets. As can be seen, some sources
contain short and concise samples while others contain verbose samples. Use this information to choose a specific data source if needed.
<img src="code_len.png" width="60%"/>
<img src="inst_len.png" width="60%" />
## Dataset Card Contact
For any suggestions and concerns please reach out to me: [Ea0011](https://github.com/Ea0011/) |
riotu-lab/Synthetic-UAV-Flight-Trajectories | ---
license: apache-2.0
language:
- en
tags:
- UAV
- Drones
- Trajectory
pretty_name: U
size_categories:
- 100K<n<1M
---
# UAV Trajectory Dataset
## Summary
This dataset comprises over 5000 random UAV (Unmanned Aerial Vehicle) trajectories collected over 20 hours of flight time. It is intended for training AI models for applications such as trajectory prediction. The dataset is generated through an automated pipeline for the creation and preprocessing of synthetic UAV trajectories, making it ready for direct AI model training.
## Data Description
The dataset features parameterized trajectories following predefined patterns, specifically circular and infinity-like paths.
## Dataset Structure
### Data Fields
- `timestamp`: Recording time of the data point.
- `position`: 3D position of the UAV (x, y, z coordinates).
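With these two fields, derived quantities such as per-segment speed are straightforward to compute. A minimal sketch (the inline rows are hypothetical stand-ins for real records; units of seconds and metres are assumed):

```python
import math

# Two hypothetical consecutive trajectory points (timestamp, 3D position).
trajectory = [
    {"timestamp": 0.0, "position": (0.0, 0.0, 10.0)},
    {"timestamp": 1.0, "position": (3.0, 4.0, 10.0)},
]

def segment_speed(a: dict, b: dict) -> float:
    """Average speed between two consecutive points (distance / elapsed time)."""
    dt = b["timestamp"] - a["timestamp"]
    return math.dist(a["position"], b["position"]) / dt

print(segment_speed(*trajectory))  # 5.0
```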
|
cnmoro/QuestionClassification | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
- pt
pretty_name: DirectedOrGenericClassification
size_categories:
- 100K<n<1M
--- |
johannes-garstenauer/structs_token_size_4_labelled_eval | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1323561990
num_examples: 5451270
download_size: 384897245
dataset_size: 1323561990
---
# Dataset Card for "structs_token_size_4_labelled_eval_"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
g-ronimo/riddles_evolved | ---
dataset_info:
features:
- name: number
dtype: int64
- name: messages
sequence: string
splits:
- name: train
num_bytes: 2253049
num_examples: 1682
download_size: 1196650
dataset_size: 2253049
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
tags:
- synthetic
---
# Riddles turned into conversations using mistralai/Mistral-7B-Instruct-v0.2
* Seeded with Hypersniper's [riddles_v1](https://huggingface.co/datasets/Hypersniper/riddles_v1); buy him a coffee on [Ko-fi](https://ko-fi.com/hypersniper)
* Structure: each sample is a conversation with two turns: Q/A/Q/A
* Process: Mistral is used to 1) expand the riddle, 2) answer the riddle, 3) formulate a human follow-up question, and 4) answer the follow-up question
* Code: [GitHub](https://github.com/geronimi73/phi2-finetune/blob/main/nb_dataset.ipynb)
* _Note_: This is an unfiltered dataset; it almost certainly contains some very bad answers.
|
AravindVadlapudi02/UA_speech_low | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': control
'1': pathology
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 768265600
num_examples: 800
- name: test
num_bytes: 4615355592
num_examples: 4806
download_size: 622015552
dataset_size: 5383621192
---
# Dataset Card for "UA_speech_low"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713004575 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 10840
num_examples: 24
download_size: 8990
dataset_size: 10840
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713004575"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
316usman/thematic4d-pw-embed-part4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: country
dtype: string
- name: document_url
dtype: string
- name: source_url
dtype: string
- name: num_tokens
dtype: int64
splits:
- name: train
num_bytes: 393512332
num_examples: 616325
download_size: 149778722
dataset_size: 393512332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_mnli_acomp_focusing_like | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 666901
num_examples: 3014
- name: dev_mismatched
num_bytes: 634871
num_examples: 2659
- name: test_matched
num_bytes: 630748
num_examples: 2842
- name: test_mismatched
num_bytes: 663011
num_examples: 2776
- name: train
num_bytes: 26042479
num_examples: 115925
download_size: 17910289
dataset_size: 28638010
---
# Dataset Card for "MULTI_VALUE_mnli_acomp_focusing_like"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kristinashemet/Training_dataset_Part1_23.03 | ---
dataset_info:
features:
- name: formatted_data
dtype: string
splits:
- name: train
num_bytes: 237437
num_examples: 372
download_size: 60072
dataset_size: 237437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
arbml/Ashaar_ardui | ---
dataset_info:
features:
- name: original_shatr
dtype: string
- name: arudi_style
dtype: string
- name: tafilaat
dtype: string
- name: pattern
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 18298
num_examples: 100
download_size: 12052
dataset_size: 18298
---
# Dataset Card for "Ashaar_arduiv2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SIA86/data_base_ner | ---
license: openrail
---
|
mdacampora/tax-chat-as-dicts | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7143
num_examples: 49
download_size: 5763
dataset_size: 7143
---
# Dataset Card for "tax-chat-as-dicts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
prateeky2806/bge_base_features_alpaca | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 182429113
num_examples: 52002
download_size: 204415093
dataset_size: 182429113
---
# Dataset Card for "bge_base_features_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vira-chatbot/vira-intents-mod | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 509234
num_examples: 7047
- name: validation
num_bytes: 213834
num_examples: 2971
download_size: 329146
dataset_size: 723068
---
# Dataset Card for "vira-intents-mod"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlsquare/CLIENT_samantar_mixed_train_val | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 1812197.76
num_examples: 14080
- name: valid
num_bytes: 453049.44
num_examples: 3520
download_size: 1520062
dataset_size: 2265247.2
---
# Dataset Card for "CLIENT_samantar_mixed_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/asada_shino_swordartonline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of asada_shino (Sword Art Online)
This is the dataset of asada_shino (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
Tensoic/gooftagoo | ---
license: apache-2.0
task_categories:
- text-generation
language:
- hi
- en
tags:
- hinglish
- conversation
- hindi
---
## Hindi/Hinglish Conversation Dataset
This repository contains a dataset of conversational text in Hindi and Hinglish (a mix of Hindi and English).
The dataset contains multi-turn conversations on multiple topics, usually revolving around daily real-life experiences.
A small number of reasoning tasks have also been added (specifically COT-style reasoning and coding), with about 1k samples from OpenHermes 2.5.
## Caution
This dataset was synthetically generated; please note that some content may not be entirely precise or reflect expert consensus.
Users are encouraged to verify information independently for scholarly or critical purposes.
## Author
Adithya Kamath (https://twitter.com/Adi_kmt) |
qbourbon/pb_valset | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': 000_airplane
'1': 001_alarm_clock
'2': 002_angel
'3': 003_ant
'4': 004_apple
'5': 005_arm
'6': 006_armchair
'7': 007_ashtray
'8': 008_axe
'9': 009_backpack
'10': 010_banana
'11': 011_barn
'12': 012_baseball_bat
'13': 013_basket
'14': 014_bathtub
'15': 015_bear_(animal)
'16': 016_bed
'17': 017_bee
'18': 018_beer-mug
'19': 019_bell
'20': 020_bench
'21': 021_bicycle
'22': 022_binoculars
'23': 023_blimp
'24': 024_book
'25': 025_bookshelf
'26': 026_boomerang
'27': 027_bottle_opener
'28': 028_bowl
'29': 029_brain
'30': 030_bread
'31': 031_bridge
'32': 032_bulldozer
'33': 033_bus
'34': 034_bush
'35': 035_butterfly
'36': 036_cabinet
'37': 037_cactus
'38': 038_cake
'39': 039_calculator
'40': 040_camel
'41': 041_camera
'42': 042_candle
'43': 043_cannon
'44': 044_canoe
'45': 045_car_(sedan)
'46': 046_carrot
'47': 047_castle
'48': 048_cat
'49': 049_cell_phone
'50': 050_chair
'51': 051_chandelier
'52': 052_church
'53': 053_cigarette
'54': 054_cloud
'55': 055_comb
'56': 056_computer_monitor
'57': 057_computer-mouse
'58': 058_couch
'59': 059_cow
'60': 060_crab
'61': 061_crane_(machine)
'62': 062_crocodile
'63': 063_crown
'64': 064_cup
'65': 065_diamond
'66': 066_dog
'67': 067_dolphin
'68': 068_donut
'69': 069_door
'70': 070_door_handle
'71': 071_dragon
'72': 072_duck
'73': 073_ear
'74': 074_elephant
'75': 075_envelope
'76': 076_eye
'77': 077_eyeglasses
'78': 078_face
'79': 079_fan
'80': 080_feather
'81': 081_fire_hydrant
'82': 082_fish
'83': 083_flashlight
'84': 084_floor_lamp
'85': 085_flower_with_stem
'86': 086_flying_bird
'87': 087_flying_saucer
'88': 088_foot
'89': 089_fork
'90': 090_frog
'91': 091_frying-pan
'92': 092_giraffe
'93': 093_grapes
'94': 094_grenade
'95': 095_guitar
'96': 096_hamburger
'97': 097_hammer
'98': 098_hand
'99': 099_harp
'100': 100_hat
'101': 101_head
'102': 102_head-phones
'103': 103_hedgehog
'104': 104_helicopter
'105': 105_helmet
'106': 106_horse
'107': 107_hot_air_balloon
'108': 108_hot-dog
'109': 109_hourglass
'110': 110_house
'111': 111_human-skeleton
'112': 112_ice-cream-cone
'113': 113_ipod
'114': 114_kangaroo
'115': 115_key
'116': 116_keyboard
'117': 117_knife
'118': 118_ladder
'119': 119_laptop
'120': 120_leaf
'121': 121_lightbulb
'122': 122_lighter
'123': 123_lion
'124': 124_lobster
'125': 125_loudspeaker
'126': 126_mailbox
'127': 127_megaphone
'128': 128_mermaid
'129': 129_microphone
'130': 130_microscope
'131': 131_monkey
'132': 132_moon
'133': 133_mosquito
'134': 134_motorbike
'135': 135_mouse_(animal)
'136': 136_mouth
'137': 137_mug
'138': 138_mushroom
'139': 139_nose
'140': 140_octopus
'141': 141_owl
'142': 142_palm_tree
'143': 143_panda
'144': 144_paper_clip
'145': 145_parachute
'146': 146_parking_meter
'147': 147_parrot
'148': 148_pear
'149': 149_pen
'150': 150_penguin
'151': 151_person_sitting
'152': 152_person_walking
'153': 153_piano
'154': 154_pickup_truck
'155': 155_pig
'156': 156_pigeon
'157': 157_pineapple
'158': 158_pipe_(for_smoking)
'159': 159_pizza
'160': 160_potted_plant
'161': 161_power_outlet
'162': 162_present
'163': 163_pretzel
'164': 164_pumpkin
'165': 165_purse
'166': 166_rabbit
'167': 167_race_car
'168': 168_radio
'169': 169_rainbow
'170': 170_revolver
'171': 171_rifle
'172': 172_rollerblades
'173': 173_rooster
'174': 174_sailboat
'175': 175_santa_claus
'176': 176_satellite
'177': 177_satellite_dish
'178': 178_saxophone
'179': 179_scissors
'180': 180_scorpion
'181': 181_screwdriver
'182': 182_sea_turtle
'183': 183_seagull
'184': 184_shark
'185': 185_sheep
'186': 186_ship
'187': 187_shoe
'188': 188_shovel
'189': 189_skateboard
'190': 190_skull
'191': 191_skyscraper
'192': 192_snail
'193': 193_snake
'194': 194_snowboard
'195': 195_snowman
'196': 196_socks
'197': 197_space_shuttle
'198': 198_speed-boat
'199': 199_spider
'200': 200_sponge_bob
'201': 201_spoon
'202': 202_squirrel
'203': 203_standing_bird
'204': 204_stapler
'205': 205_strawberry
'206': 206_streetlight
'207': 207_submarine
'208': 208_suitcase
'209': 209_sun
'210': 210_suv
'211': 211_swan
'212': 212_sword
'213': 213_syringe
'214': 214_t-shirt
'215': 215_table
'216': 216_tablelamp
'217': 217_teacup
'218': 218_teapot
'219': 219_teddy-bear
'220': 220_telephone
'221': 221_tennis-racket
'222': 222_tent
'223': 223_tiger
'224': 224_tire
'225': 225_toilet
'226': 226_tomato
'227': 227_tooth
'228': 228_toothbrush
'229': 229_tractor
'230': 230_traffic_light
'231': 231_train
'232': 232_tree
'233': 233_trombone
'234': 234_trousers
'235': 235_truck
'236': 236_trumpet
'237': 237_tv
'238': 238_umbrella
'239': 239_van
'240': 240_vase
'241': 241_violin
'242': 242_walkie_talkie
'243': 243_wheel
'244': 244_wheelbarrow
'245': 245_windmill
'246': 246_wine-bottle
'247': 247_wineglass
'248': 248_wrist-watch
'249': 249_zebra
'250': mistery_category
splits:
- name: validation
num_bytes: 9831424.368
num_examples: 324
download_size: 9713593
dataset_size: 9831424.368
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
DefectSpectrum/Defect_Spectrum | ---
license: mit
task_categories:
- image-segmentation
- image-to-text
language:
- en
tags:
- industry
pretty_name: DefectSpectrum
size_categories:
- 1K<n<10K
---
# Defect Spectrum Dataset
Welcome to the Defect Spectrum dataset repository. This comprehensive benchmark is a granular collection of large-scale defect datasets with rich semantics, designed to push the frontier of industrial defect inspection research and applications.
## Overview
Defect inspection is a critical component within the closed-loop manufacturing system. To facilitate advanced research and development in this domain, we introduce the Defect Spectrum dataset. It offers precise, semantics-abundant, and large-scale annotations for a wide range of industrial defects. This dataset is an enhancement over existing benchmarks, providing refined annotations and introducing detailed semantic layers, allowing for the distinction between multiple defect types within a single image.
### Features
- **Semantics-Abundant Annotations**: Each defect is meticulously labeled, not just at the pixel level but with rich contextual information, providing insights into the defect type and implications.
- **High Precision**: Annotations are refined by experts to capture even the subtlest of defects, ensuring high precision.
- **Large-Scale Data**: Building on four key industrial benchmarks, Defect Spectrum stands out with its extensive coverage and depth.
- **Incorporates Descriptive Captions**: To bridge the gap towards Vision Language Models (VLMs), each sample is accompanied by a descriptive caption.
### Directory Structure
```plaintext
DefectSpectrum/
├── DS-MVTec/
│ ├── bottle/
│ │ ├── image/ # Original images of the bottle category
│ │ ├── caption/ # Descriptive captions of the bottle category
│ │ ├── mask/ # Single channel defect masks for the bottle category
│ │ └── rgb_mask/ # Colored defect masks for better visualization
│ ├── cable/
│ │ ├── image/ # Original images of the cable category
│ │ ├── caption/ # Descriptive captions of the cable category
│ │ ├── mask/ # Single channel defect masks for the cable category
│ │ └── rgb_mask/ # Colored defect masks for better visualization
│ └── ...
├── DS-VISION/
│ └── ...
├── DS-DAGM/
│ └── ...
├── DS-Cotton-Fabric/
│ └── ...
```
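As a sketch of how this layout might be traversed in code (the helper below and the sample file names are illustrative assumptions, not part of the dataset), an image path can be mapped to its corresponding mask path:

```python
import os

def mask_path_for(image_path: str, colored: bool = False) -> str:
    """Map an image file to its mask (or rgb_mask) file under the layout above.

    Assumes mask files share the image's file name, e.g.
    DS-MVTec/bottle/image/000.png -> DS-MVTec/bottle/mask/000.png;
    adjust if the actual naming differs.
    """
    # .../<category>/image/<name> -> .../<category>
    category_dir = os.path.dirname(os.path.dirname(image_path))
    mask_dir = "rgb_mask" if colored else "mask"
    return os.path.join(category_dir, mask_dir, os.path.basename(image_path))

print(mask_path_for("DS-MVTec/bottle/image/000.png"))
# DS-MVTec/bottle/mask/000.png (on POSIX-style paths)
```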
## To-Do List
- [ ] Task 1: Release DS-MVTec image-mask pairs.
- [ ] Task 2: Release DS-VISION, DS-DAGM, and DS-Cotton-Fabric image-mask pairs.
- [ ] Task 3: Release captions.
- [ ] Task 4: Release selected synthetic data. |
mf99/autotrain-data-sum-200-random | ---
language:
- en
task_categories:
- conditional-text-generation
---
# AutoTrain Dataset for project: sum-200-random
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sum-200-random.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "aen: {Forest hermit to Professor, it's never too late to change. | Dr. Gregory P. Smith | TEDxByronB[...]",
"target": "Fire, plenty of ferns to sleep on and an endless supply of alcohol. 65"
},
{
"text": "aen: {William Noel: Revealing the lost codex of Archimedes}{from 62% to 72%}{And combinatorics is a [...]",
"target": "The really astonishing thing though about this manuscript is that we looked at the other manuscripts[...]"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 451280 |
| valid | 112821 |
|
SeyedAli/Persian-OCR | ---
license: mit
---
|
umd-zhou-lab/Reflect_WizV2_All | ---
dataset_info:
features:
- name: data
struct:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: origin
num_bytes: 201101522
num_examples: 73000
- name: reflect_instruction
num_bytes: 173437259
num_examples: 73000
- name: reflect_both
num_bytes: 225629379
num_examples: 73000
download_size: 307701146
dataset_size: 600168160
---
# Dataset Card for "Reflect_WizV2_All"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vasuens/vasuhex_novel17_test | ---
license: apache-2.0
---
|
AdvayK/SFD_7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 382894422.7379618
num_examples: 625
- name: test
num_bytes: 164473290.26203808
num_examples: 268
download_size: 444577398
dataset_size: 547367712.9999999
---
# Dataset Card for "SFD_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alex-atelo/datasets-github-issues | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: labels
list:
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: id
dtype: int64
- name: name
dtype: string
- name: node_id
dtype: string
- name: url
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: assignees
list:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: milestone
struct:
- name: closed_at
dtype: string
- name: closed_issues
dtype: int64
- name: created_at
dtype: string
- name: creator
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: description
dtype: string
- name: due_on
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: labels_url
dtype: string
- name: node_id
dtype: string
- name: number
dtype: int64
- name: open_issues
dtype: int64
- name: state
dtype: string
- name: title
dtype: string
- name: updated_at
dtype: string
- name: url
dtype: string
- name: num_comments
dtype: int64
- name: created_at
dtype: timestamp[ns, tz=UTC]
- name: updated_at
dtype: timestamp[ns, tz=UTC]
- name: closed_at
dtype: timestamp[ns, tz=UTC]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: float64
- name: draft
dtype: float64
- name: pull_request
struct:
- name: diff_url
dtype: string
- name: html_url
dtype: string
- name: merged_at
dtype: string
- name: patch_url
dtype: string
- name: url
dtype: string
- name: body
dtype: string
- name: reactions
struct:
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: confused
dtype: int64
- name: eyes
dtype: int64
- name: heart
dtype: int64
- name: hooray
dtype: int64
- name: laugh
dtype: int64
- name: rocket
dtype: int64
- name: total_count
dtype: int64
- name: url
dtype: string
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: float64
- name: state_reason
dtype: string
- name: __index_level_0__
dtype: int64
- name: is_pr
dtype: bool
- name: comments
sequence: string
splits:
- name: train
num_bytes: 36763529
num_examples: 6650
download_size: 10752010
dataset_size: 36763529
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- wtfpl
multilinguality:
- monolingual
pretty_name: HuggingFace Datasets GitHub Issues
size_categories:
- unknown
source_datasets:
- original
tags: []
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval
---
# Dataset Card for GitHub Issues
## Dataset Description
- **Point of Contact:** [Alex](https://huggingface.co/alex-atelo)
### Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
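For example, since the feature list above includes an `is_pr` flag and a `comments` sequence, a common preprocessing step for semantic search is to keep only genuine issues that have some discussion. The helper below is an illustrative sketch over plain Python rows, not part of the dataset itself:

```python
def is_searchable_issue(row: dict) -> bool:
    """Keep genuine issues (not pull requests) that have at least one comment."""
    return not row["is_pr"] and len(row["comments"]) > 0

rows = [
    {"is_pr": False, "comments": ["Fixed in v2.0"]},
    {"is_pr": True,  "comments": ["LGTM"]},   # pull request, skip
    {"is_pr": False, "comments": []},         # no discussion, skip
]
print([is_searchable_issue(r) for r in rows])  # [True, False, False]
```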
### Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).
- `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
### Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available.
## Dataset Structure
### Data Instances
Provide a JSON-formatted example and a brief description of a typical instance in the dataset. If available, provide a link to further examples.
```
{
'example_field': ...,
...
}
```
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
### Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- `example_field`: description of `example_field`
Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.
### Data Splits
Describe and name the splits in the dataset if there is more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
### Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
#### Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
#### Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
#### Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
### Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
## Considerations for Using the Data
### Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
### Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
### Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
## Additional Information
### Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
### Licensing Information
Provide the license and link to the license webpage if available.
### Citation Information
Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:
```
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
```
If the dataset has a [DOI](https://www.doi.org/), please provide it here.
```
@misc{huggingfacecourse,
author = {Hugging Face},
title = {The Hugging Face Course, 2022},
howpublished = "\url{https://huggingface.co/course}",
year = {2022},
note = "[Online; accessed <today>]"
}
```
### Contributions
Thanks to [@alex-atelo](https://huggingface.co/alex-atelo) for adding this dataset. |
0x2f61667/fzskbxk-sc | ---
license: other
license_name: powerallowpro
license_link: https://www.foundertype.com/index.php/About/powerAllowPro.html
---
|
13Mai13/BuzzFeed | ---
license: mit
language:
- en
--- |
artificial-citizen/ava_chatml | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 22053608
num_examples: 6534
download_size: 10482250
dataset_size: 22053608
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bingsu/laion-translated-to-en-korean-subset | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ko
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: laion-translated-to-en-korean-subset
size_categories:
- 10M<n<100M
task_categories:
- feature-extraction
---
# laion-translated-to-en-korean-subset
## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Download Size** 1.40 GiB
- **Generated Size** 3.49 GiB
- **Total Size** 4.89 GiB
## About dataset
a subset of [laion/laion2B-multi-joined-translated-to-en](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en) and [laion/laion1B-nolang-joined-translated-to-en](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en), containing only the Korean entries
### License
CC-BY-4.0
## Data Structure
### Data Instance
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion-translated-to-en-korean-subset")
>>> dataset
DatasetDict({
train: Dataset({
features: ['hash', 'URL', 'TEXT', 'ENG TEXT', 'WIDTH', 'HEIGHT', 'LANGUAGE', 'similarity', 'pwatermark', 'punsafe', 'AESTHETIC_SCORE'],
num_rows: 12769693
})
})
```
```py
>>> dataset["train"].features
{'hash': Value(dtype='int64', id=None),
'URL': Value(dtype='large_string', id=None),
'TEXT': Value(dtype='large_string', id=None),
'ENG TEXT': Value(dtype='large_string', id=None),
'WIDTH': Value(dtype='int32', id=None),
'HEIGHT': Value(dtype='int32', id=None),
'LANGUAGE': Value(dtype='large_string', id=None),
'similarity': Value(dtype='float32', id=None),
'pwatermark': Value(dtype='float32', id=None),
'punsafe': Value(dtype='float32', id=None),
'AESTHETIC_SCORE': Value(dtype='float32', id=None)}
```
### Data Size
download: 1.40 GiB<br>
generated: 3.49 GiB<br>
total: 4.89 GiB
### Data Field
- 'hash': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'ENG TEXT': `string`, null data are dropped
- 'WIDTH': `int`, null data are filled with 0
- 'HEIGHT': `int`, null data are filled with 0
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'similarity': `float32`, CLIP similarity score, null data are filled with 0.0
- 'pwatermark': `float32`, Probability of containing a watermark, null data are filled with 0.0
- 'punsafe': `float32`, Probability of nsfw image, null data are filled with 0.0
- 'AESTHETIC_SCORE': `float32`, null data are filled with 0.0
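As a sketch of typical pre-filtering on these score fields before downloading images (the helper and thresholds below are illustrative assumptions, not part of the dataset), note that null-filled 0.0 values are indistinguishable from genuine zeros, so a minimum similarity cut also drops missing scores:

```python
def keep_row(row: dict, min_similarity: float = 0.28,
             max_pwatermark: float = 0.5, max_punsafe: float = 0.5) -> bool:
    """Keep rows that pass illustrative CLIP-similarity / watermark / NSFW cuts.

    Null scores were filled with 0.0, so a 0.0 similarity may mean "missing"
    rather than "dissimilar" -- min_similarity drops those too.
    """
    return (
        row["similarity"] >= min_similarity
        and row["pwatermark"] <= max_pwatermark
        and row["punsafe"] <= max_punsafe
    )

rows = [
    {"similarity": 0.31, "pwatermark": 0.1, "punsafe": 0.0},
    {"similarity": 0.0,  "pwatermark": 0.0, "punsafe": 0.0},  # null-filled
    {"similarity": 0.35, "pwatermark": 0.9, "punsafe": 0.2},  # likely watermarked
]
print([keep_row(r) for r in rows])  # [True, False, False]
```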
### Data Splits
| | train |
| --------- | -------- |
| # of data | 12769693 |
### polars
```sh
pip install polars[fsspec]
```
```py
import polars as pl
from huggingface_hub import hf_hub_url
url = hf_hub_url("Bingsu/laion-translated-to-en-korean-subset", filename="train.parquet", repo_type="dataset")
# url = "https://huggingface.co/datasets/Bingsu/laion-translated-to-en-korean-subset/resolve/main/train.parquet"
df = pl.read_parquet(url)
```
pandas broke my colab session. |
RohanAiLab/persian_blog | ---
language:
- fa
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: persian_blog
---
# Persian Blog
# Dataset Summary
persian_blog is a collection of 400k blog posts. These posts have been gathered from more than 10 websites. This dataset can be used in different NLP tasks, such as language modeling and text generation.
This effort is part of a broader initiative to provide several datasets in the Persian language for different tasks, all sharing two important qualities: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset with the datasets library: [Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
# Description
As discussed above, this dataset contains 400k blog posts; each post has a `text` attribute. Here is a sample from the dataset:
```
text : چرا کودکان به روانشناس نیاز دارند؟ روانشناسی کودکانکودکان همچون غنچههای زیبا هستند که برای شکوفایی و به ثمر رسیدن نیاز به مراقبت و رسیدگی دارند . روانشناس کودک فردیست که از زمان بدو تولد کودک در مراحل مختلف زندگی کودک در کنار والدین وی میباشد و به چگونگی تربیت کودک کمک میکند تا به بهترین شکل رشد کند . چرا که روانشناس کودک با روحیات ، نیازها و مشکلات کودکان و همچنین چگونگی برقراری ارتباط بین کودک و والدین وی آشنایی کامل دارد .بسیاری از کودکان در سنین مختلف بخاطر شرایط زندگی ، دچار انواع ناسازگاریها و مشکلات در زندگی خود میشود از ناسازگاری کودکان میتوان به موارد زیر اشاره کرد : 1 . پرخاشگری 2 . بد دهنی 3 . اختلال در خوابیدن 4 . اختلال در غذا خوردن و کم اشتهایی 5 . حالت افسردگی و اضطراب 6 . ترس از محیط پیرامون 7 . عدم آمادگی برای ورود به جامعه 8 . وجود مشکل در محیط مدرسه 9 . عدم تمرکز 10 . جویدن ناخن ها 11 . انزوا و گوشه گیری 12 . عدم هم بازی شدن با هم سن و سال هاو .این گونه ناسازگاریها در زندگی آینده کودک نقش به سزایی دارد .روانشناس کودکیک روانشناس کودک خوب ، با دلسوزی و با تکیه بر تجربیات و تخصص خود میکوشد تا رفتارهای کودک را مورد ارزیابی و بررسی قرار دهد سپس سعی میکند تا رفتارهای بعدی کودک را پیش بینی کند و منشاء این مشکلات و سطح پیشرفت آن را بیابد. سپس او بهترین روشهای درمان برای بهبود اختلال کودک را مییابد و با کمک والدین این ناسازگاریها ، مشکلات و ناهنجاریها را حل کرده و نهایتا رابطهای دوستانه و صمیمانه بین کودک و والدین وی ایجاد مینماید تاآیندهای درخشان در انتظار کودک شما باشد .
```
# Citation
```
@misc{persian_blog,
  title={persian_blog},
  author={Saied Alimoradi},
  year={2021}
}
```
Contact: rohanailab@gmail.com |
Codec-SUPERB/opensinger_unit | ---
dataset_info:
features:
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 26511079
num_examples: 3924
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 26511079
num_examples: 3924
- name: academicodec_hifi_24k_320d
num_bytes: 39717383
num_examples: 3924
- name: audiodec_24k_320d
num_bytes: 84872647
num_examples: 3924
- name: dac_16k
num_bytes: 96472327
num_examples: 3924
- name: dac_24k
num_bytes: 388144583
num_examples: 3924
- name: dac_44k
num_bytes: 125967103
num_examples: 3924
- name: encodec_24k_12bps
num_bytes: 159082439
num_examples: 3924
- name: encodec_24k_1_5bps
num_bytes: 19969479
num_examples: 3924
- name: encodec_24k_24bps
num_bytes: 318068679
num_examples: 3924
- name: encodec_24k_3bps
num_bytes: 39842759
num_examples: 3924
- name: encodec_24k_6bps
num_bytes: 79589319
num_examples: 3924
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 212419783
num_examples: 3924
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 212419783
num_examples: 3924
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 212419271
num_examples: 3924
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 106761927
num_examples: 3924
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 212419271
num_examples: 3924
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 106761927
num_examples: 3924
- name: speech_tokenizer_16k
num_bytes: 53176967
num_examples: 3924
download_size: 384089438
dataset_size: 2521127805
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
---
|
tyzhu/random_letter_same_length_find_passage_train50_eval20_num | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 35610
num_examples: 120
- name: validation
num_bytes: 7230
num_examples: 20
download_size: 27281
dataset_size: 42840
---
# Dataset Card for "random_letter_same_length_find_passage_train50_eval20_num"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_mnli_non_coordinated_subj_obj | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 197760
num_examples: 954
- name: dev_mismatched
num_bytes: 241664
num_examples: 1092
- name: test_matched
num_bytes: 223485
num_examples: 1039
- name: test_mismatched
num_bytes: 238739
num_examples: 1094
- name: train
num_bytes: 8694029
num_examples: 39844
download_size: 5872542
dataset_size: 9595677
---
# Dataset Card for "MULTI_VALUE_mnli_non_coordinated_subj_obj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arthurmluz/cstnews_data-wiki_results | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: summary
dtype: string
- name: gen_summary
dtype: string
- name: rouge
struct:
- name: rouge1
dtype: float64
- name: rouge2
dtype: float64
- name: rougeL
dtype: float64
- name: rougeLsum
dtype: float64
- name: bert
struct:
- name: f1
sequence: float64
- name: hashcode
dtype: string
- name: precision
sequence: float64
- name: recall
sequence: float64
- name: moverScore
dtype: float64
splits:
- name: validation
num_bytes: 50786
num_examples: 16
download_size: 0
dataset_size: 50786
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "cstnews_data-wiki_results"
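The ROUGE scores below measure n-gram overlap between generated and reference summaries; ROUGE-1, for instance, is unigram-overlap F1. A minimal self-contained sketch of the idea (not the exact `rouge_score` implementation, which also applies stemming and proper tokenization):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a generated and a reference summary."""
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((pred & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat", "the cat sat down"), 3))  # 0.857
```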
rouge = {'rouge1': 0.2655324963122997, 'rouge2': 0.11389233951370548, 'rougeL': 0.18758958356070038, 'rougeLsum': 0.18758958356070038}
bert = {'precision': 0.7567696012556553, 'recall': 0.6739303469657898, 'f1': 0.7126890532672405}
moverScore = 0.5746077032702939 |
CyberHarem/matsuwa_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of matsuwa (Kantai Collection)
This is the dataset of matsuwa (Kantai Collection), containing 361 images and their tags.
The core tags of this character are `long_hair, black_hair, multicolored_hair, green_eyes, gradient_hair, purple_hair, hat, white_headwear, freckles, sailor_hat`, which are pruned in this dataset.
Images were crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 361 | 323.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 361 | 211.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 758 | 438.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 361 | 294.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 758 | 581.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/matsuwa_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, blue_neckerchief, blue_ribbon, blue_sailor_collar, blue_skirt, long_sleeves, pleated_skirt, serafuku, simple_background, white_background, white_gloves, looking_at_viewer, solo, twitter_username, kneehighs, one-hour_drawing_challenge, white_socks, dated |
| 1 | 7 |  |  |  |  |  | 1girl, alternate_costume, solo, looking_at_viewer, blush, black_dress, cowboy_shot, dated, long_sleeves, one-hour_drawing_challenge, simple_background, white_background, artist_logo, frilled_dress |
| 2 | 8 |  |  |  |  |  | 1girl, solo, yukata, alternate_costume, candy_apple, long_sleeves, obi, wide_sleeves, full_body, hair_flower, looking_at_viewer, hairband |
| 3 | 16 |  |  |  |  |  | 1girl, santa_costume, solo, long_sleeves, blush, christmas, open_mouth, red_dress, fur_trim, white_pantyhose, red_mittens, reindeer_antlers, full_body, looking_at_viewer, simple_background, beret, gift_box, fake_antlers, white_scarf |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_neckerchief | blue_ribbon | blue_sailor_collar | blue_skirt | long_sleeves | pleated_skirt | serafuku | simple_background | white_background | white_gloves | looking_at_viewer | solo | twitter_username | kneehighs | one-hour_drawing_challenge | white_socks | dated | alternate_costume | blush | black_dress | cowboy_shot | artist_logo | frilled_dress | yukata | candy_apple | obi | wide_sleeves | full_body | hair_flower | hairband | santa_costume | christmas | open_mouth | red_dress | fur_trim | white_pantyhose | red_mittens | reindeer_antlers | beret | gift_box | fake_antlers | white_scarf |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:--------------|:---------------------|:-------------|:---------------|:----------------|:-----------|:--------------------|:-------------------|:---------------|:--------------------|:-------|:-------------------|:------------|:-----------------------------|:--------------|:--------|:--------------------|:--------|:--------------|:--------------|:--------------|:----------------|:---------|:--------------|:------|:---------------|:------------|:--------------|:-----------|:----------------|:------------|:-------------|:------------|:-----------|:------------------|:--------------|:-------------------|:--------|:-----------|:---------------|:--------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | | | | X | | | X | X | | X | X | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | | | X | | | | | | X | X | | | | | | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 3 | 16 |  |  |  |  |  | X | | | | | X | | | X | | | X | X | | | | | | | X | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
catalonia_independence | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ca
- es
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: cic
pretty_name: Catalonia Independence Corpus
config_names:
- catalan
- spanish
tags:
- stance-detection
dataset_info:
- config_name: catalan
features:
- name: id_str
dtype: string
- name: TWEET
dtype: string
- name: LABEL
dtype:
class_label:
names:
'0': AGAINST
'1': FAVOR
'2': NEUTRAL
splits:
- name: train
num_bytes: 1406242
num_examples: 6028
- name: test
num_bytes: 469196
num_examples: 2010
- name: validation
num_bytes: 473385
num_examples: 2010
download_size: 1638682
dataset_size: 2348823
- config_name: spanish
features:
- name: id_str
dtype: string
- name: TWEET
dtype: string
- name: LABEL
dtype:
class_label:
names:
'0': AGAINST
'1': FAVOR
'2': NEUTRAL
splits:
- name: train
num_bytes: 1507380
num_examples: 6046
- name: test
num_bytes: 501775
num_examples: 2016
- name: validation
num_bytes: 505084
num_examples: 2015
download_size: 1760636
dataset_size: 2514239
configs:
- config_name: catalan
data_files:
- split: train
path: catalan/train-*
- split: test
path: catalan/test-*
- split: validation
path: catalan/validation-*
default: true
- config_name: spanish
data_files:
- split: train
path: spanish/train-*
- split: test
path: spanish/test-*
- split: validation
path: spanish/validation-*
---
# Dataset Card for Catalonia Independence Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/ixa-ehu/catalonia-independence-corpus
- **Repository:** https://github.com/ixa-ehu/catalonia-independence-corpus
- **Paper:** [Multilingual Stance Detection: The Catalonia Independence Corpus](https://www.aclweb.org/anthology/2020.lrec-1.171/)
- **Leaderboard:**
- **Point of Contact:** [Rodrigo Agerri](https://github.com/ragerri) (corpus creator)
### Dataset Summary
This dataset contains two corpora in Spanish and Catalan that consist of annotated Twitter messages for automatic stance detection. The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
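The `LABEL` field is stored as an integer `ClassLabel`; a minimal sketch of decoding it, with the mapping taken from the YAML header above (assumed identical for both configs):

```python
# Integer-to-name mapping for the LABEL ClassLabel (from the YAML header above).
STANCE_LABELS = {0: "AGAINST", 1: "FAVOR", 2: "NEUTRAL"}

def decode_label(label_id: int) -> str:
    """Return the stance name for a ClassLabel integer id."""
    return STANCE_LABELS[label_id]

print(decode_label(1))  # FAVOR
```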
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Spanish and Catalan
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset. |
heliosprime/twitter_dataset_1713111570 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 16765
num_examples: 45
download_size: 16947
dataset_size: 16765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713111570"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/mizuno_midori_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mizuno_midori/水野翠 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of mizuno_midori/水野翠 (THE iDOLM@STER: Cinderella Girls), containing 44 images and their tags.
The core tags of this character are `long_hair, black_hair, ponytail, brown_eyes, breasts, bangs`, which are pruned in this dataset.
Images were crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 44 | 38.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 44 | 27.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 95 | 51.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 44 | 35.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 95 | 65.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mizuno_midori_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, solo, gloves, dress, card_(medium), character_name, gem_(symbol), hair_ornament, open_mouth, smile |
| 1 | 7 |  |  |  |  |  | 1girl, blush, hair_bow, looking_at_viewer, solo, open_mouth, :d, earrings, necklace, parted_bangs, bare_shoulders, bracelet, green_dress, hand_up, medium_breasts, simple_background, sleeveless_dress, white_background |
| 2 | 7 |  |  |  |  |  | 1girl, blush, serafuku, short_sleeves, solo, white_background, closed_mouth, hair_ribbon, looking_at_viewer, neckerchief, pleated_skirt, simple_background, white_shirt, blue_sailor_collar, blue_skirt, mouth_hold, navel, red_ribbon |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | gloves | dress | card_(medium) | character_name | gem_(symbol) | hair_ornament | open_mouth | smile | blush | hair_bow | looking_at_viewer | :d | earrings | necklace | parted_bangs | bare_shoulders | bracelet | green_dress | hand_up | medium_breasts | simple_background | sleeveless_dress | white_background | serafuku | short_sleeves | closed_mouth | hair_ribbon | neckerchief | pleated_skirt | white_shirt | blue_sailor_collar | blue_skirt | mouth_hold | navel | red_ribbon |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------|:--------|:----------------|:-----------------|:---------------|:----------------|:-------------|:--------|:--------|:-----------|:--------------------|:-----|:-----------|:-----------|:---------------|:-----------------|:-----------|:--------------|:----------|:-----------------|:--------------------|:-------------------|:-------------------|:-----------|:----------------|:---------------|:--------------|:--------------|:----------------|:--------------|:---------------------|:-------------|:-------------|:--------|:-------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | | | | | | | | | X | | X | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_Walmart-the-bag__openchat-3.5-Infinity | ---
pretty_name: Evaluation run of Walmart-the-bag/openchat-3.5-Infinity
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Walmart-the-bag/openchat-3.5-Infinity](https://huggingface.co/Walmart-the-bag/openchat-3.5-Infinity)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Walmart-the-bag__openchat-3.5-Infinity\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-29T22:24:05.513640](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__openchat-3.5-Infinity/blob/main/results_2023-12-29T22-24-05.513640.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6482331644164944,\n\
\ \"acc_stderr\": 0.032166482561296374,\n \"acc_norm\": 0.6494647991303745,\n\
\ \"acc_norm_stderr\": 0.03282046947130963,\n \"mc1\": 0.3525091799265606,\n\
\ \"mc1_stderr\": 0.016724646380756547,\n \"mc2\": 0.5198754304913998,\n\
\ \"mc2_stderr\": 0.015470093705054921\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5836177474402731,\n \"acc_stderr\": 0.014405618279436176,\n\
\ \"acc_norm\": 0.6262798634812287,\n \"acc_norm_stderr\": 0.01413770860175909\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6629157538338977,\n\
\ \"acc_stderr\": 0.00471747833568963,\n \"acc_norm\": 0.8404700258912567,\n\
\ \"acc_norm_stderr\": 0.0036542123295166145\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n\
\ \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n\
\ \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.66,\n\
\ \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \
\ \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n\
\ \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5872340425531914,\n \"acc_stderr\": 0.03218471141400351,\n\
\ \"acc_norm\": 0.5872340425531914,\n \"acc_norm_stderr\": 0.03218471141400351\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.047028804320496165,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.047028804320496165\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n\
\ \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4074074074074074,\n \"acc_stderr\": 0.025305906241590632,\n \"\
acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.025305906241590632\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.49206349206349204,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.49206349206349204,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n\
\ \"acc_stderr\": 0.023540799358723306,\n \"acc_norm\": 0.7806451612903226,\n\
\ \"acc_norm_stderr\": 0.023540799358723306\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n\
\ \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.032568666616811015,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.032568666616811015\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586815,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586815\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.02293514405391945,\n\
\ \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.02293514405391945\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.02394672474156397,\n \
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.02394672474156397\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3333333333333333,\n \"acc_stderr\": 0.02874204090394848,\n \
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.02874204090394848\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.030066761582977934,\n\
\ \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.030066761582977934\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669237,\n \"\
acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669237\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5601851851851852,\n \"acc_stderr\": 0.0338517797604481,\n \"acc_norm\"\
: 0.5601851851851852,\n \"acc_norm_stderr\": 0.0338517797604481\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n\
\ \"acc_stderr\": 0.026156867523931045,\n \"acc_norm\": 0.8333333333333334,\n\
\ \"acc_norm_stderr\": 0.026156867523931045\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601446,\n\
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601446\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n\
\ \"acc_stderr\": 0.030769352008229143,\n \"acc_norm\": 0.6995515695067265,\n\
\ \"acc_norm_stderr\": 0.030769352008229143\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n\
\ \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n\
\ \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.034624199316156234,\n\
\ \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.034624199316156234\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n\
\ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.80970625798212,\n\
\ \"acc_stderr\": 0.014036945850381394,\n \"acc_norm\": 0.80970625798212,\n\
\ \"acc_norm_stderr\": 0.014036945850381394\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468348,\n\
\ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468348\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.37318435754189944,\n\
\ \"acc_stderr\": 0.016175692013381957,\n \"acc_norm\": 0.37318435754189944,\n\
\ \"acc_norm_stderr\": 0.016175692013381957\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242557,\n\
\ \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242557\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n\
\ \"acc_stderr\": 0.02549425935069491,\n \"acc_norm\": 0.7202572347266881,\n\
\ \"acc_norm_stderr\": 0.02549425935069491\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.02540719779889016,\n\
\ \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.02540719779889016\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422466,\n \
\ \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422466\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47196870925684486,\n\
\ \"acc_stderr\": 0.01275015180292244,\n \"acc_norm\": 0.47196870925684486,\n\
\ \"acc_norm_stderr\": 0.01275015180292244\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7242647058823529,\n \"acc_stderr\": 0.027146271936625166,\n\
\ \"acc_norm\": 0.7242647058823529,\n \"acc_norm_stderr\": 0.027146271936625166\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6519607843137255,\n \"acc_stderr\": 0.019270998708223977,\n \
\ \"acc_norm\": 0.6519607843137255,\n \"acc_norm_stderr\": 0.019270998708223977\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.02916273841024977,\n\
\ \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.02916273841024977\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.02619392354445412,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.02619392354445412\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197771,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197771\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8596491228070176,\n \"acc_stderr\": 0.026640582539133196,\n\
\ \"acc_norm\": 0.8596491228070176,\n \"acc_norm_stderr\": 0.026640582539133196\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3525091799265606,\n\
\ \"mc1_stderr\": 0.016724646380756547,\n \"mc2\": 0.5198754304913998,\n\
\ \"mc2_stderr\": 0.015470093705054921\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8011049723756906,\n \"acc_stderr\": 0.011218629972515328\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6429112964366944,\n \
\ \"acc_stderr\": 0.013197931775445206\n }\n}\n```"
repo_url: https://huggingface.co/Walmart-the-bag/openchat-3.5-Infinity
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|arc:challenge|25_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|gsm8k|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hellaswag|10_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-29T22-24-05.513640.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-29T22-24-05.513640.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- '**/details_harness|winogrande|5_2023-12-29T22-24-05.513640.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-29T22-24-05.513640.parquet'
- config_name: results
data_files:
- split: 2023_12_29T22_24_05.513640
path:
- results_2023-12-29T22-24-05.513640.parquet
- split: latest
path:
- results_2023-12-29T22-24-05.513640.parquet
---
# Dataset Card for Evaluation run of Walmart-the-bag/openchat-3.5-Infinity
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Walmart-the-bag/openchat-3.5-Infinity](https://huggingface.co/Walmart-the-bag/openchat-3.5-Infinity) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Walmart-the-bag__openchat-3.5-Infinity",
"harness_winogrande_5",
split="train")
```
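Run splits are named after the run timestamp, with colons and dashes replaced by underscores. A small helper to turn such a split name back into a `datetime` (a sketch; `split_to_datetime` is a hypothetical helper, not part of the `datasets` library):

```python
from datetime import datetime

def split_to_datetime(split_name: str) -> datetime:
    """Parse a run split name like '2023_12_29T22_24_05.513640' into a datetime."""
    return datetime.strptime(split_name, "%Y_%m_%dT%H_%M_%S.%f")

print(split_to_datetime("2023_12_29T22_24_05.513640"))
```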
## Latest results
These are the [latest results from run 2023-12-29T22:24:05.513640](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__openchat-3.5-Infinity/blob/main/results_2023-12-29T22-24-05.513640.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6482331644164944,
"acc_stderr": 0.032166482561296374,
"acc_norm": 0.6494647991303745,
"acc_norm_stderr": 0.03282046947130963,
"mc1": 0.3525091799265606,
"mc1_stderr": 0.016724646380756547,
"mc2": 0.5198754304913998,
"mc2_stderr": 0.015470093705054921
},
"harness|arc:challenge|25": {
"acc": 0.5836177474402731,
"acc_stderr": 0.014405618279436176,
"acc_norm": 0.6262798634812287,
"acc_norm_stderr": 0.01413770860175909
},
"harness|hellaswag|10": {
"acc": 0.6629157538338977,
"acc_stderr": 0.00471747833568963,
"acc_norm": 0.8404700258912567,
"acc_norm_stderr": 0.0036542123295166145
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.04256193767901408,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.04256193767901408
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5872340425531914,
"acc_stderr": 0.03218471141400351,
"acc_norm": 0.5872340425531914,
"acc_norm_stderr": 0.03218471141400351
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.047028804320496165,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.047028804320496165
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.025305906241590632,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.025305906241590632
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723306,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723306
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586815,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586815
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.02293514405391945,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.02293514405391945
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.02394672474156397,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.02394672474156397
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.02874204090394848,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.02874204090394848
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.030066761582977934,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.030066761582977934
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8495412844036697,
"acc_stderr": 0.015328563932669237,
"acc_norm": 0.8495412844036697,
"acc_norm_stderr": 0.015328563932669237
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5601851851851852,
"acc_stderr": 0.0338517797604481,
"acc_norm": 0.5601851851851852,
"acc_norm_stderr": 0.0338517797604481
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601446,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.030769352008229143,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.030769352008229143
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990947,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990947
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.034624199316156234,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.034624199316156234
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.80970625798212,
"acc_stderr": 0.014036945850381394,
"acc_norm": 0.80970625798212,
"acc_norm_stderr": 0.014036945850381394
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.023948512905468348,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.023948512905468348
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.37318435754189944,
"acc_stderr": 0.016175692013381957,
"acc_norm": 0.37318435754189944,
"acc_norm_stderr": 0.016175692013381957
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242557,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242557
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.02549425935069491,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.02549425935069491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.02540719779889016,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.02540719779889016
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422466,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422466
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47196870925684486,
"acc_stderr": 0.01275015180292244,
"acc_norm": 0.47196870925684486,
"acc_norm_stderr": 0.01275015180292244
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7242647058823529,
"acc_stderr": 0.027146271936625166,
"acc_norm": 0.7242647058823529,
"acc_norm_stderr": 0.027146271936625166
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6519607843137255,
"acc_stderr": 0.019270998708223977,
"acc_norm": 0.6519607843137255,
"acc_norm_stderr": 0.019270998708223977
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.02916273841024977,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.02916273841024977
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.02619392354445412,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.02619392354445412
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197771,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197771
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8596491228070176,
"acc_stderr": 0.026640582539133196,
"acc_norm": 0.8596491228070176,
"acc_norm_stderr": 0.026640582539133196
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3525091799265606,
"mc1_stderr": 0.016724646380756547,
"mc2": 0.5198754304913998,
"mc2_stderr": 0.015470093705054921
},
"harness|winogrande|5": {
"acc": 0.8011049723756906,
"acc_stderr": 0.011218629972515328
},
"harness|gsm8k|5": {
"acc": 0.6429112964366944,
"acc_stderr": 0.013197931775445206
}
}
```
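Because each `harness|hendrycksTest-…` entry in the payload above carries an `acc` field, the weakest MMLU subtasks can be found with a simple sort (a sketch over a small hand-copied excerpt of the JSON above; the full payload would be loaded from the results file instead):

```python
# Excerpt of the results payload above, as a plain dict (values copied from the JSON).
results = {
    "harness|hendrycksTest-college_mathematics|5": {"acc": 0.31},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.38},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.8596491228070176},
}

# Rank MMLU subtasks by accuracy, weakest first.
weakest = sorted(
    (task for task in results if task.startswith("harness|hendrycksTest")),
    key=lambda t: results[t]["acc"],
)
print(weakest[0])  # lowest-scoring subtask in this excerpt
```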
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]