| datasetId | card |
|---|---|
TREC-AToMiC/TREC-2023-Image-to-Text | ---
dataset_info:
features:
- name: image_url
dtype: string
- name: image_id
dtype: string
- name: language
sequence: string
- name: caption_reference_description
sequence: string
- name: caption_alt_text_description
sequence: string
- name: caption_attribution_description
sequence: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 3267639.3360245144
num_examples: 200
download_size: 2638784
dataset_size: 3267639.3360245144
---
# Dataset Card for "TREC-2023-Image-to-Text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_rte_one_relativizer | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 969020
num_examples: 2764
- name: train
num_bytes: 827587
num_examples: 2258
download_size: 1143491
dataset_size: 1796607
---
# Dataset Card for "MULTI_VALUE_rte_one_relativizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LFBMS/class_dataset_donut2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_datev
'1': bilanz_lexware
'2': guv
'3': other
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 559064953.0
num_examples: 500
- name: train
num_bytes: 4343890380.0
num_examples: 4000
- name: validation
num_bytes: 548645901.0
num_examples: 500
download_size: 5424719748
dataset_size: 5451601234.0
---
# Dataset Card for "class_dataset_donut2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_chargoddard__llama-polyglot-13b | ---
pretty_name: Evaluation run of chargoddard/llama-polyglot-13b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [chargoddard/llama-polyglot-13b](https://huggingface.co/chargoddard/llama-polyglot-13b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 1 configuration, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__llama-polyglot-13b\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
  \ are the [latest results from run 2023-12-03T18:35:49.174459](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama-polyglot-13b/blob/main/results_2023-12-03T18-35-49.174459.json) (note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2259287338893101,\n\
\ \"acc_stderr\": 0.011519098777279956\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.2259287338893101,\n \"acc_stderr\": 0.011519098777279956\n\
\ }\n}\n```"
repo_url: https://huggingface.co/chargoddard/llama-polyglot-13b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_03T18_35_49.174459
path:
- '**/details_harness|gsm8k|5_2023-12-03T18-35-49.174459.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-03T18-35-49.174459.parquet'
- config_name: results
data_files:
- split: 2023_12_03T18_35_49.174459
path:
- results_2023-12-03T18-35-49.174459.parquet
- split: latest
path:
- results_2023-12-03T18-35-49.174459.parquet
---
# Dataset Card for Evaluation run of chargoddard/llama-polyglot-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/llama-polyglot-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/llama-polyglot-13b](https://huggingface.co/chargoddard/llama-polyglot-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__llama-polyglot-13b",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T18:35:49.174459](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama-polyglot-13b/blob/main/results_2023-12-03T18-35-49.174459.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2259287338893101,
"acc_stderr": 0.011519098777279956
},
"harness|gsm8k|5": {
"acc": 0.2259287338893101,
"acc_stderr": 0.011519098777279956
}
}
```
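The results object has the same shape for every task. As a minimal, self-contained sketch of reading a score back out of such a dict (the dict literal below just restates the JSON above):

```python
# The results JSON above, as a Python dict.
results = {
    "all": {"acc": 0.2259287338893101, "acc_stderr": 0.011519098777279956},
    "harness|gsm8k|5": {"acc": 0.2259287338893101, "acc_stderr": 0.011519098777279956},
}

# With a single evaluated task, the "all" aggregate equals the task's own score.
acc = results["harness|gsm8k|5"]["acc"]
assert abs(acc - results["all"]["acc"]) < 1e-12
print(f"gsm8k 5-shot accuracy: {acc:.2%}")  # → gsm8k 5-shot accuracy: 22.59%
```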
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
NLPC-UOM/Sinhala-News-Source-classification | ---
annotations_creators: []
language_creators:
- crowdsourced
language:
- si
license:
- mit
multilinguality:
- monolingual
pretty_name: sinhala-news-source-classification
size_categories: []
source_datasets: []
task_categories:
- text-classification
task_ids: []
---
This dataset contains Sinhala news headlines extracted from 9 news sources (websites): Sri Lanka Army, Dinamina, GossipLanka, Hiru, ITN, Lankapuwath, NewsLK, Newsfirst, and World Socialist Web Site-Sinhala. This is a processed version of the corpus created by *Sachintha, D., Piyarathna, L., Rajitha, C., and Ranathunga, S. (2021). Exploiting parallel corpora to improve multilingual embedding based document and sentence alignment*. Single-word sentences and invalid characters have been removed from the originally extracted corpus, which was also subsampled to handle class imbalance.
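The preprocessing described here can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' actual script; the `(source, headline)` pair format, the `clean_and_balance` name, and the random seed are all assumptions:

```python
import random
from collections import defaultdict

def clean_and_balance(examples, seed=42):
    """Drop single-word headlines, strip non-printable characters, and
    subsample every news source down to the smallest class size."""
    by_source = defaultdict(list)
    for source, headline in examples:
        headline = "".join(ch for ch in headline if ch.isprintable()).strip()
        if len(headline.split()) > 1:        # remove single-word sentences
            by_source[source].append(headline)
    smallest = min(len(items) for items in by_source.values())
    rng = random.Random(seed)                # reproducible subsampling
    return {src: rng.sample(items, smallest) for src, items in by_source.items()}
```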
If you use this dataset, please cite *Dhananjaya et al., BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*. |
CVasNLPExperiments/FGVC_Aircraft_test_google_flan_t5_xxl_mode_A_ns_3333 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_ViT_L_14_Attributes_ViT_L_14_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 1426346
num_examples: 3333
- name: fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices
num_bytes: 1363907
num_examples: 3333
download_size: 418785
dataset_size: 2790253
---
# Dataset Card for "FGVC_Aircraft_test_google_flan_t5_xxl_mode_A_ns_3333"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/VALUE_sst2_negative_concord | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: sentence
dtype: string
- name: label
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 6956
num_examples: 48
- name: test
num_bytes: 12384
num_examples: 84
- name: train
num_bytes: 165604
num_examples: 1366
download_size: 95928
dataset_size: 184944
---
# Dataset Card for "VALUE_sst2_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kursathalat/label_ds | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for label_ds
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("kursathalat/label_ds")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("kursathalat/label_ds")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | Label | label_selection | True | N/A | ['World', 'Sports', 'Business', 'Sci/Tech'] |
The **suggestions** are human- or machine-generated recommendations for each question, intended to assist the annotator during the annotation process. They are always linked to existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, with the column names suffixed accordingly.
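The column-naming convention can be sketched as follows (an illustration of the naming scheme only, not Argilla API code):

```python
questions = ["label"]

# Every question yields two companion columns in the flattened dataset:
# its suggested value and the metadata attached to that suggestion.
columns = []
for q in questions:
    columns += [q, f"{q}-suggestion", f"{q}-suggestion-metadata"]

print(columns)  # → ['label', 'label-suggestion', 'label-suggestion-metadata']
```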
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give annotators additional context, or to record details about the record itself, such as the author, the date, or the original source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are optional as well: a plain string that can be used to provide instructions to the annotators. Find them in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "record-0",
"fields": {
"text": "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street\u0027s dwindling\\band of ultra-cynics, are seeing green again."
},
"metadata": {},
"responses": [
{
"status": "submitted",
"user_id": "525f4274-ebb4-4aee-a116-f8b422b2e2b4",
"values": {
"label": {
"value": "Sports"
}
}
},
{
"status": "submitted",
"user_id": "464c365f-9fd7-4db9-a06f-42c320c54547",
"values": {
"label": {
"value": "Sports"
}
}
}
],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "record-0",
"label": [
{
"status": "submitted",
"user_id": "525f4274-ebb4-4aee-a116-f8b422b2e2b4",
"value": "Sports"
},
{
"status": "submitted",
"user_id": "464c365f-9fd7-4db9-a06f-42c320c54547",
"value": "Sports"
}
],
"label-suggestion": null,
"label-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"metadata": "{}",
"text": "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street\u0027s dwindling\\band of ultra-cynics, are seeing green again."
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **label** is of type `label_selection` with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **label-suggestion** is of type `label_selection` with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to give annotators additional context, or to record details about the record itself, such as the author, the date, or the original source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Classify the articles into one of the four categories.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
KaioSan/Stolas | ---
license: openrail
---
|
Sentdex/WSB-003.002 | ---
license: apache-2.0
---
|
ajmangus/qm_charlie_easy_2_mixture_1.0e | ---
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: charlie_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 201936.0
num_examples: 1809
- name: validation
num_bytes: 18961.666666666668
num_examples: 173
- name: test
num_bytes: 21168.0
num_examples: 194
download_size: 69643
dataset_size: 242065.66666666666
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
esnli | ---
language:
- en
paperswithcode_id: e-snli
pretty_name: e-SNLI
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: explanation_1
dtype: string
- name: explanation_2
dtype: string
- name: explanation_3
dtype: string
config_name: plain_text
splits:
- name: test
num_bytes: 3387169
num_examples: 9824
- name: train
num_bytes: 108024142
num_examples: 549367
- name: validation
num_bytes: 3423725
num_examples: 9842
download_size: 204516010
dataset_size: 114835036
---
# Dataset Card for "esnli"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/OanaMariaCamburu/e-SNLI](https://github.com/OanaMariaCamburu/e-SNLI)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 204.51 MB
- **Size of the generated dataset:** 114.84 MB
- **Total amount of disk used:** 319.35 MB
### Dataset Summary
The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to
include human-annotated natural language explanations of the entailment
relations.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 204.51 MB
- **Size of the generated dataset:** 114.84 MB
- **Total amount of disk used:** 319.35 MB
An example of 'validation' looks as follows.
```
{
"explanation_1": "A woman must be present to smile.",
"explanation_2": "A woman smiling implies that she is present.",
"explanation_3": "A smiling woman is also present.",
"hypothesis": "A woman is present.",
"label": 0,
"premise": "A woman smiles at the child."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `explanation_1`: a `string` feature.
- `explanation_2`: a `string` feature.
- `explanation_3`: a `string` feature.
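The integer label maps back to its name by position in the class list. A small self-contained sketch of that mapping, reusing the validation example shown earlier (assuming the label order declared in the metadata):

```python
# Class names in the order declared by the dataset's class_label feature.
LABEL_NAMES = ["entailment", "neutral", "contradiction"]

example = {
    "premise": "A woman smiles at the child.",
    "hypothesis": "A woman is present.",
    "label": 0,
}

print(LABEL_NAMES[example["label"]])  # → entailment
```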
### Data Splits
| name |train |validation|test|
|----------|-----:|---------:|---:|
|plain_text|549367| 9842|9824|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@incollection{NIPS2018_8163,
title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
author = {Camburu, Oana-Maria and Rockt\"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
booktitle = {Advances in Neural Information Processing Systems 31},
editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
pages = {9539--9549},
year = {2018},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
typingmonk/hello_world | ---
language:
- zh
pretty_name: hello_world
configs:
- config_name: "46251501064"
data_files: 46251501064.csv
dataset_info:
- config_name: "46251501064"
features:
- name: id
dtype: string
- name: published_at
dtype: string
- name: url
dtype: string
- name: text
dtype: string
- name: type
dtype: string
- name: reaction_count
dtype: int64
- name: comment_count
dtype: int64
- name: share_count
dtype: int64
- name: like_count
dtype: int64
- name: angry_count
dtype: int64
- name: haha_count
dtype: int64
- name: love_count
dtype: string
- name: sad_count
dtype: string
- name: wow_count
dtype: string
- name: support_count
dtype: string
- name: quote_url
dtype: string
- name: quote_title
dtype: string
- name: quote_text
dtype: string
- name: quote_type
dtype: string
--- |
SUSTech/harness-result | ---
dataset_info:
features:
- name: model
dtype: string
- name: winogrande
dtype: float64
- name: arc
dtype: float64
- name: hellaswag
dtype: float64
- name: truthfulqa
dtype: float64
splits:
- name: train
num_bytes: 288
num_examples: 6
download_size: 3077
dataset_size: 288
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tvergho/cards-underlining-kto | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: label
dtype: bool
- name: card_id
dtype: string
splits:
- name: train
num_bytes: 16063281
num_examples: 2085
download_size: 8983592
dataset_size: 16063281
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cards-underlining-kto"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Urna02/hangul_ivanov | ---
license: apache-2.0
---
|
hippocrates/Alpaca_train | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 44978419
num_examples: 52002
download_size: 16852893
dataset_size: 44978419
---
# Dataset Card for "Alpaca_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fun1021183/cvt2_GS3_0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 1832280306.75
num_examples: 13258
- name: test
num_bytes: 640923801.75
num_examples: 4722
download_size: 2373732866
dataset_size: 2473204108.5
---
# Dataset Card for "cvt2_GS3_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TuringsSolutions/Jedi500 | ---
license: mit
---
|
Doub7e/SDv2-Count-Repeated-7 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: T5_last_hidden_states
sequence:
sequence:
sequence: float32
- name: style
dtype: string
splits:
- name: train
num_bytes: 1333316312.25
num_examples: 1150
download_size: 1141543339
dataset_size: 1333316312.25
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_augtoma__qCammel-70-x | ---
pretty_name: Evaluation run of augtoma/qCammel-70-x
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [augtoma/qCammel-70-x](https://huggingface.co/augtoma/qCammel-70-x) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_augtoma__qCammel-70-x\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
  These are the [latest results from run 2023-10-18T15:29:16.459278](https://huggingface.co/datasets/open-llm-leaderboard/details_augtoma__qCammel-70-x/blob/main/results_2023-10-18T15-29-16.459278.json) (note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.033766778523489936,\n\
\ \"em_stderr\": 0.001849802869119515,\n \"f1\": 0.10340918624161041,\n\
\ \"f1_stderr\": 0.0022106009828094797,\n \"acc\": 0.5700654570173166,\n\
\ \"acc_stderr\": 0.011407494958111332\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.033766778523489936,\n \"em_stderr\": 0.001849802869119515,\n\
\ \"f1\": 0.10340918624161041,\n \"f1_stderr\": 0.0022106009828094797\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2971948445792267,\n \
\ \"acc_stderr\": 0.012588685966624186\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8429360694554064,\n \"acc_stderr\": 0.010226303949598479\n\
\ }\n}\n```"
repo_url: https://huggingface.co/augtoma/qCammel-70-x
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|arc:challenge|25_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T15_29_16.459278
path:
- '**/details_harness|drop|3_2023-10-18T15-29-16.459278.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T15-29-16.459278.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T15_29_16.459278
path:
- '**/details_harness|gsm8k|5_2023-10-18T15-29-16.459278.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T15-29-16.459278.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hellaswag|10_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T21:18:05.927693.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-31T21:18:05.927693.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-31T21:18:05.927693.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T15_29_16.459278
path:
- '**/details_harness|winogrande|5_2023-10-18T15-29-16.459278.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T15-29-16.459278.parquet'
- config_name: results
data_files:
- split: 2023_07_31T21_18_05.927693
path:
- results_2023-07-31T21:18:05.927693.parquet
- split: 2023_10_18T15_29_16.459278
path:
- results_2023-10-18T15-29-16.459278.parquet
- split: latest
path:
- results_2023-10-18T15-29-16.459278.parquet
---
# Dataset Card for Evaluation run of augtoma/qCammel-70-x
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/augtoma/qCammel-70-x
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [augtoma/qCammel-70-x](https://huggingface.co/augtoma/qCammel-70-x) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_augtoma__qCammel-70-x",
"harness_winogrande_5",
split="train")
```
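The timestamped split names listed in the configurations above appear to be derived mechanically from each run's timestamp, with `-` and `:` replaced by `_` (the fractional-second `.` is kept). A small sketch of that apparent convention (the helper name is illustrative, not part of any library):

```python
def split_name(run_timestamp: str) -> str:
    """Derive a config split name from a run timestamp.

    The split names in this card look like the run timestamp with "-"
    and ":" replaced by "_" (the fractional-second "." is kept as-is).
    """
    return run_timestamp.replace("-", "_").replace(":", "_")

print(split_name("2023-10-18T15:29:16.459278"))
# → 2023_10_18T15_29_16.459278
```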
## Latest results
These are the [latest results from run 2023-10-18T15:29:16.459278](https://huggingface.co/datasets/open-llm-leaderboard/details_augtoma__qCammel-70-x/blob/main/results_2023-10-18T15-29-16.459278.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.033766778523489936,
"em_stderr": 0.001849802869119515,
"f1": 0.10340918624161041,
"f1_stderr": 0.0022106009828094797,
"acc": 0.5700654570173166,
"acc_stderr": 0.011407494958111332
},
"harness|drop|3": {
"em": 0.033766778523489936,
"em_stderr": 0.001849802869119515,
"f1": 0.10340918624161041,
"f1_stderr": 0.0022106009828094797
},
"harness|gsm8k|5": {
"acc": 0.2971948445792267,
"acc_stderr": 0.012588685966624186
},
"harness|winogrande|5": {
"acc": 0.8429360694554064,
"acc_stderr": 0.010226303949598479
}
}
```
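As a quick sanity check on the aggregation, the top-level `acc` under `"all"` matches the unweighted mean of the per-task accuracies reported above (a minimal sketch using the values from this results JSON; the actual aggregation logic lives in the leaderboard code):

```python
# Per-task accuracies copied from the results JSON above.
# (Only gsm8k and winogrande report "acc"; drop reports em/f1 instead.)
per_task_acc = {
    "harness|gsm8k|5": 0.2971948445792267,
    "harness|winogrande|5": 0.8429360694554064,
}

# The "all" accuracy is the unweighted mean of the per-task values.
overall_acc = sum(per_task_acc.values()) / len(per_task_acc)
print(overall_acc)  # ~0.5700654570173166, matching "all" -> "acc"
```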
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Intuit-GenSRF/jquiros-suicide-es | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 434028422
num_examples: 230832
download_size: 266158998
dataset_size: 434028422
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jquiros-suicide-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/VQAv2_sample_validation_google_flan_t5_xxl_mode_A_D_PNP_GENERIC_C_Q_rices_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_clean_
num_bytes: 141464
num_examples: 1000
download_size: 53158
dataset_size: 141464
---
# Dataset Card for "VQAv2_sample_validation_google_flan_t5_xxl_mode_A_D_PNP_GENERIC_C_Q_rices_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
asun17904/wikitext_bank_examples_with_labels | ---
dataset_info:
features:
- name: sentences
sequence: string
- name: soi
sequence: string
- name: other
sequence: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 5893831
num_examples: 105
download_size: 3321464
dataset_size: 5893831
---
# Dataset Card for "wikitext_bank_examples_with_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polinaeterna/test_push_no_conf | ---
dataset_info:
features:
- name: x
dtype: int64
- name: y
dtype: string
splits:
- name: train
num_bytes: 120
num_examples: 8
- name: test
num_bytes: 46
num_examples: 3
download_size: 1712
dataset_size: 166
---
# Dataset Card for "test_push_no_conf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmcooller/neural-matia | ---
dataset_info:
features:
- name: example
dtype: string
splits:
- name: train
num_bytes: 481929
num_examples: 634
- name: test
num_bytes: 71883
num_examples: 95
download_size: 97368
dataset_size: 553812
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
preference-agents/enron-jeff-dasovich | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: string
- name: message_id
dtype: string
- name: from
dtype: string
- name: to
dtype: string
- name: date
dtype: string
- name: subject
dtype: string
- name: content
dtype: string
- name: email_context
dtype: string
- name: token_count_content
dtype: int32
- name: token_count_context
dtype: int32
- name: __index_level_0__
dtype: int64
- name: generated_intent
dtype: string
- name: train_data_gemma_format
dtype: string
- name: baseline_gemma-7b-it
dtype: string
- name: baseline_gemma-2b-it
dtype: string
- name: baseline_Mistral-7B-Instruct-v0.2
dtype: string
- name: automatic_eval_Mistral-7B-Instruct-v0.2
struct:
- name: BERT Cosine Similarity
dtype: float64
- name: BLEU Score
dtype: float64
- name: Jaccard Similarity
dtype: float64
- name: Levenshtein Distance
dtype: int64
- name: TF-IDF Cosine Similarity
dtype: float64
- name: automatic_eval_gemma-2b-it
struct:
- name: BERT Cosine Similarity
dtype: float64
- name: BLEU Score
dtype: float64
- name: Jaccard Similarity
dtype: float64
- name: Levenshtein Distance
dtype: int64
- name: TF-IDF Cosine Similarity
dtype: float64
- name: automatic_eval_gemma-7b-it
struct:
- name: BERT Cosine Similarity
dtype: float64
- name: BLEU Score
dtype: float64
- name: Jaccard Similarity
dtype: float64
- name: Levenshtein Distance
dtype: int64
- name: TF-IDF Cosine Similarity
dtype: float64
- name: sft_prompt_gemma-2b-it
dtype: string
- name: sft_text_gemma
dtype: string
- name: automatic_eval_finetune_gemma-2b-it
struct:
- name: BERT Cosine Similarity
dtype: float64
- name: BLEU Score
dtype: float64
- name: Jaccard Similarity
dtype: float64
- name: Levenshtein Distance
dtype: int64
- name: TF-IDF Cosine Similarity
dtype: float64
- name: finetune_gemma-2b-it
dtype: string
- name: cleaned_finetune_gemma-2b-it
dtype: string
- name: finetune_gemma-7b-it
dtype: string
- name: cleaned_finetune_gemma-7b-it
dtype: string
- name: automatic_eval_finetune_gemma-7b-it
struct:
- name: BERT Cosine Similarity
dtype: float64
- name: BLEU Score
dtype: float64
- name: Jaccard Similarity
dtype: float64
- name: Levenshtein Distance
dtype: int64
- name: TF-IDF Cosine Similarity
dtype: float64
- name: automatic_eval_finetune_Mistral-7B-Instruct-v0.2
struct:
- name: BERT Cosine Similarity
dtype: float64
- name: BLEU Score
dtype: float64
- name: Jaccard Similarity
dtype: float64
- name: Levenshtein Distance
dtype: int64
- name: TF-IDF Cosine Similarity
dtype: float64
- name: sft_text_Mistral
dtype: string
- name: finetune_Mistral-7B-Instruct-v0.2
dtype: string
- name: cleaned_finetune_Mistral-7B-Instruct-v0.2
dtype: string
splits:
- name: train
num_bytes: 4645727
num_examples: 260
- name: test
num_bytes: 1228392
num_examples: 65
download_size: 3206836
dataset_size: 5874119
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
JCTN/hypernetworks | ---
license: other
---
|
gagan3012/Arabic-stsb_multi_mt_extend-benchmark | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float32
splits:
- name: validation
num_bytes: 242066
num_examples: 1379
download_size: 127924
dataset_size: 242066
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
cq01/mawps-asdiv-a_svamp | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Numbers
sequence: float64
- name: Equation
sequence: string
- name: Answer
dtype: float64
- name: group_nums
sequence: int64
- name: Body
dtype: string
- name: Ques
dtype: string
- name: Type
dtype: string
- name: Variation Type
sequence: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 1570340
num_examples: 3138
- name: validation
num_bytes: 595992
num_examples: 1000
download_size: 634229
dataset_size: 2166332
---
# Dataset Card for "mawps-asdiv-a_svamp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_moreh__MoMo-72B-lora-1.8.7-DPO | ---
pretty_name: Evaluation run of moreh/MoMo-72B-lora-1.8.7-DPO
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [moreh/MoMo-72B-lora-1.8.7-DPO](https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_moreh__MoMo-72B-lora-1.8.7-DPO\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-22T10:33:58.465501](https://huggingface.co/datasets/open-llm-leaderboard/details_moreh__MoMo-72B-lora-1.8.7-DPO/blob/main/results_2024-01-22T10-33-58.465501.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.76953499319056,\n\
\ \"acc_stderr\": 0.0279294705479517,\n \"acc_norm\": 0.7716820258755411,\n\
\ \"acc_norm_stderr\": 0.0284840002969871,\n \"mc1\": 0.631578947368421,\n\
\ \"mc1_stderr\": 0.016886551261046046,\n \"mc2\": 0.7470556249138,\n\
\ \"mc2_stderr\": 0.014379615349295343\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6800341296928327,\n \"acc_stderr\": 0.013631345807016195,\n\
\ \"acc_norm\": 0.7081911262798635,\n \"acc_norm_stderr\": 0.013284525292403511\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6733718382792272,\n\
\ \"acc_stderr\": 0.004680215003395925,\n \"acc_norm\": 0.8595897231627166,\n\
\ \"acc_norm_stderr\": 0.0034670217932838386\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7111111111111111,\n\
\ \"acc_stderr\": 0.03915450630414251,\n \"acc_norm\": 0.7111111111111111,\n\
\ \"acc_norm_stderr\": 0.03915450630414251\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8881578947368421,\n \"acc_stderr\": 0.02564834125169361,\n\
\ \"acc_norm\": 0.8881578947368421,\n \"acc_norm_stderr\": 0.02564834125169361\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.81,\n\
\ \"acc_stderr\": 0.03942772444036623,\n \"acc_norm\": 0.81,\n \
\ \"acc_norm_stderr\": 0.03942772444036623\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8452830188679246,\n \"acc_stderr\": 0.02225707555879128,\n\
\ \"acc_norm\": 0.8452830188679246,\n \"acc_norm_stderr\": 0.02225707555879128\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9305555555555556,\n\
\ \"acc_stderr\": 0.02125797482283205,\n \"acc_norm\": 0.9305555555555556,\n\
\ \"acc_norm_stderr\": 0.02125797482283205\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.63,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n\
\ \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7687861271676301,\n\
\ \"acc_stderr\": 0.03214737302029468,\n \"acc_norm\": 0.7687861271676301,\n\
\ \"acc_norm_stderr\": 0.03214737302029468\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5490196078431373,\n \"acc_stderr\": 0.049512182523962604,\n\
\ \"acc_norm\": 0.5490196078431373,\n \"acc_norm_stderr\": 0.049512182523962604\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n\
\ \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7914893617021277,\n \"acc_stderr\": 0.02655698211783873,\n\
\ \"acc_norm\": 0.7914893617021277,\n \"acc_norm_stderr\": 0.02655698211783873\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5964912280701754,\n\
\ \"acc_stderr\": 0.04615186962583707,\n \"acc_norm\": 0.5964912280701754,\n\
\ \"acc_norm_stderr\": 0.04615186962583707\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.7793103448275862,\n \"acc_stderr\": 0.03455930201924811,\n\
\ \"acc_norm\": 0.7793103448275862,\n \"acc_norm_stderr\": 0.03455930201924811\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.6957671957671958,\n \"acc_stderr\": 0.023695415009463087,\n \"\
acc_norm\": 0.6957671957671958,\n \"acc_norm_stderr\": 0.023695415009463087\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5714285714285714,\n\
\ \"acc_stderr\": 0.04426266681379909,\n \"acc_norm\": 0.5714285714285714,\n\
\ \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8838709677419355,\n\
\ \"acc_stderr\": 0.018225757949432306,\n \"acc_norm\": 0.8838709677419355,\n\
\ \"acc_norm_stderr\": 0.018225757949432306\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.645320197044335,\n \"acc_stderr\": 0.0336612448905145,\n\
\ \"acc_norm\": 0.645320197044335,\n \"acc_norm_stderr\": 0.0336612448905145\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\"\
: 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8606060606060606,\n \"acc_stderr\": 0.0270459488258654,\n\
\ \"acc_norm\": 0.8606060606060606,\n \"acc_norm_stderr\": 0.0270459488258654\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9343434343434344,\n \"acc_stderr\": 0.01764652667723332,\n \"\
acc_norm\": 0.9343434343434344,\n \"acc_norm_stderr\": 0.01764652667723332\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9896373056994818,\n \"acc_stderr\": 0.007308424386792194,\n\
\ \"acc_norm\": 0.9896373056994818,\n \"acc_norm_stderr\": 0.007308424386792194\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8076923076923077,\n \"acc_stderr\": 0.019982347208637296,\n\
\ \"acc_norm\": 0.8076923076923077,\n \"acc_norm_stderr\": 0.019982347208637296\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.4666666666666667,\n \"acc_stderr\": 0.03041771696171748,\n \
\ \"acc_norm\": 0.4666666666666667,\n \"acc_norm_stderr\": 0.03041771696171748\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.023005459446673957,\n\
\ \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.023005459446673957\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.5629139072847682,\n \"acc_stderr\": 0.040500357222306355,\n \"\
acc_norm\": 0.5629139072847682,\n \"acc_norm_stderr\": 0.040500357222306355\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9284403669724771,\n \"acc_stderr\": 0.011051255247815476,\n \"\
acc_norm\": 0.9284403669724771,\n \"acc_norm_stderr\": 0.011051255247815476\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6805555555555556,\n \"acc_stderr\": 0.03179876342176853,\n \"\
acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.03179876342176853\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9215686274509803,\n \"acc_stderr\": 0.018869514646658928,\n \"\
acc_norm\": 0.9215686274509803,\n \"acc_norm_stderr\": 0.018869514646658928\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9071729957805907,\n \"acc_stderr\": 0.018889750550956715,\n \
\ \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.018889750550956715\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8026905829596412,\n\
\ \"acc_stderr\": 0.02670985334496796,\n \"acc_norm\": 0.8026905829596412,\n\
\ \"acc_norm_stderr\": 0.02670985334496796\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8854961832061069,\n \"acc_stderr\": 0.027927473753597453,\n\
\ \"acc_norm\": 0.8854961832061069,\n \"acc_norm_stderr\": 0.027927473753597453\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8925619834710744,\n \"acc_stderr\": 0.028268812192540616,\n \"\
acc_norm\": 0.8925619834710744,\n \"acc_norm_stderr\": 0.028268812192540616\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8611111111111112,\n\
\ \"acc_stderr\": 0.0334327006286962,\n \"acc_norm\": 0.8611111111111112,\n\
\ \"acc_norm_stderr\": 0.0334327006286962\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8466257668711656,\n \"acc_stderr\": 0.028311601441438596,\n\
\ \"acc_norm\": 0.8466257668711656,\n \"acc_norm_stderr\": 0.028311601441438596\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6071428571428571,\n\
\ \"acc_stderr\": 0.046355501356099754,\n \"acc_norm\": 0.6071428571428571,\n\
\ \"acc_norm_stderr\": 0.046355501356099754\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.03393295729761011,\n\
\ \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.03393295729761011\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n\
\ \"acc_stderr\": 0.015537514263253874,\n \"acc_norm\": 0.9401709401709402,\n\
\ \"acc_norm_stderr\": 0.015537514263253874\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977725,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977725\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9157088122605364,\n\
\ \"acc_stderr\": 0.009934966499513784,\n \"acc_norm\": 0.9157088122605364,\n\
\ \"acc_norm_stderr\": 0.009934966499513784\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8410404624277457,\n \"acc_stderr\": 0.019685307033571946,\n\
\ \"acc_norm\": 0.8410404624277457,\n \"acc_norm_stderr\": 0.019685307033571946\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7027932960893855,\n\
\ \"acc_stderr\": 0.015285313353641597,\n \"acc_norm\": 0.7027932960893855,\n\
\ \"acc_norm_stderr\": 0.015285313353641597\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8496732026143791,\n \"acc_stderr\": 0.02046417512433263,\n\
\ \"acc_norm\": 0.8496732026143791,\n \"acc_norm_stderr\": 0.02046417512433263\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8488745980707395,\n\
\ \"acc_stderr\": 0.020342749744428647,\n \"acc_norm\": 0.8488745980707395,\n\
\ \"acc_norm_stderr\": 0.020342749744428647\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8796296296296297,\n \"acc_stderr\": 0.018105414094329676,\n\
\ \"acc_norm\": 0.8796296296296297,\n \"acc_norm_stderr\": 0.018105414094329676\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6453900709219859,\n \"acc_stderr\": 0.02853865002887863,\n \
\ \"acc_norm\": 0.6453900709219859,\n \"acc_norm_stderr\": 0.02853865002887863\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.6088657105606258,\n\
\ \"acc_stderr\": 0.01246386183998206,\n \"acc_norm\": 0.6088657105606258,\n\
\ \"acc_norm_stderr\": 0.01246386183998206\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8345588235294118,\n \"acc_stderr\": 0.02257177102549473,\n\
\ \"acc_norm\": 0.8345588235294118,\n \"acc_norm_stderr\": 0.02257177102549473\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8169934640522876,\n \"acc_stderr\": 0.01564306991127334,\n \
\ \"acc_norm\": 0.8169934640522876,\n \"acc_norm_stderr\": 0.01564306991127334\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7545454545454545,\n\
\ \"acc_stderr\": 0.041220665028782855,\n \"acc_norm\": 0.7545454545454545,\n\
\ \"acc_norm_stderr\": 0.041220665028782855\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8163265306122449,\n \"acc_stderr\": 0.024789071332007643,\n\
\ \"acc_norm\": 0.8163265306122449,\n \"acc_norm_stderr\": 0.024789071332007643\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.900497512437811,\n\
\ \"acc_stderr\": 0.021166216304659393,\n \"acc_norm\": 0.900497512437811,\n\
\ \"acc_norm_stderr\": 0.021166216304659393\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.94,\n \"acc_stderr\": 0.02386832565759419,\n \
\ \"acc_norm\": 0.94,\n \"acc_norm_stderr\": 0.02386832565759419\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5903614457831325,\n\
\ \"acc_stderr\": 0.038284011150790206,\n \"acc_norm\": 0.5903614457831325,\n\
\ \"acc_norm_stderr\": 0.038284011150790206\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.02517298435015577,\n\
\ \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.02517298435015577\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.631578947368421,\n\
\ \"mc1_stderr\": 0.016886551261046046,\n \"mc2\": 0.7470556249138,\n\
\ \"mc2_stderr\": 0.014379615349295343\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.840568271507498,\n \"acc_stderr\": 0.010288617479454764\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7862016679302501,\n \
\ \"acc_stderr\": 0.01129305469863505\n }\n}\n```"
repo_url: https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|arc:challenge|25_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|gsm8k|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hellaswag|10_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T10-33-58.465501.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-22T10-33-58.465501.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- '**/details_harness|winogrande|5_2024-01-22T10-33-58.465501.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-22T10-33-58.465501.parquet'
- config_name: results
data_files:
- split: 2024_01_22T10_33_58.465501
path:
- results_2024-01-22T10-33-58.465501.parquet
- split: latest
path:
- results_2024-01-22T10-33-58.465501.parquet
---
# Dataset Card for Evaluation run of moreh/MoMo-72B-lora-1.8.7-DPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [moreh/MoMo-72B-lora-1.8.7-DPO](https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_moreh__MoMo-72B-lora-1.8.7-DPO",
"harness_winogrande_5",
split="train")
```
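Because each run is stored under a split named after its timestamp (in the form `YYYY_MM_DDTHH_MM_SS.ffffff`), the split names sort lexicographically in chronological order, so the last one after sorting is always the most recent run. A minimal sketch (the first split name below is illustrative, not from this repository):

```python
# Timestamp-formatted split names sort lexicographically in
# chronological order, so the last element is the latest run.
split_names = [
    "2024_01_20T08_12_44.120000",  # hypothetical earlier run
    "2024_01_22T10_33_58.465501",  # the run in this repository
]

latest_split = sorted(split_names)[-1]
print(latest_split)  # → 2024_01_22T10_33_58.465501
```

This is why the repository can expose a stable "latest" split alias without re-reading any file contents: picking the maximum split name is enough.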
## Latest results
These are the [latest results from run 2024-01-22T10:33:58.465501](https://huggingface.co/datasets/open-llm-leaderboard/details_moreh__MoMo-72B-lora-1.8.7-DPO/blob/main/results_2024-01-22T10-33-58.465501.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.76953499319056,
"acc_stderr": 0.0279294705479517,
"acc_norm": 0.7716820258755411,
"acc_norm_stderr": 0.0284840002969871,
"mc1": 0.631578947368421,
"mc1_stderr": 0.016886551261046046,
"mc2": 0.7470556249138,
"mc2_stderr": 0.014379615349295343
},
"harness|arc:challenge|25": {
"acc": 0.6800341296928327,
"acc_stderr": 0.013631345807016195,
"acc_norm": 0.7081911262798635,
"acc_norm_stderr": 0.013284525292403511
},
"harness|hellaswag|10": {
"acc": 0.6733718382792272,
"acc_stderr": 0.004680215003395925,
"acc_norm": 0.8595897231627166,
"acc_norm_stderr": 0.0034670217932838386
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7111111111111111,
"acc_stderr": 0.03915450630414251,
"acc_norm": 0.7111111111111111,
"acc_norm_stderr": 0.03915450630414251
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8881578947368421,
"acc_stderr": 0.02564834125169361,
"acc_norm": 0.8881578947368421,
"acc_norm_stderr": 0.02564834125169361
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036623,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036623
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8452830188679246,
"acc_stderr": 0.02225707555879128,
"acc_norm": 0.8452830188679246,
"acc_norm_stderr": 0.02225707555879128
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9305555555555556,
"acc_stderr": 0.02125797482283205,
"acc_norm": 0.9305555555555556,
"acc_norm_stderr": 0.02125797482283205
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7687861271676301,
"acc_stderr": 0.03214737302029468,
"acc_norm": 0.7687861271676301,
"acc_norm_stderr": 0.03214737302029468
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5490196078431373,
"acc_stderr": 0.049512182523962604,
"acc_norm": 0.5490196078431373,
"acc_norm_stderr": 0.049512182523962604
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7914893617021277,
"acc_stderr": 0.02655698211783873,
"acc_norm": 0.7914893617021277,
"acc_norm_stderr": 0.02655698211783873
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5964912280701754,
"acc_stderr": 0.04615186962583707,
"acc_norm": 0.5964912280701754,
"acc_norm_stderr": 0.04615186962583707
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7793103448275862,
"acc_stderr": 0.03455930201924811,
"acc_norm": 0.7793103448275862,
"acc_norm_stderr": 0.03455930201924811
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6957671957671958,
"acc_stderr": 0.023695415009463087,
"acc_norm": 0.6957671957671958,
"acc_norm_stderr": 0.023695415009463087
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8838709677419355,
"acc_stderr": 0.018225757949432306,
"acc_norm": 0.8838709677419355,
"acc_norm_stderr": 0.018225757949432306
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.645320197044335,
"acc_stderr": 0.0336612448905145,
"acc_norm": 0.645320197044335,
"acc_norm_stderr": 0.0336612448905145
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8606060606060606,
"acc_stderr": 0.0270459488258654,
"acc_norm": 0.8606060606060606,
"acc_norm_stderr": 0.0270459488258654
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9343434343434344,
"acc_stderr": 0.01764652667723332,
"acc_norm": 0.9343434343434344,
"acc_norm_stderr": 0.01764652667723332
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9896373056994818,
"acc_stderr": 0.007308424386792194,
"acc_norm": 0.9896373056994818,
"acc_norm_stderr": 0.007308424386792194
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8076923076923077,
"acc_stderr": 0.019982347208637296,
"acc_norm": 0.8076923076923077,
"acc_norm_stderr": 0.019982347208637296
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.03041771696171748,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.03041771696171748
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.023005459446673957,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.023005459446673957
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5629139072847682,
"acc_stderr": 0.040500357222306355,
"acc_norm": 0.5629139072847682,
"acc_norm_stderr": 0.040500357222306355
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9284403669724771,
"acc_stderr": 0.011051255247815476,
"acc_norm": 0.9284403669724771,
"acc_norm_stderr": 0.011051255247815476
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.03179876342176853,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.03179876342176853
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9215686274509803,
"acc_stderr": 0.018869514646658928,
"acc_norm": 0.9215686274509803,
"acc_norm_stderr": 0.018869514646658928
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9071729957805907,
"acc_stderr": 0.018889750550956715,
"acc_norm": 0.9071729957805907,
"acc_norm_stderr": 0.018889750550956715
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8026905829596412,
"acc_stderr": 0.02670985334496796,
"acc_norm": 0.8026905829596412,
"acc_norm_stderr": 0.02670985334496796
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8854961832061069,
"acc_stderr": 0.027927473753597453,
"acc_norm": 0.8854961832061069,
"acc_norm_stderr": 0.027927473753597453
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8925619834710744,
"acc_stderr": 0.028268812192540616,
"acc_norm": 0.8925619834710744,
"acc_norm_stderr": 0.028268812192540616
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8611111111111112,
"acc_stderr": 0.0334327006286962,
"acc_norm": 0.8611111111111112,
"acc_norm_stderr": 0.0334327006286962
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8466257668711656,
"acc_stderr": 0.028311601441438596,
"acc_norm": 0.8466257668711656,
"acc_norm_stderr": 0.028311601441438596
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6071428571428571,
"acc_stderr": 0.046355501356099754,
"acc_norm": 0.6071428571428571,
"acc_norm_stderr": 0.046355501356099754
},
"harness|hendrycksTest-management|5": {
"acc": 0.8640776699029126,
"acc_stderr": 0.03393295729761011,
"acc_norm": 0.8640776699029126,
"acc_norm_stderr": 0.03393295729761011
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.015537514263253874,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.015537514263253874
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.86,
"acc_stderr": 0.034873508801977725,
"acc_norm": 0.86,
"acc_norm_stderr": 0.034873508801977725
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9157088122605364,
"acc_stderr": 0.009934966499513784,
"acc_norm": 0.9157088122605364,
"acc_norm_stderr": 0.009934966499513784
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8410404624277457,
"acc_stderr": 0.019685307033571946,
"acc_norm": 0.8410404624277457,
"acc_norm_stderr": 0.019685307033571946
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7027932960893855,
"acc_stderr": 0.015285313353641597,
"acc_norm": 0.7027932960893855,
"acc_norm_stderr": 0.015285313353641597
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8496732026143791,
"acc_stderr": 0.02046417512433263,
"acc_norm": 0.8496732026143791,
"acc_norm_stderr": 0.02046417512433263
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8488745980707395,
"acc_stderr": 0.020342749744428647,
"acc_norm": 0.8488745980707395,
"acc_norm_stderr": 0.020342749744428647
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8796296296296297,
"acc_stderr": 0.018105414094329676,
"acc_norm": 0.8796296296296297,
"acc_norm_stderr": 0.018105414094329676
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6453900709219859,
"acc_stderr": 0.02853865002887863,
"acc_norm": 0.6453900709219859,
"acc_norm_stderr": 0.02853865002887863
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.6088657105606258,
"acc_stderr": 0.01246386183998206,
"acc_norm": 0.6088657105606258,
"acc_norm_stderr": 0.01246386183998206
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8345588235294118,
"acc_stderr": 0.02257177102549473,
"acc_norm": 0.8345588235294118,
"acc_norm_stderr": 0.02257177102549473
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8169934640522876,
"acc_stderr": 0.01564306991127334,
"acc_norm": 0.8169934640522876,
"acc_norm_stderr": 0.01564306991127334
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7545454545454545,
"acc_stderr": 0.041220665028782855,
"acc_norm": 0.7545454545454545,
"acc_norm_stderr": 0.041220665028782855
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8163265306122449,
"acc_stderr": 0.024789071332007643,
"acc_norm": 0.8163265306122449,
"acc_norm_stderr": 0.024789071332007643
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.900497512437811,
"acc_stderr": 0.021166216304659393,
"acc_norm": 0.900497512437811,
"acc_norm_stderr": 0.021166216304659393
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.94,
"acc_stderr": 0.02386832565759419,
"acc_norm": 0.94,
"acc_norm_stderr": 0.02386832565759419
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5903614457831325,
"acc_stderr": 0.038284011150790206,
"acc_norm": 0.5903614457831325,
"acc_norm_stderr": 0.038284011150790206
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8771929824561403,
"acc_stderr": 0.02517298435015577,
"acc_norm": 0.8771929824561403,
"acc_norm_stderr": 0.02517298435015577
},
"harness|truthfulqa:mc|0": {
"mc1": 0.631578947368421,
"mc1_stderr": 0.016886551261046046,
"mc2": 0.7470556249138,
"mc2_stderr": 0.014379615349295343
},
"harness|winogrande|5": {
"acc": 0.840568271507498,
"acc_stderr": 0.010288617479454764
},
"harness|gsm8k|5": {
"acc": 0.7862016679302501,
"acc_stderr": 0.01129305469863505
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
sarthak247/instruct-apr | ---
task_categories:
- fill-mask
language:
- en
tags:
- code
pretty_name: APR
size_categories:
- 100K<n<1M
---
Instruction-based dataset used to instruction-tune the CodeLlama model for APR tasks.
The dataset has three fields:
- Instruction: One of the five chosen instructions
- Context: The context/input supplied along with the instruction. It consists of `<PRE>` PREFIX CODE `<SUF>` SUFFIX CODE `<MID>`, where `<PRE>`, `<SUF>`, and `<MID>` are the prefix, suffix, and middle tokens for CodeLlama; they can be replaced with other tokens depending on the model.
- Response: The predicted response (the missing middle part of the code)
The instruction supplied with the prompt is usually one of the following:
1. Given a code snippet with context (prefix) and expected outcome (suffix), predict and complete the missing part to ensure a seamless integration between the provided context and expected outcome.
2. Insert the missing logic between the provided context (prefix) and the expected outcome (suffix) to ensure a smooth transition and logical flow in the code.
3. Implement the missing functionality in the code snippet, considering the provided context and the desired outcome. Ensure that the function aligns with the overall goal indicated by the context and expected outcome.
4. Continue the flow of the code by providing the missing lines that logically follow from the established context (prefix) and lead to the expected outcome (suffix).
5. Integrate the missing code to ensure coherence and logical flow between the provided context and expected outcome. Consider variables, data structures, or conditions established in the context and ensure their appropriate utilization in the missing part. |
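As a rough sketch of how these three fields might be assembled into a single training prompt (the `### Instruction:`/`### Context:`/`### Response:` template below is an illustrative assumption, not the card's documented format, and `build_prompt`/`row` are hypothetical names):

```python
def build_prompt(example: dict) -> str:
    """Assemble one training prompt from a dataset row.

    The row is expected to carry the three fields described above:
    'Instruction', 'Context' (code already wrapped in the <PRE>/<SUF>/<MID>
    infill tokens), and 'Response' (the missing middle code).
    NOTE: the section-header template here is an assumption for illustration.
    """
    return (
        f"### Instruction:\n{example['Instruction']}\n\n"
        f"### Context:\n{example['Context']}\n\n"
        f"### Response:\n{example['Response']}"
    )

# A toy row in the shape described by the card.
row = {
    "Instruction": "Insert the missing logic between the provided context "
                   "(prefix) and the expected outcome (suffix).",
    "Context": "<PRE> def add(a, b): <SUF> return result <MID>",
    "Response": "    result = a + b",
}
print(build_prompt(row))
```

When fine-tuning a model with a different infill vocabulary, only the token strings inside `Context` would need to be swapped; the prompt assembly stays the same.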
szymonrucinski/tartakovsky-style | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 17443720
num_examples: 509
download_size: 16990015
dataset_size: 17443720
license: apache-2.0
task_categories:
- text-to-image
tags:
- stable diffusion
- images
- tartakovsky
- cartoon
- animation
pretty_name: Tartakovsky Style
size_categories:
- n<1K
---
# Dataset Card for "tartakovsky-style"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andreassa/sl | ---
license: openrail
---
|
Saads/birdsounds | ---
license: mit
---
|
CyberHarem/perlica_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Perlica (Arknights)
This is the dataset of Perlica (Arknights), containing 69 images and their tags.
The core tags of this character are `animal_ears, long_hair, blue_eyes, breasts, grey_hair, medium_breasts, blunt_bangs`, which are pruned in this dataset.
Images were crawled from many sites (e.g., Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 69 | 137.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/perlica_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 69 | 111.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/perlica_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 182 | 227.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/perlica_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/perlica_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, off_shoulder, open_jacket, solo, white_dress, white_jacket, black_gloves, long_sleeves, sleeveless_dress, black_pantyhose, gradient_background, simple_background, cowboy_shot, parted_lips, pouch, white_background |
| 1 | 8 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, sleeveless_dress, white_background, white_dress, white_jacket, off_shoulder, open_jacket, simple_background, solo, upper_body, blush, parted_lips, black_gloves, long_sleeves, closed_mouth, puffy_sleeves, small_breasts |
| 2 | 5 |  |  |  |  |  | 1girl, bare_shoulders, black_gloves, black_pantyhose, closed_mouth, feet_out_of_frame, long_sleeves, looking_at_viewer, off_shoulder, open_jacket, sleeveless_dress, solo, standing, white_dress, white_jacket, sidelocks, smile, from_above, holding, simple_background, white_background |
| 3 | 7 |  |  |  |  |  | 1girl, bare_shoulders, black_footwear, black_gloves, black_pantyhose, full_body, off_shoulder, open_jacket, sleeveless_dress, solo, standing, white_dress, white_jacket, looking_at_viewer, shoes, simple_background, white_background, long_sleeves, grey_jacket, holding, parted_lips, shadow, sword |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | looking_at_viewer | off_shoulder | open_jacket | solo | white_dress | white_jacket | black_gloves | long_sleeves | sleeveless_dress | black_pantyhose | gradient_background | simple_background | cowboy_shot | parted_lips | pouch | white_background | upper_body | blush | closed_mouth | puffy_sleeves | small_breasts | feet_out_of_frame | standing | sidelocks | smile | from_above | holding | black_footwear | full_body | shoes | grey_jacket | shadow | sword |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------------|:---------------|:--------------|:-------|:--------------|:---------------|:---------------|:---------------|:-------------------|:------------------|:----------------------|:--------------------|:--------------|:--------------|:--------|:-------------------|:-------------|:--------|:---------------|:----------------|:----------------|:--------------------|:-----------|:------------|:--------|:-------------|:----------|:-----------------|:------------|:--------|:--------------|:---------|:--------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | X | | X | | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | X | | | | X | | | X | | | X | X | X | X | X | X | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | X | | X | | X | | | | | | | X | | | | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_KoboldAI__Mixtral-8x7B-Holodeck-v1 | ---
pretty_name: Evaluation run of KoboldAI/Mixtral-8x7B-Holodeck-v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [KoboldAI/Mixtral-8x7B-Holodeck-v1](https://huggingface.co/KoboldAI/Mixtral-8x7B-Holodeck-v1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KoboldAI__Mixtral-8x7B-Holodeck-v1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-08T01:48:17.297315](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__Mixtral-8x7B-Holodeck-v1/blob/main/results_2024-03-08T01-48-17.297315.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7134257804439076,\n\
\ \"acc_stderr\": 0.030058748528777976,\n \"acc_norm\": 0.7183525568245921,\n\
\ \"acc_norm_stderr\": 0.030631711241976504,\n \"mc1\": 0.31334149326805383,\n\
\ \"mc1_stderr\": 0.01623806506905961,\n \"mc2\": 0.48282128629443755,\n\
\ \"mc2_stderr\": 0.014218563548108439\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6254266211604096,\n \"acc_stderr\": 0.014144193471893463,\n\
\ \"acc_norm\": 0.6655290102389079,\n \"acc_norm_stderr\": 0.013787460322441379\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6700856403106951,\n\
\ \"acc_stderr\": 0.004692208279690597,\n \"acc_norm\": 0.867755427205736,\n\
\ \"acc_norm_stderr\": 0.0033806414709899235\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6962962962962963,\n\
\ \"acc_stderr\": 0.03972552884785137,\n \"acc_norm\": 0.6962962962962963,\n\
\ \"acc_norm_stderr\": 0.03972552884785137\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8092105263157895,\n \"acc_stderr\": 0.031975658210325,\n\
\ \"acc_norm\": 0.8092105263157895,\n \"acc_norm_stderr\": 0.031975658210325\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.71,\n\
\ \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.71,\n \
\ \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7924528301886793,\n \"acc_stderr\": 0.024959918028911274,\n\
\ \"acc_norm\": 0.7924528301886793,\n \"acc_norm_stderr\": 0.024959918028911274\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8541666666666666,\n\
\ \"acc_stderr\": 0.02951424596429177,\n \"acc_norm\": 0.8541666666666666,\n\
\ \"acc_norm_stderr\": 0.02951424596429177\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n\
\ \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7109826589595376,\n\
\ \"acc_stderr\": 0.034564257450869995,\n \"acc_norm\": 0.7109826589595376,\n\
\ \"acc_norm_stderr\": 0.034564257450869995\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.46078431372549017,\n \"acc_stderr\": 0.04959859966384181,\n\
\ \"acc_norm\": 0.46078431372549017,\n \"acc_norm_stderr\": 0.04959859966384181\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n\
\ \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6723404255319149,\n \"acc_stderr\": 0.030683020843231008,\n\
\ \"acc_norm\": 0.6723404255319149,\n \"acc_norm_stderr\": 0.030683020843231008\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6491228070175439,\n\
\ \"acc_stderr\": 0.04489539350270698,\n \"acc_norm\": 0.6491228070175439,\n\
\ \"acc_norm_stderr\": 0.04489539350270698\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.696551724137931,\n \"acc_stderr\": 0.038312260488503336,\n\
\ \"acc_norm\": 0.696551724137931,\n \"acc_norm_stderr\": 0.038312260488503336\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.48412698412698413,\n \"acc_stderr\": 0.025738330639412152,\n \"\
acc_norm\": 0.48412698412698413,\n \"acc_norm_stderr\": 0.025738330639412152\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5476190476190477,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.5476190476190477,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8354838709677419,\n\
\ \"acc_stderr\": 0.02109084774593931,\n \"acc_norm\": 0.8354838709677419,\n\
\ \"acc_norm_stderr\": 0.02109084774593931\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6305418719211823,\n \"acc_stderr\": 0.03395970381998574,\n\
\ \"acc_norm\": 0.6305418719211823,\n \"acc_norm_stderr\": 0.03395970381998574\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n\
\ \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8636363636363636,\n \"acc_stderr\": 0.024450155973189835,\n \"\
acc_norm\": 0.8636363636363636,\n \"acc_norm_stderr\": 0.024450155973189835\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9481865284974094,\n \"acc_stderr\": 0.01599622932024412,\n\
\ \"acc_norm\": 0.9481865284974094,\n \"acc_norm_stderr\": 0.01599622932024412\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7025641025641025,\n \"acc_stderr\": 0.023177408131465942,\n\
\ \"acc_norm\": 0.7025641025641025,\n \"acc_norm_stderr\": 0.023177408131465942\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.37777777777777777,\n \"acc_stderr\": 0.029560707392465718,\n \
\ \"acc_norm\": 0.37777777777777777,\n \"acc_norm_stderr\": 0.029560707392465718\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7941176470588235,\n \"acc_stderr\": 0.026265024608275882,\n\
\ \"acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.026265024608275882\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4966887417218543,\n \"acc_stderr\": 0.04082393379449654,\n \"\
acc_norm\": 0.4966887417218543,\n \"acc_norm_stderr\": 0.04082393379449654\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8935779816513761,\n \"acc_stderr\": 0.013221554674594372,\n \"\
acc_norm\": 0.8935779816513761,\n \"acc_norm_stderr\": 0.013221554674594372\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6435185185185185,\n \"acc_stderr\": 0.032664783315272714,\n \"\
acc_norm\": 0.6435185185185185,\n \"acc_norm_stderr\": 0.032664783315272714\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8529411764705882,\n \"acc_stderr\": 0.024857478080250458,\n \"\
acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.024857478080250458\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8734177215189873,\n \"acc_stderr\": 0.021644195727955173,\n \
\ \"acc_norm\": 0.8734177215189873,\n \"acc_norm_stderr\": 0.021644195727955173\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7623318385650224,\n\
\ \"acc_stderr\": 0.028568079464714284,\n \"acc_norm\": 0.7623318385650224,\n\
\ \"acc_norm_stderr\": 0.028568079464714284\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8549618320610687,\n \"acc_stderr\": 0.030884661089515368,\n\
\ \"acc_norm\": 0.8549618320610687,\n \"acc_norm_stderr\": 0.030884661089515368\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035202,\n \"\
acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035202\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8518518518518519,\n\
\ \"acc_stderr\": 0.03434300243631002,\n \"acc_norm\": 0.8518518518518519,\n\
\ \"acc_norm_stderr\": 0.03434300243631002\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5625,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.5625,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8737864077669902,\n \"acc_stderr\": 0.03288180278808628,\n\
\ \"acc_norm\": 0.8737864077669902,\n \"acc_norm_stderr\": 0.03288180278808628\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.905982905982906,\n\
\ \"acc_stderr\": 0.019119892798924974,\n \"acc_norm\": 0.905982905982906,\n\
\ \"acc_norm_stderr\": 0.019119892798924974\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.876117496807152,\n\
\ \"acc_stderr\": 0.011781017100950737,\n \"acc_norm\": 0.876117496807152,\n\
\ \"acc_norm_stderr\": 0.011781017100950737\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7976878612716763,\n \"acc_stderr\": 0.02162807738019612,\n\
\ \"acc_norm\": 0.7976878612716763,\n \"acc_norm_stderr\": 0.02162807738019612\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4033519553072626,\n\
\ \"acc_stderr\": 0.016407123032195253,\n \"acc_norm\": 0.4033519553072626,\n\
\ \"acc_norm_stderr\": 0.016407123032195253\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.022292858284568055,\n\
\ \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.022292858284568055\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7845659163987139,\n\
\ \"acc_stderr\": 0.023350225475471442,\n \"acc_norm\": 0.7845659163987139,\n\
\ \"acc_norm_stderr\": 0.023350225475471442\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8364197530864198,\n \"acc_stderr\": 0.02058146613825714,\n\
\ \"acc_norm\": 0.8364197530864198,\n \"acc_norm_stderr\": 0.02058146613825714\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5425531914893617,\n \"acc_stderr\": 0.029719281272236834,\n \
\ \"acc_norm\": 0.5425531914893617,\n \"acc_norm_stderr\": 0.029719281272236834\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5384615384615384,\n\
\ \"acc_stderr\": 0.012732398286190431,\n \"acc_norm\": 0.5384615384615384,\n\
\ \"acc_norm_stderr\": 0.012732398286190431\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8161764705882353,\n \"acc_stderr\": 0.02352924218519311,\n\
\ \"acc_norm\": 0.8161764705882353,\n \"acc_norm_stderr\": 0.02352924218519311\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7777777777777778,\n \"acc_stderr\": 0.016819028375736383,\n \
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.016819028375736383\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7918367346938775,\n \"acc_stderr\": 0.025991117672813296,\n\
\ \"acc_norm\": 0.7918367346938775,\n \"acc_norm_stderr\": 0.025991117672813296\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8855721393034826,\n\
\ \"acc_stderr\": 0.022509345325101713,\n \"acc_norm\": 0.8855721393034826,\n\
\ \"acc_norm_stderr\": 0.022509345325101713\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.02876234912646612,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.02876234912646612\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.025172984350155768,\n\
\ \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.025172984350155768\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31334149326805383,\n\
\ \"mc1_stderr\": 0.01623806506905961,\n \"mc2\": 0.48282128629443755,\n\
\ \"mc2_stderr\": 0.014218563548108439\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8121546961325967,\n \"acc_stderr\": 0.01097748110343509\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5617892342683851,\n \
\ \"acc_stderr\": 0.01366691591725507\n }\n}\n```"
repo_url: https://huggingface.co/KoboldAI/Mixtral-8x7B-Holodeck-v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|arc:challenge|25_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|gsm8k|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hellaswag|10_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-08T01-48-17.297315.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-08T01-48-17.297315.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- '**/details_harness|winogrande|5_2024-03-08T01-48-17.297315.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-08T01-48-17.297315.parquet'
- config_name: results
data_files:
- split: 2024_03_08T01_48_17.297315
path:
- results_2024-03-08T01-48-17.297315.parquet
- split: latest
path:
- results_2024-03-08T01-48-17.297315.parquet
---
# Dataset Card for Evaluation run of KoboldAI/Mixtral-8x7B-Holodeck-v1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [KoboldAI/Mixtral-8x7B-Holodeck-v1](https://huggingface.co/KoboldAI/Mixtral-8x7B-Holodeck-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KoboldAI__Mixtral-8x7B-Holodeck-v1",
"harness_winogrande_5",
	split="latest")
```
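Timestamped splits such as `2024_03_08T01_48_17.297315` can be compared chronologically once parsed. Below is a minimal sketch (assuming every run split follows this same `%Y_%m_%dT%H_%M_%S.%f` naming convention) of picking the most recent run from a list of split names without relying on the `latest` alias:

```python
from datetime import datetime

def parse_split_timestamp(name: str) -> datetime:
    # Split names encode the run timestamp, e.g. "2024_03_08T01_48_17.297315"
    return datetime.strptime(name, "%Y_%m_%dT%H_%M_%S.%f")

# Hypothetical list of run splits for a configuration; this repo currently has one.
splits = ["2024_03_08T01_48_17.297315"]
most_recent = max(splits, key=parse_split_timestamp)
print(most_recent)  # -> 2024_03_08T01_48_17.297315
```

This is only needed when a repository accumulates several runs and you want a run other than the one `latest` points to.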
## Latest results

These are the [latest results from run 2024-03-08T01:48:17.297315](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__Mixtral-8x7B-Holodeck-v1/blob/main/results_2024-03-08T01-48-17.297315.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task's results can be found in the "results" configuration and in the "latest" split of its own configuration):
```python
{
"all": {
"acc": 0.7134257804439076,
"acc_stderr": 0.030058748528777976,
"acc_norm": 0.7183525568245921,
"acc_norm_stderr": 0.030631711241976504,
"mc1": 0.31334149326805383,
"mc1_stderr": 0.01623806506905961,
"mc2": 0.48282128629443755,
"mc2_stderr": 0.014218563548108439
},
"harness|arc:challenge|25": {
"acc": 0.6254266211604096,
"acc_stderr": 0.014144193471893463,
"acc_norm": 0.6655290102389079,
"acc_norm_stderr": 0.013787460322441379
},
"harness|hellaswag|10": {
"acc": 0.6700856403106951,
"acc_stderr": 0.004692208279690597,
"acc_norm": 0.867755427205736,
"acc_norm_stderr": 0.0033806414709899235
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6962962962962963,
"acc_stderr": 0.03972552884785137,
"acc_norm": 0.6962962962962963,
"acc_norm_stderr": 0.03972552884785137
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8092105263157895,
"acc_stderr": 0.031975658210325,
"acc_norm": 0.8092105263157895,
"acc_norm_stderr": 0.031975658210325
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.71,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.71,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7924528301886793,
"acc_stderr": 0.024959918028911274,
"acc_norm": 0.7924528301886793,
"acc_norm_stderr": 0.024959918028911274
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8541666666666666,
"acc_stderr": 0.02951424596429177,
"acc_norm": 0.8541666666666666,
"acc_norm_stderr": 0.02951424596429177
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7109826589595376,
"acc_stderr": 0.034564257450869995,
"acc_norm": 0.7109826589595376,
"acc_norm_stderr": 0.034564257450869995
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.46078431372549017,
"acc_stderr": 0.04959859966384181,
"acc_norm": 0.46078431372549017,
"acc_norm_stderr": 0.04959859966384181
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6723404255319149,
"acc_stderr": 0.030683020843231008,
"acc_norm": 0.6723404255319149,
"acc_norm_stderr": 0.030683020843231008
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6491228070175439,
"acc_stderr": 0.04489539350270698,
"acc_norm": 0.6491228070175439,
"acc_norm_stderr": 0.04489539350270698
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.696551724137931,
"acc_stderr": 0.038312260488503336,
"acc_norm": 0.696551724137931,
"acc_norm_stderr": 0.038312260488503336
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.025738330639412152,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.025738330639412152
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5476190476190477,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.5476190476190477,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8354838709677419,
"acc_stderr": 0.02109084774593931,
"acc_norm": 0.8354838709677419,
"acc_norm_stderr": 0.02109084774593931
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6305418719211823,
"acc_stderr": 0.03395970381998574,
"acc_norm": 0.6305418719211823,
"acc_norm_stderr": 0.03395970381998574
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8242424242424242,
"acc_stderr": 0.02972094300622445,
"acc_norm": 0.8242424242424242,
"acc_norm_stderr": 0.02972094300622445
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8636363636363636,
"acc_stderr": 0.024450155973189835,
"acc_norm": 0.8636363636363636,
"acc_norm_stderr": 0.024450155973189835
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9481865284974094,
"acc_stderr": 0.01599622932024412,
"acc_norm": 0.9481865284974094,
"acc_norm_stderr": 0.01599622932024412
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7025641025641025,
"acc_stderr": 0.023177408131465942,
"acc_norm": 0.7025641025641025,
"acc_norm_stderr": 0.023177408131465942
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37777777777777777,
"acc_stderr": 0.029560707392465718,
"acc_norm": 0.37777777777777777,
"acc_norm_stderr": 0.029560707392465718
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.026265024608275882,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.026265024608275882
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4966887417218543,
"acc_stderr": 0.04082393379449654,
"acc_norm": 0.4966887417218543,
"acc_norm_stderr": 0.04082393379449654
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8935779816513761,
"acc_stderr": 0.013221554674594372,
"acc_norm": 0.8935779816513761,
"acc_norm_stderr": 0.013221554674594372
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6435185185185185,
"acc_stderr": 0.032664783315272714,
"acc_norm": 0.6435185185185185,
"acc_norm_stderr": 0.032664783315272714
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.024857478080250458,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.024857478080250458
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8734177215189873,
"acc_stderr": 0.021644195727955173,
"acc_norm": 0.8734177215189873,
"acc_norm_stderr": 0.021644195727955173
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7623318385650224,
"acc_stderr": 0.028568079464714284,
"acc_norm": 0.7623318385650224,
"acc_norm_stderr": 0.028568079464714284
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8549618320610687,
"acc_stderr": 0.030884661089515368,
"acc_norm": 0.8549618320610687,
"acc_norm_stderr": 0.030884661089515368
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.030083098716035202,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.030083098716035202
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8518518518518519,
"acc_stderr": 0.03434300243631002,
"acc_norm": 0.8518518518518519,
"acc_norm_stderr": 0.03434300243631002
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5625,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.8737864077669902,
"acc_stderr": 0.03288180278808628,
"acc_norm": 0.8737864077669902,
"acc_norm_stderr": 0.03288180278808628
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.905982905982906,
"acc_stderr": 0.019119892798924974,
"acc_norm": 0.905982905982906,
"acc_norm_stderr": 0.019119892798924974
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.876117496807152,
"acc_stderr": 0.011781017100950737,
"acc_norm": 0.876117496807152,
"acc_norm_stderr": 0.011781017100950737
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7976878612716763,
"acc_stderr": 0.02162807738019612,
"acc_norm": 0.7976878612716763,
"acc_norm_stderr": 0.02162807738019612
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4033519553072626,
"acc_stderr": 0.016407123032195253,
"acc_norm": 0.4033519553072626,
"acc_norm_stderr": 0.016407123032195253
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.022292858284568055,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.022292858284568055
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7845659163987139,
"acc_stderr": 0.023350225475471442,
"acc_norm": 0.7845659163987139,
"acc_norm_stderr": 0.023350225475471442
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8364197530864198,
"acc_stderr": 0.02058146613825714,
"acc_norm": 0.8364197530864198,
"acc_norm_stderr": 0.02058146613825714
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5425531914893617,
"acc_stderr": 0.029719281272236834,
"acc_norm": 0.5425531914893617,
"acc_norm_stderr": 0.029719281272236834
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5384615384615384,
"acc_stderr": 0.012732398286190431,
"acc_norm": 0.5384615384615384,
"acc_norm_stderr": 0.012732398286190431
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8161764705882353,
"acc_stderr": 0.02352924218519311,
"acc_norm": 0.8161764705882353,
"acc_norm_stderr": 0.02352924218519311
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.016819028375736383,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.016819028375736383
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7918367346938775,
"acc_stderr": 0.025991117672813296,
"acc_norm": 0.7918367346938775,
"acc_norm_stderr": 0.025991117672813296
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8855721393034826,
"acc_stderr": 0.022509345325101713,
"acc_norm": 0.8855721393034826,
"acc_norm_stderr": 0.022509345325101713
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.02876234912646612,
"acc_norm": 0.91,
"acc_norm_stderr": 0.02876234912646612
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8771929824561403,
"acc_stderr": 0.025172984350155768,
"acc_norm": 0.8771929824561403,
"acc_norm_stderr": 0.025172984350155768
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31334149326805383,
"mc1_stderr": 0.01623806506905961,
"mc2": 0.48282128629443755,
"mc2_stderr": 0.014218563548108439
},
"harness|winogrande|5": {
"acc": 0.8121546961325967,
"acc_stderr": 0.01097748110343509
},
"harness|gsm8k|5": {
"acc": 0.5617892342683851,
"acc_stderr": 0.01366691591725507
}
}
```
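The `acc_stderr` entries above are the standard error of the mean accuracy over a task's test questions. As a sketch, the astronomy value can be reproduced from its reported accuracy, assuming the standard MMLU astronomy test split of 152 questions and the sample-variance form `sqrt(p * (1 - p) / (n - 1))` used by the evaluation harness:

```python
import math

# Standard error of a Bernoulli mean accuracy, using the sample-variance
# denominator (n - 1), as in lm-evaluation-harness.
def acc_stderr(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / (n - 1))

# hendrycksTest-astronomy above reports acc = 0.8092105... with n = 152
# questions (assumed split size), i.e. 123 correct answers.
p = 123 / 152
print(acc_stderr(p, 152))  # ≈ 0.0319757, matching the reported acc_stderr
```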
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
zhengyun21/PMC-Patients | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- patient summary
- medical
- biology
size_categories:
- 100K<n<1M
---
# Dataset Card for PMC-Patients
## Dataset Description
- **Homepage:** https://github.com/pmc-patients/pmc-patients
- **Repository:** https://github.com/pmc-patients/pmc-patients
- **Paper:** https://arxiv.org/pdf/2202.13876.pdf
- **Leaderboard:** https://pmc-patients.github.io/
- **Point of Contact:** zhengyun21@mails.tsinghua.edu.cn
### Dataset Summary
**PMC-Patients** is a first-of-its-kind dataset consisting of 167k patient summaries extracted from case reports in PubMed Central (PMC), together with 3.1M patient-article relevance annotations and 293k patient-patient similarity annotations defined by the PubMed citation graph.
### Supported Tasks and Leaderboards
**This is purely the patient summary dataset with relational annotations. For the ReCDS benchmark, refer to [this dataset](https://huggingface.co/datasets/zhengyun21/PMC-Patients-ReCDS).**
Based on PMC-Patients, we define two tasks to benchmark Retrieval-based Clinical Decision Support (ReCDS) systems: Patient-to-Article Retrieval (PAR) and Patient-to-Patient Retrieval (PPR).
For details, please refer to [our paper](https://arxiv.org/pdf/2202.13876.pdf) and [leaderboard](https://pmc-patients.github.io/).
### Languages
English (en).
## Dataset Structure
### PMC-Patients.csv
This file contains all information about patient summaries in PMC-Patients, with the following columns:
- `patient_id`: string. A continuous id of patients, starting from 0.
- `patient_uid`: string. Unique ID for each patient, in the format PMID-x, where PMID is the PubMed Identifier of the source article and x denotes the index of the patient within that article.
- `PMID`: string. PMID for source article.
- `file_path`: string. File path of xml file of source article.
- `title`: string. Source article title.
- `patient`: string. Patient summary.
- `age`: list of tuples. Each entry has the format `(value, unit)`, where value is a float and unit is one of 'year', 'month', 'week', 'day', or 'hour'. For example, `[[1.0, 'year'], [2.0, 'month']]` indicates the patient is a one-year- and two-month-old infant.
- `gender`: 'M' or 'F'. Male or Female.
- `relevant_articles`: dict. The key is the PMID of a relevant article and the value is its relevance score (2 or 1, as defined in the "Methods" section).
- `similar_patients`: dict. The key is the patient_uid of a similar patient and the value is its similarity score (2 or 1, as defined in the "Methods" section).
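As a small sketch of working with the `age` field described above, the list of `(value, unit)` components can be collapsed into a single fractional age in years. The helper name and the conversion constants below are illustrative, not part of the dataset:

```python
# Approximate conversion factors from each documented age unit to years.
UNIT_IN_YEARS = {
    "year": 1.0,
    "month": 1.0 / 12,
    "week": 7.0 / 365,
    "day": 1.0 / 365,
    "hour": 1.0 / (365 * 24),
}

def age_in_years(age):
    """Sum all (value, unit) components of an `age` entry into fractional years."""
    return sum(value * UNIT_IN_YEARS[unit] for value, unit in age)

# A one-year- and two-month-old infant, as in the example above:
print(round(age_in_years([[1.0, "year"], [2.0, "month"]]), 3))  # → 1.167
```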
## Dataset Creation
If you are interested in the collection of PMC-Patients and reproducing our baselines, please refer to [this reporsitory](https://github.com/zhao-zy15/PMC-Patients).
### Citation Information
If you find PMC-Patients helpful in your research, please cite our work by:
```
@article{zhao2023large,
title={A large-scale dataset of patient summaries for retrieval-based clinical decision support systems},
author={Zhao, Zhengyun and Jin, Qiao and Chen, Fangyuan and Peng, Tuorui and Yu, Sheng},
journal={Scientific Data},
volume={10},
number={1},
pages={909},
year={2023},
publisher={Nature Publishing Group UK London}
}
``` |
CyberHarem/yahagi_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of yahagi/矢矧/矢矧 (Kantai Collection)
This is the dataset of yahagi/矢矧/矢矧 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `long_hair, black_hair, ponytail, red_eyes, breasts, sidelocks, large_breasts, hair_between_eyes, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 537.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yahagi_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 347.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yahagi_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1183 | 710.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yahagi_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 494.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yahagi_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1183 | 936.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yahagi_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yahagi_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, black_skirt, long_sleeves, pleated_skirt, solo, white_gloves, black_belt, black_sailor_collar, belt_buckle, buttons, collared_shirt, brown_eyes, closed_mouth, white_shirt, cowboy_shot, simple_background, smile, white_background, black_shirt, black_thighhighs, black_jacket, orange_neckerchief, ribbon, single_thighhigh |
| 1 | 39 |  |  |  |  |  | 1girl, serafuku, necktie, solo, red_skirt, white_gloves, pleated_skirt, looking_at_viewer, sleeveless_shirt, simple_background, white_background, black_sailor_collar, midriff, scrunchie, anchor_symbol, navel, single_thighhigh, cowboy_shot, smile, garter_straps |
| 2 | 18 |  |  |  |  |  | 1girl, serafuku, solo, looking_at_viewer, white_gloves, red_skirt, single_thighhigh, miniskirt, navel, midriff, smile, blush |
| 3 | 10 |  |  |  |  |  | 1boy, 1girl, blush, hetero, penis, paizuri, solo_focus, nipples, white_gloves, cum_on_breasts, open_mouth, school_uniform, brown_eyes, looking_at_viewer, breasts_squeezed_together, ejaculation, pov |
| 4 | 15 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, solo, navel, collarbone, smile, black_bikini, brown_eyes, side-tie_bikini_bottom, blush, simple_background, cowboy_shot, one-hour_drawing_challenge, twitter_username, white_background, closed_mouth |
| 5 | 9 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, solo, alternate_costume, smile, jacket, simple_background, coat, white_background, sweater, black_pantyhose, closed_mouth, holding, red_scarf |
| 6 | 12 |  |  |  |  |  | 1girl, solo, smile, blush, floral_print, obi, wide_sleeves, alternate_costume, long_sleeves, looking_at_viewer, yukata, closed_mouth, upper_body, white_kimono |
| 7 | 5 |  |  |  |  |  | detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, wrist_cuffs, 1girl, black_leotard, solo, strapless_leotard, alternate_costume, black_pantyhose, cleavage, looking_at_viewer, necktie, blush, bowtie, cowboy_shot, fishnet_pantyhose, full_body, high_heels, medium_breasts, simple_background, tail, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_skirt | long_sleeves | pleated_skirt | solo | white_gloves | black_belt | black_sailor_collar | belt_buckle | buttons | collared_shirt | brown_eyes | closed_mouth | white_shirt | cowboy_shot | simple_background | smile | white_background | black_shirt | black_thighhighs | black_jacket | orange_neckerchief | ribbon | single_thighhigh | serafuku | necktie | red_skirt | looking_at_viewer | sleeveless_shirt | midriff | scrunchie | anchor_symbol | navel | garter_straps | miniskirt | blush | 1boy | hetero | penis | paizuri | solo_focus | nipples | cum_on_breasts | open_mouth | school_uniform | breasts_squeezed_together | ejaculation | pov | cleavage | collarbone | black_bikini | side-tie_bikini_bottom | one-hour_drawing_challenge | twitter_username | alternate_costume | jacket | coat | sweater | black_pantyhose | holding | red_scarf | floral_print | obi | wide_sleeves | yukata | upper_body | white_kimono | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | wrist_cuffs | black_leotard | strapless_leotard | bowtie | fishnet_pantyhose | full_body | high_heels | medium_breasts | tail |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:---------------|:----------------|:-------|:---------------|:-------------|:----------------------|:--------------|:----------|:-----------------|:-------------|:---------------|:--------------|:--------------|:--------------------|:--------|:-------------------|:--------------|:-------------------|:---------------|:---------------------|:---------|:-------------------|:-----------|:----------|:------------|:--------------------|:-------------------|:----------|:------------|:----------------|:--------|:----------------|:------------|:--------|:-------|:---------|:--------|:----------|:-------------|:----------|:-----------------|:-------------|:-----------------|:----------------------------|:--------------|:------|:-----------|:-------------|:---------------|:-------------------------|:-----------------------------|:-------------------|:--------------------|:---------|:-------|:----------|:------------------|:----------|:------------|:---------------|:------|:---------------|:---------|:-------------|:---------------|:------------------|:-------------------|:----------------|:--------------|:--------------|:----------------|:--------------------|:---------|:--------------------|:------------|:-------------|:-----------------|:-------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 39 |  |  |  |  |  | X | | | X | X | X | | X | | | | | | | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 18 |  |  |  |  |  | X | | | | X | X | | | | | | | | | | | X | | | | | | | X | X | | X | X | | X | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | | | | | X | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 15 |  |  |  |  |  | X | | | | X | | | | | | | X | X | | X | X | X | X | | | | | | | | | | X | | | | | X | | | X | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | X | | X | | | | | | | | X | | | X | X | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 6 | 12 |  |  |  |  |  | X | | X | | X | | | | | | | | X | | | | X | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | | X | | | | | | | | | | X | X | | X | | | | | | | | X | | X | | | | | | | | X | | | | | | | | | | | | | X | | | | | | X | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
mohanraj/test | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 166037621.08240023
num_examples: 235987
- name: test
num_bytes: 18448780.917599767
num_examples: 26221
download_size: 67931456
dataset_size: 184486402.0
---
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rihoyn/atesting | ---
license: osl-3.0
---
|
tyzhu/squad_no_title_strict_v4_train_30_eval_10_recite_ans_sent | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 548439
num_examples: 368
- name: validation
num_bytes: 48707
num_examples: 50
download_size: 104798
dataset_size: 597146
---
# Dataset Card for "squad_no_title_strict_v4_train_30_eval_10_recite_ans_sent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hatman/Sunshine-the-banta-chicken | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 15439966.0
num_examples: 13
download_size: 15318668
dataset_size: 15439966.0
---
# Dataset Card for "Sunshine-the-Chicken"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ycsong-eugene/syc-hellaswag2 | ---
language:
- en
paperswithcode_id: hellaswag
pretty_name: HellaSwag
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 43232624
num_examples: 39905
- name: test
num_bytes: 10791853
num_examples: 10003
- name: validation
num_bytes: 11175717
num_examples: 10042
download_size: 71494896
dataset_size: 65200194
---
# Dataset Card for "hellaswag"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rowanzellers.com/hellaswag/](https://rowanzellers.com/hellaswag/)
- **Repository:** [https://github.com/rowanz/hellaswag/](https://github.com/rowanz/hellaswag/)
- **Paper:** [HellaSwag: Can a Machine Really Finish Your Sentence?](https://arxiv.org/abs/1905.07830)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 71.49 MB
- **Size of the generated dataset:** 65.32 MB
- **Total amount of disk used:** 136.81 MB
### Dataset Summary
HellaSwag is a dataset for commonsense NLI, introduced in the ACL 2019 paper "HellaSwag: Can a Machine Really Finish Your Sentence?".
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 71.49 MB
- **Size of the generated dataset:** 65.32 MB
- **Total amount of disk used:** 136.81 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"activity_label": "Removing ice from car",
"ctx": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles. then",
"ctx_a": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles.",
"ctx_b": "then",
"endings": "[\", the man adds wax to the windshield and cuts it.\", \", a person board a ski lift, while two men supporting the head of the per...",
"ind": 4,
"label": "3",
"source_id": "activitynet~v_-1IBHYS3L-Y",
"split": "train",
"split_type": "indomain"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `ind`: an `int32` feature.
- `activity_label`: a `string` feature.
- `ctx_a`: a `string` feature.
- `ctx_b`: a `string` feature.
- `ctx`: a `string` feature.
- `endings`: a `list` of `string` features.
- `source_id`: a `string` feature.
- `split`: a `string` feature.
- `split_type`: a `string` feature.
- `label`: a `string` feature.
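Putting these fields together: `ctx` is simply `ctx_a` followed by `ctx_b`, and the string-valued `label` indexes into `endings`. A minimal sketch, using an abbreviated record in the shape of the train example above (the ending strings here are illustrative placeholders, not actual dataset text):

```python
# Reconstruct the full context and select the gold ending for one record.
record = {
    "ctx_a": "Then, the man writes over the snow covering the window of a car, "
             "and a woman wearing winter clothes smiles.",
    "ctx_b": "then",
    "endings": [
        ", the man adds wax to the windshield and cuts it.",
        ", ending one (placeholder).",
        ", ending two (placeholder).",
        ", ending three (placeholder).",
    ],
    "label": "3",
}

# `ctx` is ctx_a and ctx_b joined by a space.
context = record["ctx_a"] + " " + record["ctx_b"]

# `label` is a string; cast to int to index the correct ending.
gold = record["endings"][int(record["label"])]
print(context + gold)
```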
### Data Splits
| name |train|validation|test |
|-------|----:|---------:|----:|
|default|39905| 10042|10003|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
MIT License: https://github.com/rowanz/hellaswag/blob/master/LICENSE
### Citation Information
```
@inproceedings{zellers2019hellaswag,
title={HellaSwag: Can a Machine Really Finish Your Sentence?},
author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin},
booktitle ={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
year={2019}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
ChayanM/mimic-QA-llama2-121k_train_test | ---
dataset_info:
features:
- name: QA
dtype: string
splits:
- name: train
num_bytes: 54391024.52637656
num_examples: 109119
- name: test
num_bytes: 6043779.473623437
num_examples: 12125
download_size: 21179351
dataset_size: 60434804.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Back-up/Topic-Prediction-Context-With-Random-Prompts | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: topic
struct:
- name: topic
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: instruction
dtype: string
- name: prompt_name
dtype: string
splits:
- name: train
num_bytes: 256498
num_examples: 101
download_size: 124883
dataset_size: 256498
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Topic-Prediction-Context-With-Random-Prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dvilasuero/intel-orca-dpo-pair-rg | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for intel-orca-dpo-pair-rg
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("dvilasuero/intel-orca-dpo-pair-rg")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("dvilasuero/intel-orca-dpo-pair-rg")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| input | input | text | True | True |
| generations-1 | generations-1 | text | True | True |
| generations-2 | generations-2 | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| generations-1-rating | What's the rating for generations-1? | rating | True | N/A | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| generations-2-rating | What's the rating for generations-2? | rating | True | N/A | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| rating-rationale | What's the rationale behind each rating? | text | True | N/A | N/A |
The **suggestions** are human- or machine-generated recommendations for each question, intended to assist the annotator during the annotation process. They are always linked to existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, with the column names suffixed accordingly.
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| length-input | length-input | integer | None - None | True |
| length-generations-1 | length-generations-1 | integer | None - None | True |
| rating-generations-1 | rating-generations-1 | float | None - None | True |
| length-generations-2 | length-generations-2 | integer | None - None | True |
| rating-generations-2 | rating-generations-2 | float | None - None | True |
| distance-best-rating | distance-best-rating | float | None - None | True |
The **guidelines** are optional as well; they are a plain string that can be used to provide instructions to the annotators. Find them in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"generations-1": "[\n [\"AFC Ajax (amateurs)\", \"has ground\", \"Sportpark De Toekomst\"],\n [\"Ajax Youth Academy\", \"plays at\", \"Sportpark De Toekomst\"]\n]",
"generations-2": "Sure, I\u0027d be happy to help! Here are the RDF triplets for the input sentence:\n\n[AFC Ajax (amateurs), hasGround, Sportpark De Toekomst]\n[Ajax Youth Academy, playsAt, Sportpark De Toekomst]\n\nExplanation:\n\n* AFC Ajax (amateurs) is the subject of the first triplet, and hasGround is the predicate that describes the relationship between AFC Ajax (amateurs) and Sportpark De Toekomst.\n* Ajax Youth Academy is the subject of the second triplet, and playsAt is the predicate that describes the relationship between Ajax Youth Academy and Sportpark De Toekomst.\n\nNote that there may be other possible RDF triplets that could be derived from the input sentence, but the above triplets capture the main relationships present in the sentence.",
"input": "You will be given a definition of a task first, then some input of the task.\nThis task is about using the specified sentence and converting the sentence to Resource Description Framework (RDF) triplets of the form (subject, predicate object). The RDF triplets generated must be such that the triplets accurately capture the structure and semantics of the input sentence. The input is a sentence and the output is a list of triplets of the form [subject, predicate, object] that capture the relationships present in the sentence. When a sentence has more than 1 RDF triplet possible, the output must contain all of them.\n\nAFC Ajax (amateurs)\u0027s ground is Sportpark De Toekomst where Ajax Youth Academy also play.\nOutput:"
},
"metadata": {
"distance-best-rating": 0.0,
"labelling-model": "gpt-4-1106-preview",
"length-generations-1": 131,
"length-generations-2": 731,
"length-input": 718,
"rating-generations-1": 9.0,
"rating-generations-2": 9.0
},
"responses": [],
"suggestions": [
{
"agent": null,
"question_name": "rating-rationale",
"score": null,
"type": null,
"value": "\nBoth Assistant 1 and Assistant 2 provided correct RDF triplets for the given sentence. Both assistants used a format that accurately represents the relationships present in the sentence with minor differences in the stylistic representation of the predicate.\n\nAssistant 1 used the natural language format for predicates, stating \"has ground\" and \"plays at\", which clearly aligns with the typical RDF representation where the predicate tries to be a URI that is more formal. However, since the task prompt doesn\u0027t specify a requirement for the predicates to be in URI form, this representation is acceptable, especially considering human readability.\n\nAssistant 2 transformed the predicates into a more formal-looking format by using camel case (hasGround, playsAt), which may suggest a transition towards a URI, although they are not provided as such. This is a common practice when designing RDF predicates, intending to align with web standards, although again, the task did not explicitly require this form.\n\nBoth assistants explained the relationships captured by the triplets, which is helpful for understanding how the RDF structure relates to the original sentence. There are no factual inaccuracies in either output, and both sets of triplets are equivalent in terms of the information they represent.\n\nOverall, the level of detail was similar, with each assistant providing a brief explanation following their respective triplets. Neither output contained unnecessary or irrelevant information, and no critical information was missing. \n\nBoth assistants would have received a score of 10 if the predicates were provided in a format that hinted at being URIs (e.g., prefixed with a namespace or in a full URI format), which is the more standard and formal practice for RDF predicates. Nevertheless, the assistants\u0027 performance was high given the context of the question, which did not specify this requirement. Therefore, both receive a score of 9."
},
{
"agent": null,
"question_name": "generations-1-rating",
"score": null,
"type": null,
"value": 9
},
{
"agent": null,
"question_name": "generations-2-rating",
"score": null,
"type": null,
"value": 9
}
],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"generations-1": "[\n [\"AFC Ajax (amateurs)\", \"has ground\", \"Sportpark De Toekomst\"],\n [\"Ajax Youth Academy\", \"plays at\", \"Sportpark De Toekomst\"]\n]",
"generations-1-rating": [],
"generations-1-rating-suggestion": 9,
"generations-1-rating-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"generations-2": "Sure, I\u0027d be happy to help! Here are the RDF triplets for the input sentence:\n\n[AFC Ajax (amateurs), hasGround, Sportpark De Toekomst]\n[Ajax Youth Academy, playsAt, Sportpark De Toekomst]\n\nExplanation:\n\n* AFC Ajax (amateurs) is the subject of the first triplet, and hasGround is the predicate that describes the relationship between AFC Ajax (amateurs) and Sportpark De Toekomst.\n* Ajax Youth Academy is the subject of the second triplet, and playsAt is the predicate that describes the relationship between Ajax Youth Academy and Sportpark De Toekomst.\n\nNote that there may be other possible RDF triplets that could be derived from the input sentence, but the above triplets capture the main relationships present in the sentence.",
"generations-2-rating": [],
"generations-2-rating-suggestion": 9,
"generations-2-rating-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"input": "You will be given a definition of a task first, then some input of the task.\nThis task is about using the specified sentence and converting the sentence to Resource Description Framework (RDF) triplets of the form (subject, predicate object). The RDF triplets generated must be such that the triplets accurately capture the structure and semantics of the input sentence. The input is a sentence and the output is a list of triplets of the form [subject, predicate, object] that capture the relationships present in the sentence. When a sentence has more than 1 RDF triplet possible, the output must contain all of them.\n\nAFC Ajax (amateurs)\u0027s ground is Sportpark De Toekomst where Ajax Youth Academy also play.\nOutput:",
"metadata": "{\"length-input\": 718, \"length-generations-1\": 131, \"length-generations-2\": 731, \"rating-generations-1\": 9.0, \"rating-generations-2\": 9.0, \"distance-best-rating\": 0.0, \"labelling-model\": \"gpt-4-1106-preview\"}",
"rating-rationale": [],
"rating-rationale-suggestion": "\nBoth Assistant 1 and Assistant 2 provided correct RDF triplets for the given sentence. Both assistants used a format that accurately represents the relationships present in the sentence with minor differences in the stylistic representation of the predicate.\n\nAssistant 1 used the natural language format for predicates, stating \"has ground\" and \"plays at\", which clearly aligns with the typical RDF representation where the predicate tries to be a URI that is more formal. However, since the task prompt doesn\u0027t specify a requirement for the predicates to be in URI form, this representation is acceptable, especially considering human readability.\n\nAssistant 2 transformed the predicates into a more formal-looking format by using camel case (hasGround, playsAt), which may suggest a transition towards a URI, although they are not provided as such. This is a common practice when designing RDF predicates, intending to align with web standards, although again, the task did not explicitly require this form.\n\nBoth assistants explained the relationships captured by the triplets, which is helpful for understanding how the RDF structure relates to the original sentence. There are no factual inaccuracies in either output, and both sets of triplets are equivalent in terms of the information they represent.\n\nOverall, the level of detail was similar, with each assistant providing a brief explanation following their respective triplets. Neither output contained unnecessary or irrelevant information, and no critical information was missing. \n\nBoth assistants would have received a score of 10 if the predicates were provided in a format that hinted at being URIs (e.g., prefixed with a namespace or in a full URI format), which is the more standard and formal practice for RDF predicates. Nevertheless, the assistants\u0027 performance was high given the context of the question, which did not specify this requirement. Therefore, both receive a score of 9.",
"rating-rationale-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
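Note that in the `datasets` view shown above, the `metadata` field arrives serialized as a JSON string rather than a dict, so it must be decoded before use. A minimal sketch of doing so (the `row` below is a trimmed, hypothetical stand-in for a record loaded via `load_dataset`):

```python
import json

# In the HuggingFace `datasets` view, `metadata` is a JSON string.
# This row is a trimmed stand-in for one loaded with load_dataset(...).
row = {
    "generations-1-rating-suggestion": 9,
    "generations-2-rating-suggestion": 9,
    "metadata": (
        '{"length-input": 718, "length-generations-1": 131, '
        '"length-generations-2": 731, "rating-generations-1": 9.0, '
        '"rating-generations-2": 9.0, "distance-best-rating": 0.0, '
        '"labelling-model": "gpt-4-1106-preview"}'
    ),
}

meta = json.loads(row["metadata"])  # decode the serialized metadata
ratings = [meta["rating-generations-1"], meta["rating-generations-2"]]

# distance-best-rating is the gap between the best and worst rating,
# so 0.0 means both generations were rated equally.
print(max(ratings) - min(ratings) == meta["distance-best-rating"])
```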
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **input** is of type `text`.
* **generations-1** is of type `text`.
* **generations-2** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **generations-1-rating** is of type `rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* **generations-2-rating** is of type `rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* **rating-rationale** is of type `text`.
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **generations-1-rating-suggestion** is of type `rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* (optional) **generations-2-rating-suggestion** is of type `rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* (optional) **rating-rationale-suggestion** is of type `text`.
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
dim/joke_explaination | ---
dataset_info:
features:
- name: url
dtype: string
- name: joke
dtype: string
- name: explaination
dtype: string
splits:
- name: train
num_bytes: 262894
num_examples: 377
download_size: 143161
dataset_size: 262894
---
# Dataset Card for "joke_explaination"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
totoztak/totokatz | ---
license: unknown
---
|
breadlicker45/rlhf-musenet | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 594357232
num_examples: 12103
download_size: 166332399
dataset_size: 594357232
---
# Dataset Card for "rlhf-musenet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amitraheja82/Market_Mail_Synthetic_DataSet1 | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 19809
num_examples: 10
download_size: 25170
dataset_size: 19809
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Market_Mail_Synthetic_DataSet1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/VQAv2_sample_testdev_embeddings | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: int64
- name: vision_embeddings
sequence: float32
splits:
- name: openai_clip_vit_large_patch14
num_bytes: 161538407.0
num_examples: 1000
download_size: 160035377
dataset_size: 161538407.0
---
# Dataset Card for "VQAv2_sample_testdev_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
islamrokon/Example | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 15422.921348314607
num_examples: 80
- name: test
num_bytes: 1735.0786516853932
num_examples: 9
download_size: 13925
dataset_size: 17158.0
---
# Dataset Card for "Example"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CollectiveCognition/chats-data-2023-09-22 | ---
license: mit
---
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Browse or contribute chats on the Collective Cognition website: https://collectivecognition.ai

### Supported Tasks and Leaderboards
- **Language Model Finetuning**: Training through behavior cloning and knowledge instilling.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but chats in other languages may appear as users contribute more conversations.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, the model name, a flag indicating whether the chat used custom instructions (the instructions themselves are currently not included), and the messages, each paired with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform to the ShareGPT standard.
Below is an example entry; note that the number of turns varies and can be as few as two:
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
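Since entries follow the ShareGPT layout shown above, flattening a chat into (role, text) turns is straightforward. A minimal sketch, using an abridged inline entry (with placeholder message text) rather than the hosted files:

```python
# Flatten a ShareGPT-style entry into (role, text) turns.
# The entry is an abridged, inline version of the example above.
entry = {
    "title": "Interest Formulas Clarification",
    "model_name": "GPT-4",
    "custom_instruction": False,
    "conversations": [
        {"from": "human", "value": "Is this correct? ..."},
        {"from": "assistant", "value": "Let me clarify: simple interest is I = PRT ..."},
    ],
}

# Each message carries its sender role in "from" and its text in "value".
turns = [(msg["from"], msg["value"]) for msg in entry["conversations"]]

for role, text in turns:
    print(f"{role}: {text[:40]}")
```

The same loop works on full entries loaded from this repository with `datasets.load_dataset`.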
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@collectivecognition.ai
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT |
CognitiveLab/image-prompts-raw | ---
dataset_info:
features:
- name: negativePrompt
dtype: string
- name: steps
dtype: string
- name: id
dtype: string
- name: sampler
dtype: string
- name: imageUrl
dtype: string
- name: cfgScale
dtype: string
- name: clipSkip
dtype: string
- name: url
dtype: string
- name: model
dtype: string
- name: prompt
dtype: string
- name: seed
dtype: string
splits:
- name: train
num_bytes: 11704189
num_examples: 12683
download_size: 4715565
dataset_size: 11704189
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jayabalambika/toy-diabetes | ---
license: mit
language:
- en
tags:
- code
pretty_name: toy dataset
size_categories:
- n<1K
--- |
ResplendentAI/Luna_Alpaca | ---
license: other
language:
- en
tags:
- not-for-all-audiences
size_categories:
- 1K<n<10K
--- |
jhosepharaujo/Test | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 14064240.0
num_examples: 5
download_size: 13373118
dataset_size: 14064240.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Gizachew/cybersecurity-ner | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-Indicator
'1': B-Malware
'2': B-Organization
'3': B-System
'4': B-Vulnerability
'5': I-Indicator
'6': I-Malware
'7': I-Organization
'8': I-System
'9': I-Vulnerability
'10': O
splits:
- name: train
num_bytes: 1197515
num_examples: 2664
- name: test
num_bytes: 336600
num_examples: 717
- name: validation
num_bytes: 339858
num_examples: 785
download_size: 385107
dataset_size: 1873973
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
liuyanchen1015/MULTI_VALUE_wnli_after_perfect | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 2761
num_examples: 12
- name: test
num_bytes: 14700
num_examples: 46
- name: train
num_bytes: 22396
num_examples: 98
download_size: 20636
dataset_size: 39857
---
# Dataset Card for "MULTI_VALUE_wnli_after_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_statistics-rule-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 225905
num_examples: 216
download_size: 114522
dataset_size: 225905
---
# Dataset Card for "mmlu-high_school_statistics-rule-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
allenai/sciq | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: sciq
pretty_name: SciQ
dataset_info:
features:
- name: question
dtype: string
- name: distractor3
dtype: string
- name: distractor1
dtype: string
- name: distractor2
dtype: string
- name: correct_answer
dtype: string
- name: support
dtype: string
splits:
- name: train
num_bytes: 6546183
num_examples: 11679
- name: validation
num_bytes: 554120
num_examples: 1000
- name: test
num_bytes: 563927
num_examples: 1000
download_size: 4674410
dataset_size: 7664230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "sciq"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/sciq](https://allenai.org/data/sciq)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
### Dataset Summary
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"correct_answer": "coriolis effect",
"distractor1": "muon effect",
"distractor2": "centrifugal effect",
"distractor3": "tropical effect",
"question": "What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?",
"support": "\"Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `question`: a `string` feature.
- `distractor3`: a `string` feature.
- `distractor1`: a `string` feature.
- `distractor2`: a `string` feature.
- `correct_answer`: a `string` feature.
- `support`: a `string` feature.
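Since the correct answer and the distractors are separate fields, assembling a standard 4-option multiple-choice item requires shuffling them yourself. A minimal sketch (the function name and output format are our own, not part of the dataset):

```python
import random

def to_multiple_choice(example, seed=0):
    """Turn one SciQ row into a shuffled 4-option question.

    `example` is a dict with the fields listed above.
    """
    options = [example["correct_answer"], example["distractor1"],
               example["distractor2"], example["distractor3"]]
    random.Random(seed).shuffle(options)
    return {
        "question": example["question"],
        "options": options,
        "label": options.index(example["correct_answer"]),
    }
```

With the `datasets` library this can be applied to every row via `dataset.map(to_multiple_choice)`.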
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|11679| 1000|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the [Creative Commons Attribution-NonCommercial 3.0 Unported License](http://creativecommons.org/licenses/by-nc/3.0/).
### Citation Information
```
@inproceedings{SciQ,
title={Crowdsourcing Multiple Choice Science Questions},
  author={Johannes Welbl and Nelson F. Liu and Matt Gardner},
year={2017},
journal={arXiv:1707.06209v1}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
tuanmanh28/VIVOS_CommonVoice_FOSD_CleanControl_dataset | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 2725057166.0
num_examples: 39334
- name: test
num_bytes: 246735548.24
num_examples: 5046
download_size: 2900684229
dataset_size: 2971792714.24
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "VIVOS_CommonVoice_FOSD_CleanControl_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KBLab/rixvox | ---
language: sv
license: cc-by-4.0
tags:
- audio
- speech-recognition
task_categories:
- automatic-speech-recognition
size_categories:
- 100K<n<1M
multilinguality:
- monolingual
---
# Dataset Card for RixVox
## Dataset Description
- **Repository:** [Riksdagen anföranden repository](https://github.com/kb-labb/riksdagen_anforanden)
- **Paper:** ["RixVox: A Swedish Speech Corpus with 5500 Hours of Speech from Parliamentary Debates"](https://kb-labb.github.io/posts/2023-03-09-rixvox-a-swedish-speech-corpus/)
- **Point of Contact:** [KBLab](mailto:kblabb@kb.se)
- **Total amount of disk used:** ca. 1.2 TB
### Dataset Summary
RixVox is a speech dataset comprised of speeches from the Riksdag (the Swedish Parliament). It covers speeches from debates during the period 2003-2023. Audio from speeches has been aligned, at the sentence level, with transcripts from written protocols using `aeneas`. An observation may consist of one or several concatenated sentences (up to 30 seconds in duration). Detailed speaker metadata is available for each observation, including the speaker's name, gender, political party, birth year and the electoral district they represent. The dataset contains a total of 5493 hours of speech with transcriptions.
## How to use & Supported Tasks
### Supported Tasks
Tasks are not supported by default (there are no label fields). The dataset may however be suited for:
- Automatic Speech Recognition (ASR).
- Speaker identification and verification.
- Creation of synthetic diarization datasets.
- Research on bias in ASR systems.
### How to use
To download and extract the files locally you can use `load_dataset()`. We recommend you set the `cache_dir` argument to point to a location that has plenty of disk space (1.2TB+). Here's how to download the `train` split:
```python
from datasets import load_dataset
# To download/load all splits at once, don't specify a split
rixvox = load_dataset("KBLab/rixvox", split="train", cache_dir="data_rixvox")
```
You can also stream the dataset. This is useful if you want to explore the dataset or if you don't have enough disk space to download the entire dataset. Here's how to stream the `train` split:
```python
from datasets import load_dataset
rixvox = load_dataset("KBLab/rixvox", cache_dir="data_rixvox", split="train", streaming=True)
print(next(iter(rixvox)))
# Grab 5 observations
rixvox_subset = rixvox.take(5)
for example in rixvox_subset:
print(example)
```
**Create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch)** with your dataset.
Local mode:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
# Dataset is not pre-shuffled, recommend shuffling it before training.
rixvox = load_dataset("KBLab/rixvox", split="train", cache_dir="data_rixvox")
batch_sampler = BatchSampler(RandomSampler(rixvox), batch_size=32, drop_last=False)
dataloader = DataLoader(rixvox, batch_sampler=batch_sampler)
```
Streaming mode:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
rixvox = load_dataset("KBLab/rixvox", split="train", cache_dir="data_rixvox", streaming=True)
dataloader = DataLoader(rixvox, batch_size=32)
```
See Huggingface's guide on [streaming datasets](https://huggingface.co/docs/datasets/v1.11.0/dataset_streaming.html) for more information on how to shuffle in streaming mode.
### Languages
- Swedish. The BCP 47 language tag for Swedish is `sv`.
## Dataset Structure
### Data Instances
There are a total of `835044` observations from `1194` different speakers. Each observation can be up to 30 seconds in duration. An observation belongs to a debate (`dokid`), is extracted from a speech (`anforande_nummer`), and is numbered according to its order within the speech (`observation_nr`). Here is an example of an observation:
```
{'dokid': 'GR01BOU3',
'anforande_nummer': 191,
'observation_nr': 0,
'audio': {'path': 'GR01BOU3/2442210220028601121_anf191_1_25.wav',
'array': array([0.01171875, 0.01242065, 0.01071167, ..., 0.00689697, 0.00918579,
0.00650024]),
'sampling_rate': 16000},
'text': 'Kristdemokraterna står bakom alla reservationer med kristdemokratiska förtecken, men jag nöjer mig med att yrka bifall till reservation 1. Jag ska i det här inlägget beröra några av de åtta punkter som är föremål för reservationer från kristdemokratiskt håll, i vissa fall tillsammans med andra partier.',
'debatedate': datetime.datetime(2003, 12, 4, 0, 0),
'speaker': 'Göran Hägglund',
'party': 'KD',
'gender': 'male',
'birth_year': 1959,
'electoral_district': 'Hallands län',
'intressent_id': '0584659199514',
'speaker_from_id': True,
'speaker_audio_meta': 'Göran Hägglund (Kd)',
'start': 1.4,
'end': 24.96,
'duration': 23.560000000000002,
'bleu_score': 0.7212783273624307,
'filename': 'GR01BOU3/2442210220028601121_anf191_1_25.wav',
'path': 'GR01BOU3/2442210220028601121_anf191_1_25.wav',
'speaker_total_hours': 30.621333333333332}
```
See more examples in the [dataset viewer](https://huggingface.co/datasets/KBLab/rixvox/viewer/default/train).
### Data Fields
* `dokid`: Document id for the debate used by the Riksdag. This is the same for all speeches in a debate.
* `anforande_nummer`: Speech number within the debate, or within the debate sessions on a particular day. Should create a unique primary key for a speech in combination with `dokid` (sometimes there are duplicates, but we removed them from this dataset).
* `observation_nr`: Observation number within the speech. Creates a unique identifier for an observation in combination with `dokid` and `anforande_nummer`.
* `text`: The text transcript from written protocols. The transcripts are not always verbatim. Transcribers have, to varying degrees, adjusted sentence ordering, wording and phrasing where they deemed it appropriate.
* `debatedate`: The date of the debate.
* `start`: The start time of the observation within a speech (in seconds).
* `end`: The end time of the observation within a speech (in seconds).
* `duration`: The duration of the observation (`end` subtracted with `start`).
* `intressent_id`: Unique id for the speaker within the Riksdag's database (see [person.csv.zip](https://data.riksdagen.se/dataset/person/person.csv.zip) from the Riksdag).
* `speaker`: The speaker's name retrieved via the `intressent_id`.
* `party`: The speaker's party retrieved via the `intressent_id`.
* `gender`: The speaker's gender retrieved via the `intressent_id`.
* `birth_year`: The speaker's birth year retrieved via the `intressent_id`.
* `electoral_district`: The electoral district which the speaker represents if they are/were a member of parliament (retrieved via the `intressent_id`).
* `speaker_audio_meta`: The speaker's name and title as listed in the Riksdag's original text-format metadata (sometimes wrong and mismatched against `intressent_id`).
* `speaker_from_id`: Whether the speaker metadata was retrieved via the `intressent_id` or via the Riksdag's original metadata (for those speeches with a missing `intressent_id`).
* `bleu_score`: The BLEU score of the automatic speech recognition (ASR) transcript against the Riksdag's written protocol. Calculated on the entirety of the speech that an observation (30s snippet) is extracted from. A low number for a speech may indicate either that i) the ASR model had trouble transcribing the speaker's accent or dialect, or ii) the transcription took certain liberties in editing and rephrasing the speech.
* `speaker_total_hours`: The total number of hours of speech from the speaker in the RixVox dataset.
* `filename`: The filename of the observation in the compressed tar.gz files. Useful if you don't want to use Huggingface `datasets`, but would rather manually download and extract the files from the data shards.
* `path`: Dynamically created variable. Contains the local path to the observation's audio file after you download and extract the files via `load_dataset()` in the `datasets` library.
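Fields like `duration` and `bleu_score` make it straightforward to filter the corpus before training. A small sketch (the thresholds are illustrative choices of ours, not recommendations from KBLab):

```python
def quality_filter(row, min_bleu=0.5, min_duration=1.0, max_duration=30.0):
    """Keep an observation only if the speech-level BLEU against the
    written protocol is high enough and the clip length is reasonable."""
    return (row["bleu_score"] >= min_bleu
            and min_duration <= row["duration"] <= max_duration)
```

With the `datasets` library this can be applied as `rixvox.filter(quality_filter)`.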
### Data Splits
Dataset splits were randomly sampled on the speaker level. That is, a speaker is only present in a single split. We sample speakers for each split until the following conditions are met:
- 98% of the total number of hours of speech are included in the train split.
- 1% of the total number of hours of speech are included in the validation split.
- 1% of the total number of hours of speech are included in the test split.
| Dataset Split | Observations | Total duration of speech (hours) | Average duration obs. (seconds) | Number of speakers |
| ------------- | ----------------: | -------------------------------: | ------------------------------: | -----------------: |
| Train | 818227 | 5383 | 23.69 | 1165 |
| Validation | 7933 | 52 | 23.50 | 18 |
| Test | 8884 | 59 | 23.74 | 11 |
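Sampling whole speakers until each split holds roughly its share of total hours can be sketched as follows (a greedy illustration under our own assumptions, not the exact procedure used to build RixVox):

```python
import random

def split_by_speaker(speaker_hours, fractions=(0.98, 0.01, 0.01), seed=0):
    """Greedily assign whole speakers to train/validation/test splits
    until each split reaches its target share of total speech hours.
    `speaker_hours` maps speaker name -> hours of speech."""
    total = sum(speaker_hours.values())
    targets = [f * total for f in fractions]
    speakers = list(speaker_hours)
    random.Random(seed).shuffle(speakers)
    splits = [[], [], []]
    filled = [0.0, 0.0, 0.0]
    for spk in speakers:
        # Put the speaker in the first split that still needs hours.
        for i in range(3):
            if filled[i] < targets[i]:
                splits[i].append(spk)
                filled[i] += speaker_hours[spk]
                break
        else:
            splits[0].append(spk)  # overflow goes to train
    return splits
```

Because assignment is per speaker, no speaker can appear in more than one split.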
## Dataset Creation
For more information about the creation of this dataset, see the article ["Finding Speeches in the Riksdag's Debates"](https://kb-labb.github.io/posts/2023-02-15-finding-speeches-in-the-riksdags-debates/) from our blog.
### Curation Rationale
Before RixVox, there were only a couple of hundred hours of transcribed speech available to train ASR models for Swedish. ASR models such as Whisper have shown that model performance can benefit significantly from adding more supervised data during pretraining or finetuning. Media from debates in the Riksdag are published openly on the web together with transcripts and other metadata. The open data initiatives of the Riksdag presented an opportunity to create a high-quality open speech corpus for Swedish.
### Source Data
The Swedish Parliament.
- [Transcripts of speeches](https://data.riksdagen.se/data/anforanden/).
- Use the `rel_dok_id` of transcripts of speeches to query the Riksdag's media API (e.g. https://data.riksdagen.se/api/mhs-vodapi?H901FiU1 ) for available media and metadata.
#### Initial Data Collection and Normalization
For information on how the speeches were segmented and identified in debate audio files, see the article ["Finding Speeches in the Riksdag's Debates"](https://kb-labb.github.io/posts/2023-02-15-finding-speeches-in-the-riksdags-debates/).
For information on how the speech segmentations were used to create the final RixVox dataset, see the article ["RixVox: A Swedish Speech Corpus with 5500 Hours of Speech from Parliamentary Debates"](https://kb-labb.github.io/posts/2023-03-09-rixvox-a-swedish-speech-corpus/).
The code to replicate the creation of the dataset is open and available at the GitHub repository [KBLab/riksdagen_anforanden](https://github.com/kb-labb/riksdagen_anforanden). Processing everything can take 1-3 weeks on a workstation with a consumer-grade GPU.
#### Who are the source language producers?
The written protocols of speeches are manually produced by the Riksdag. Transcription is not always verbatim, but rather catches the intent of the speaker.
Segmenting speeches to determine when they start and end in a debate was done automatically. Sentence level alignment of the written protocols to the audio files was also done automatically using `aeneas`. See the articles in citation information for more details.
### Annotations
#### Annotation process
The process of aligning speech to written protocols was automatic. It followed the following general steps:
1. We used ASR to automatically transcribe the debate audio files and get word timestamps for the machine generated transcription.
2. We used fuzzy string matching to determine approximate start/end of a speech, matching the official written protocol of the speech to the machine generated transcription of the debate.
3. We perform speaker diarization using pyannote.audio.
4. We assign speaker diarization segments to speeches by the degree of overlap between approximate start/end from fuzzy string matching and the speaker diarization segments. The start and end of the diarization segment is used as our new adjusted start and end metadata of the speech.
5. Based on the adjusted start/end metadata of a speech, we split and extract the audio of speeches from the debates and then align the segmented speeches to the written protocol using `aeneas` (sentence-level alignment).
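Step 4 above boils down to interval overlap between the fuzzy-matched speech span and each diarization segment. A simplified sketch (function names are ours, not from the pipeline):

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length in seconds of the overlap between two time intervals."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_segment(approx_start, approx_end, segments):
    """Pick the diarization segment (start, end) that overlaps most with
    the approximate speech span from fuzzy string matching."""
    return max(segments,
               key=lambda s: overlap(approx_start, approx_end, s[0], s[1]))
```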
#### Who are the annotators?
No manual annotations.
### Personal and Sensitive Information
The speakers are members of parliament or ministers speaking publicly in the Riksdag. The Riksdag is a public institution and the speeches are publicly available on the web as open data.
## Considerations for Using the Data
### Social Impact of Dataset
We expect the dataset primarily to be used in training ASR models for Swedish. The Swedish speech recognition performance of multilingual ASR models may also benefit from the availability of a large Swedish speech corpus. In turn, improved ASR models can serve to help increase accessibility of audio and video media content for people with hearing impairments.
The dataset can also be used to train models for other audio tasks such as speaker diarization, speaker verification, and speaker recognition.
Since metadata regarding the age, gender, and electoral district of the speaker is included, the dataset can possibly also be used to study bias in ASR models.
### Discussion of Biases
The dataset includes parliamentary speeches, which are often more formal than everyday speech.
During the creation of the dataset, we found that speech segmentations based on speaker diarization were more likely to fail when a preceding speaker, the speaker of the house, and the speaker of the following speech were all of the same gender. However, all in all, only a small number of speeches were filtered out of the final RixVox dataset. After quality filtering of the dataset, 5500 out of 5858 hours remained. We do not believe any significant systematic bias was introduced by this filtering.
Only minimal deduplication was performed to weed out commonly repeated phrases. For example, certain phrases such as "Fru talman!" and "Herr talman!" tend to be used a lot as a matter of formality. These phrases tend to be present at the beginning of most transcripts regardless of whether they were uttered by the speaker or not. For this reason we have removed the first aligned sentence of each speech when creating RixVox. The aforementioned phrases are repeated frequently within speeches as well, though. As such it might be beneficial to perform more aggressive deduplication of the dataset before training models.
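A minimal pass for stripping such formulaic openings before training might look like this (the phrase list is illustrative and far from exhaustive):

```python
import re

# Formal openings commonly repeated in Riksdag speeches.
OPENING = re.compile(r"^(Fru talman!|Herr talman!)\s*", re.IGNORECASE)

def strip_opening(text):
    """Remove one formulaic opening phrase from the start of a transcript."""
    return OPENING.sub("", text, count=1)
```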
### Other Known Limitations
## Additional Information
### Dataset Curators
KBLab at the the National Library of Sweden.
### Future updates
There is a possibility that RixVox will be periodically, and irregularly, updated to include both older and newer speeches. Older recordings of parliamentary debates from 1966 to 2002 do exist, but they are not yet part of the Riksdag's open data. KBLab are exploring the possibility of adding metadata to these recordings by applying the existing speech segmentation and alignment pipeline to them.
Each year also brings new parliamentary debates, with recent years adding 400-500 hours of speech per year.
### Licensing Information
[CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)
Cite the Swedish Parliament.
To reference RixVox, feel free to cite KBLab blog posts in the citation information below.
### Citation Information
```
@misc{rekathati2023rixvox:,
author = {Rekathati, Faton},
title = {The KBLab Blog: RixVox: A Swedish Speech Corpus with 5500 Hours of Speech from Parliamentary Debates},
url = {https://kb-labb.github.io/posts/2023-03-09-rixvox-a-swedish-speech-corpus/},
year = {2023}
}
```
```
@misc{rekathati2023finding,
author = {Rekathati, Faton},
title = {The KBLab Blog: Finding Speeches in the Riksdag's Debates},
url = {https://kb-labb.github.io/posts/2023-02-15-finding-speeches-in-the-riksdags-debates/},
year = {2023}
}
```
The Swedish Parliament.
### Contributions
Thanks to [@lhoestq](https://huggingface.co/lhoestq) for reviewing the dataset script. |
open-llm-leaderboard/details_microsoft__CodeGPT-small-py | ---
pretty_name: Evaluation run of microsoft/CodeGPT-small-py
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [microsoft/CodeGPT-small-py](https://huggingface.co/microsoft/CodeGPT-small-py)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_microsoft__CodeGPT-small-py\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-16T10:55:21.745604](https://huggingface.co/datasets/open-llm-leaderboard/details_microsoft__CodeGPT-small-py/blob/main/results_2023-10-16T10-55-21.745604.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.00388003355704698,\n\
\ \"em_stderr\": 0.0006366682825519956,\n \"f1\": 0.016416736577181235,\n\
\ \"f1_stderr\": 0.0008900949322041355,\n \"acc\": 0.24388318863456984,\n\
\ \"acc_stderr\": 0.007024139410202808\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.00388003355704698,\n \"em_stderr\": 0.0006366682825519956,\n\
\ \"f1\": 0.016416736577181235,\n \"f1_stderr\": 0.0008900949322041355\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.4877663772691397,\n\
\ \"acc_stderr\": 0.014048278820405616\n }\n}\n```"
repo_url: https://huggingface.co/microsoft/CodeGPT-small-py
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T10_55_21.745604
path:
- '**/details_harness|drop|3_2023-10-16T10-55-21.745604.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T10-55-21.745604.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T10_55_21.745604
path:
- '**/details_harness|gsm8k|5_2023-10-16T10-55-21.745604.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-16T10-55-21.745604.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T10_55_21.745604
path:
- '**/details_harness|winogrande|5_2023-10-16T10-55-21.745604.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T10-55-21.745604.parquet'
- config_name: results
data_files:
- split: 2023_10_16T10_55_21.745604
path:
- results_2023-10-16T10-55-21.745604.parquet
- split: latest
path:
- results_2023-10-16T10-55-21.745604.parquet
---
# Dataset Card for Evaluation run of microsoft/CodeGPT-small-py
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/microsoft/CodeGPT-small-py
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [microsoft/CodeGPT-small-py](https://huggingface.co/microsoft/CodeGPT-small-py) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_microsoft__CodeGPT-small-py",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T10:55:21.745604](https://huggingface.co/datasets/open-llm-leaderboard/details_microsoft__CodeGPT-small-py/blob/main/results_2023-10-16T10-55-21.745604.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.00388003355704698,
"em_stderr": 0.0006366682825519956,
"f1": 0.016416736577181235,
"f1_stderr": 0.0008900949322041355,
"acc": 0.24388318863456984,
"acc_stderr": 0.007024139410202808
},
"harness|drop|3": {
"em": 0.00388003355704698,
"em_stderr": 0.0006366682825519956,
"f1": 0.016416736577181235,
"f1_stderr": 0.0008900949322041355
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.4877663772691397,
"acc_stderr": 0.014048278820405616
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ammarnasr/data_engineering_8_with_code_dataset | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: repo_url
dtype: string
- name: repo_description
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_last_updated
dtype: string
- name: repo_created_at
dtype: string
- name: repo_size
dtype: int64
- name: repo_license
dtype: string
- name: language
dtype: string
- name: text
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphnanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 18580918
num_examples: 495
download_size: 7162238
dataset_size: 18580918
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
edbeeching/prj_gia_dataset_atari_2B_atari_stargunner_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning environment for the atari_stargunner environment, sample for the policy atari_2B_atari_stargunner_1111
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
open-llm-leaderboard/details_ddyuudd__mistral_dmbr03_32_sig | ---
pretty_name: Evaluation run of ddyuudd/mistral_dmbr03_32_sig
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ddyuudd/mistral_dmbr03_32_sig](https://huggingface.co/ddyuudd/mistral_dmbr03_32_sig)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ddyuudd__mistral_dmbr03_32_sig\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-23T08:05:12.174377](https://huggingface.co/datasets/open-llm-leaderboard/details_ddyuudd__mistral_dmbr03_32_sig/blob/main/results_2024-02-23T08-05-12.174377.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6103662016330086,\n\
\ \"acc_stderr\": 0.032961748269303454,\n \"acc_norm\": 0.6156773617227023,\n\
\ \"acc_norm_stderr\": 0.03363799686250605,\n \"mc1\": 0.3353733170134639,\n\
\ \"mc1_stderr\": 0.016527534039668987,\n \"mc2\": 0.4789785473368172,\n\
\ \"mc2_stderr\": 0.015240699677840055\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5674061433447098,\n \"acc_stderr\": 0.014478005694182524,\n\
\ \"acc_norm\": 0.5998293515358362,\n \"acc_norm_stderr\": 0.014317197787809174\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6381198964349731,\n\
\ \"acc_stderr\": 0.004795622757327147,\n \"acc_norm\": 0.8322047400916153,\n\
\ \"acc_norm_stderr\": 0.0037292066767701934\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n\
\ \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n\
\ \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6381578947368421,\n \"acc_stderr\": 0.03910525752849725,\n\
\ \"acc_norm\": 0.6381578947368421,\n \"acc_norm_stderr\": 0.03910525752849725\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n\
\ \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n\
\ \"acc_stderr\": 0.03773809990686934,\n \"acc_norm\": 0.7152777777777778,\n\
\ \"acc_norm_stderr\": 0.03773809990686934\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\"\
: 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5780346820809249,\n\
\ \"acc_stderr\": 0.0376574669386515,\n \"acc_norm\": 0.5780346820809249,\n\
\ \"acc_norm_stderr\": 0.0376574669386515\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.04724007352383886,\n\
\ \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.04724007352383886\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.03265019475033582,\n\
\ \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.03265019475033582\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n\
\ \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n\
\ \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42328042328042326,\n \"acc_stderr\": 0.025446365634406786,\n \"\
acc_norm\": 0.42328042328042326,\n \"acc_norm_stderr\": 0.025446365634406786\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7290322580645161,\n\
\ \"acc_stderr\": 0.025284416114900156,\n \"acc_norm\": 0.7290322580645161,\n\
\ \"acc_norm_stderr\": 0.025284416114900156\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n\
\ \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n\
\ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7777777777777778,\n \"acc_stderr\": 0.02962022787479048,\n \"\
acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.02962022787479048\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8134715025906736,\n \"acc_stderr\": 0.02811209121011746,\n\
\ \"acc_norm\": 0.8134715025906736,\n \"acc_norm_stderr\": 0.02811209121011746\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5743589743589743,\n \"acc_stderr\": 0.02506909438729652,\n \
\ \"acc_norm\": 0.5743589743589743,\n \"acc_norm_stderr\": 0.02506909438729652\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34444444444444444,\n \"acc_stderr\": 0.02897264888484427,\n \
\ \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.02897264888484427\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6260504201680672,\n \"acc_stderr\": 0.03142946637883708,\n \
\ \"acc_norm\": 0.6260504201680672,\n \"acc_norm_stderr\": 0.03142946637883708\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\"\
: 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7779816513761468,\n\
\ \"acc_stderr\": 0.017818849564796624,\n \"acc_norm\": 0.7779816513761468,\n\
\ \"acc_norm_stderr\": 0.017818849564796624\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.4398148148148148,\n \"acc_stderr\": 0.03385177976044812,\n\
\ \"acc_norm\": 0.4398148148148148,\n \"acc_norm_stderr\": 0.03385177976044812\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849313,\n \"\
acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849313\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7468354430379747,\n \"acc_stderr\": 0.02830465794303529,\n \
\ \"acc_norm\": 0.7468354430379747,\n \"acc_norm_stderr\": 0.02830465794303529\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n\
\ \"acc_stderr\": 0.031602951437766785,\n \"acc_norm\": 0.6681614349775785,\n\
\ \"acc_norm_stderr\": 0.031602951437766785\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.03915345408847836,\n\
\ \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.03915345408847836\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098823,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098823\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7037037037037037,\n\
\ \"acc_stderr\": 0.04414343666854933,\n \"acc_norm\": 0.7037037037037037,\n\
\ \"acc_norm_stderr\": 0.04414343666854933\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n\
\ \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.021901905115073325,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.021901905115073325\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7943805874840357,\n\
\ \"acc_stderr\": 0.01445250045678583,\n \"acc_norm\": 0.7943805874840357,\n\
\ \"acc_norm_stderr\": 0.01445250045678583\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7023121387283237,\n \"acc_stderr\": 0.024617055388677003,\n\
\ \"acc_norm\": 0.7023121387283237,\n \"acc_norm_stderr\": 0.024617055388677003\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3217877094972067,\n\
\ \"acc_stderr\": 0.015624236160792579,\n \"acc_norm\": 0.3217877094972067,\n\
\ \"acc_norm_stderr\": 0.015624236160792579\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.026643278474508755,\n\
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.026643278474508755\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\
\ \"acc_stderr\": 0.02575586592263295,\n \"acc_norm\": 0.7106109324758842,\n\
\ \"acc_norm_stderr\": 0.02575586592263295\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409828,\n\
\ \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409828\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.42633637548891784,\n\
\ \"acc_stderr\": 0.012630884771599698,\n \"acc_norm\": 0.42633637548891784,\n\
\ \"acc_norm_stderr\": 0.012630884771599698\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6323529411764706,\n \"acc_stderr\": 0.029289413409403192,\n\
\ \"acc_norm\": 0.6323529411764706,\n \"acc_norm_stderr\": 0.029289413409403192\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6258169934640523,\n \"acc_stderr\": 0.01957695312208883,\n \
\ \"acc_norm\": 0.6258169934640523,\n \"acc_norm_stderr\": 0.01957695312208883\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n\
\ \"acc_stderr\": 0.046313813194254656,\n \"acc_norm\": 0.6272727272727273,\n\
\ \"acc_norm_stderr\": 0.046313813194254656\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.029043088683304328,\n\
\ \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.029043088683304328\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \
\ \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\
\ \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n\
\ \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3353733170134639,\n\
\ \"mc1_stderr\": 0.016527534039668987,\n \"mc2\": 0.4789785473368172,\n\
\ \"mc2_stderr\": 0.015240699677840055\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7805840568271507,\n \"acc_stderr\": 0.01163126836060778\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.35329795299469297,\n \
\ \"acc_stderr\": 0.013166337192115686\n }\n}\n```"
repo_url: https://huggingface.co/ddyuudd/mistral_dmbr03_32_sig
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|arc:challenge|25_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|gsm8k|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hellaswag|10_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-23T08-05-12.174377.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-23T08-05-12.174377.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- '**/details_harness|winogrande|5_2024-02-23T08-05-12.174377.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-23T08-05-12.174377.parquet'
- config_name: results
data_files:
- split: 2024_02_23T08_05_12.174377
path:
- results_2024-02-23T08-05-12.174377.parquet
- split: latest
path:
- results_2024-02-23T08-05-12.174377.parquet
---
# Dataset Card for Evaluation run of ddyuudd/mistral_dmbr03_32_sig
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ddyuudd/mistral_dmbr03_32_sig](https://huggingface.co/ddyuudd/mistral_dmbr03_32_sig) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ddyuudd__mistral_dmbr03_32_sig",
"harness_winogrande_5",
             split="latest")
```
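Judging by the YAML config above, the timestamped split names appear to be derived from the run timestamp by replacing the `-` and `:` separators with underscores. A small helper can map between the two (this is an observation about the names shown here, not an official API):

```python
# Assumption: split names replace '-' and ':' in the run timestamp with '_',
# e.g. "2024-02-23T08:05:12.174377" -> "2024_02_23T08_05_12.174377".
def timestamp_to_split(ts: str) -> str:
    """Map an ISO-like run timestamp to its dataset split name."""
    return ts.replace("-", "_").replace(":", "_")

print(timestamp_to_split("2024-02-23T08:05:12.174377"))
# -> 2024_02_23T08_05_12.174377
```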
## Latest results
These are the [latest results from run 2024-02-23T08:05:12.174377](https://huggingface.co/datasets/open-llm-leaderboard/details_ddyuudd__mistral_dmbr03_32_sig/blob/main/results_2024-02-23T08-05-12.174377.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6103662016330086,
"acc_stderr": 0.032961748269303454,
"acc_norm": 0.6156773617227023,
"acc_norm_stderr": 0.03363799686250605,
"mc1": 0.3353733170134639,
"mc1_stderr": 0.016527534039668987,
"mc2": 0.4789785473368172,
"mc2_stderr": 0.015240699677840055
},
"harness|arc:challenge|25": {
"acc": 0.5674061433447098,
"acc_stderr": 0.014478005694182524,
"acc_norm": 0.5998293515358362,
"acc_norm_stderr": 0.014317197787809174
},
"harness|hellaswag|10": {
"acc": 0.6381198964349731,
"acc_stderr": 0.004795622757327147,
"acc_norm": 0.8322047400916153,
"acc_norm_stderr": 0.0037292066767701934
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6381578947368421,
"acc_stderr": 0.03910525752849725,
"acc_norm": 0.6381578947368421,
"acc_norm_stderr": 0.03910525752849725
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.03773809990686934,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.03773809990686934
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5780346820809249,
"acc_stderr": 0.0376574669386515,
"acc_norm": 0.5780346820809249,
"acc_norm_stderr": 0.0376574669386515
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.04724007352383886,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.04724007352383886
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.03265019475033582,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.03265019475033582
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42328042328042326,
"acc_stderr": 0.025446365634406786,
"acc_norm": 0.42328042328042326,
"acc_norm_stderr": 0.025446365634406786
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7290322580645161,
"acc_stderr": 0.025284416114900156,
"acc_norm": 0.7290322580645161,
"acc_norm_stderr": 0.025284416114900156
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.02962022787479048,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.02962022787479048
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8134715025906736,
"acc_stderr": 0.02811209121011746,
"acc_norm": 0.8134715025906736,
"acc_norm_stderr": 0.02811209121011746
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5743589743589743,
"acc_stderr": 0.02506909438729652,
"acc_norm": 0.5743589743589743,
"acc_norm_stderr": 0.02506909438729652
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.02897264888484427,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.02897264888484427
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6260504201680672,
"acc_stderr": 0.03142946637883708,
"acc_norm": 0.6260504201680672,
"acc_norm_stderr": 0.03142946637883708
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7779816513761468,
"acc_stderr": 0.017818849564796624,
"acc_norm": 0.7779816513761468,
"acc_norm_stderr": 0.017818849564796624
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4398148148148148,
"acc_stderr": 0.03385177976044812,
"acc_norm": 0.4398148148148148,
"acc_norm_stderr": 0.03385177976044812
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849313,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849313
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7468354430379747,
"acc_stderr": 0.02830465794303529,
"acc_norm": 0.7468354430379747,
"acc_norm_stderr": 0.02830465794303529
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.031602951437766785,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.031602951437766785
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7251908396946565,
"acc_stderr": 0.03915345408847836,
"acc_norm": 0.7251908396946565,
"acc_norm_stderr": 0.03915345408847836
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098823,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098823
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.04414343666854933,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.04414343666854933
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.021901905115073325,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.021901905115073325
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.71,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7943805874840357,
"acc_stderr": 0.01445250045678583,
"acc_norm": 0.7943805874840357,
"acc_norm_stderr": 0.01445250045678583
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7023121387283237,
"acc_stderr": 0.024617055388677003,
"acc_norm": 0.7023121387283237,
"acc_norm_stderr": 0.024617055388677003
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3217877094972067,
"acc_stderr": 0.015624236160792579,
"acc_norm": 0.3217877094972067,
"acc_norm_stderr": 0.015624236160792579
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.026643278474508755,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.026643278474508755
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.02575586592263295,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.02575586592263295
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.026105673861409828,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.026105673861409828
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.42633637548891784,
"acc_stderr": 0.012630884771599698,
"acc_norm": 0.42633637548891784,
"acc_norm_stderr": 0.012630884771599698
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6323529411764706,
"acc_stderr": 0.029289413409403192,
"acc_norm": 0.6323529411764706,
"acc_norm_stderr": 0.029289413409403192
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6258169934640523,
"acc_stderr": 0.01957695312208883,
"acc_norm": 0.6258169934640523,
"acc_norm_stderr": 0.01957695312208883
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6272727272727273,
"acc_stderr": 0.046313813194254656,
"acc_norm": 0.6272727272727273,
"acc_norm_stderr": 0.046313813194254656
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.710204081632653,
"acc_stderr": 0.029043088683304328,
"acc_norm": 0.710204081632653,
"acc_norm_stderr": 0.029043088683304328
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640038,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640038
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3353733170134639,
"mc1_stderr": 0.016527534039668987,
"mc2": 0.4789785473368172,
"mc2_stderr": 0.015240699677840055
},
"harness|winogrande|5": {
"acc": 0.7805840568271507,
"acc_stderr": 0.01163126836060778
},
"harness|gsm8k|5": {
"acc": 0.35329795299469297,
"acc_stderr": 0.013166337192115686
}
}
```
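The per-task MMLU scores above can be aggregated into a single average in the same spirit as the leaderboard. A minimal sketch, assuming only the `harness|hendrycksTest-` key prefix visible in the JSON (the toy `sample` dict below stands in for the full results):

```python
# Average the "acc" values of all MMLU (hendrycksTest) entries in a results
# dict shaped like the JSON above. Key prefix taken from the JSON itself.
def mmlu_average(results: dict) -> float:
    accs = [
        v["acc"]
        for k, v in results.items()
        if k.startswith("harness|hendrycksTest-")
    ]
    return sum(accs) / len(accs)

# Toy subset of the results above (not the full 57-task dict):
sample = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.34},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5777777777777777},
    "harness|winogrande|5": {"acc": 0.7805840568271507},  # ignored: not MMLU
}
print(round(mmlu_average(sample), 4))
# -> 0.4589
```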
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ura-hcmut/wiki_lingua-dpo | ---
language:
- vi
size_categories:
- 1K<n<10K
license: mit
configs:
- config_name: default
data_files:
- split: test
path: wiki_lingua-dpo.json
--- |
CyberHarem/asbestos_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of asbestos/アスベストス/石棉 (Arknights)
This is the dataset of asbestos/アスベストス/石棉 (Arknights), containing 107 images and their tags.
The core tags of this character are `short_hair, multicolored_hair, green_eyes, hat, blue_hair, two-tone_hair, baseball_cap, eyepatch, hair_over_one_eye, headphones, black_hair, tail, lizard_tail, breasts, purple_hair, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 107 | 165.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asbestos_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 107 | 139.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asbestos_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 263 | 269.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asbestos_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
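The "IMG+TXT" packages in the table above pair each image with a tag file. A minimal sketch of reading such a package after extraction (the side-by-side `.txt`-per-image layout is an assumption based on the "IMG+TXT" type, a common convention for training data):

```python
# Assumption: each image in an extracted IMG+TXT package has a sibling .txt
# tag file sharing its file stem (e.g. 0001.png + 0001.txt).
import tempfile
from pathlib import Path

def pair_images_with_tags(dataset_dir):
    """Yield (image_path, tag_string) pairs from an extracted IMG+TXT package."""
    for img in sorted(Path(dataset_dir).iterdir()):
        if img.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        txt = img.with_suffix(".txt")
        tags = txt.read_text(encoding="utf-8").strip() if txt.exists() else ""
        yield img, tags

# Tiny self-contained demo with a fake extracted package:
demo = tempfile.mkdtemp()
Path(demo, "0001.png").write_bytes(b"")
Path(demo, "0001.txt").write_text("1girl, solo, yellow_coat", encoding="utf-8")
pairs = list(pair_images_with_tags(demo))
print(pairs[0][1])
# -> 1girl, solo, yellow_coat
```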
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/asbestos_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, looking_at_viewer, upper_body, black_shirt, sharp_teeth, solo, yellow_coat, yellow_jacket, simple_background, white_background, blue_tongue, infection_monitor_(arknights), open_mouth, long_sleeves, one_eye_covered, tongue_out |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | upper_body | black_shirt | sharp_teeth | solo | yellow_coat | yellow_jacket | simple_background | white_background | blue_tongue | infection_monitor_(arknights) | open_mouth | long_sleeves | one_eye_covered | tongue_out |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------------|:--------------|:--------------|:-------|:--------------|:----------------|:--------------------|:-------------------|:--------------|:--------------------------------|:-------------|:---------------|:------------------|:-------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
luotr123/sd | ---
license: apache-2.0
---
|
Circularmachines/batch_indexing_machine_100_small_imgs | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 14045789.0
num_examples: 100
download_size: 14047503
dataset_size: 14045789.0
---
# Dataset Card for "batch_indexing_machine_100_small_imgs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
piyushaaryan011/openorca | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 41315084.0412
num_examples: 22870
download_size: 27916299
dataset_size: 41315084.0412
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DavidLanz/medical_reward | ---
license: apache-2.0
language:
- zh
- en
tags:
- text-generation
pretty_name: medical
task_categories:
- text-generation
size_categories:
- 1M<n<10M
---
## Chinese Medical Dialogue Dataset
### Description
- **medical_reward_tw.json**: This dataset consists of 4,000 entries derived from the Chinese Medical Dialogue dataset (Toyhom/Chinese-medical-dialogue-data). The questions in this dataset are randomly selected from the Chinese Medical Dialogue dataset. The "response_chosen" field contains responses from medical professionals in the Chinese Medical Dialogue dataset, while the "response_rejected" field contains responses from the herbal medicine model SCIR-HI/Huatuo-Llama-Med-Chinese.
### Data Format
- **medical_reward_tw.json**: JSON format with fields including "question," "response_chosen," and "response_rejected."
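As a sketch of how this format is typically consumed (the record below is illustrative, not real data from the file), each entry can be flattened into a (prompt, chosen, rejected) tuple for reward-model training:

```python
import json

# Minimal illustrative record in the described schema (not real data):
raw = json.dumps([{
    "question": "What should I do about a mild fever?",
    "response_chosen": "Rest, hydrate, and monitor your temperature.",
    "response_rejected": "Take this herbal remedy immediately.",
}])

def to_preference_pairs(records):
    """Flatten medical_reward-style records into (prompt, chosen, rejected) tuples."""
    return [
        (r["question"], r["response_chosen"], r["response_rejected"])
        for r in records
    ]

pairs = to_preference_pairs(json.loads(raw))
print(pairs[0][0])
```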
### License
Please refer to the respective data sources for licensing information.
### Dataset Citation
If you use this dataset in your research or work, please consider citing the original data sources as specified above.
|
hojzas/setfit-proj8-multilabel_2_validation | ---
license: apache-2.0
---
|
CyberHarem/syrene_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of syrene (Fire Emblem)
This is the dataset of syrene (Fire Emblem), containing 12 images and their tags.
The core tags of this character are `green_eyes, green_hair, long_hair, headband, breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 12 | 10.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syrene_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 12 | 7.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syrene_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 27 | 15.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syrene_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 12 | 9.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syrene_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 27 | 18.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syrene_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/syrene_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, solo, breastplate, white_gloves, looking_at_viewer, simple_background, thighhighs, white_background, belt, blush, boots, nipples, shoulder_armor, smile, sword |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | breastplate | white_gloves | looking_at_viewer | simple_background | thighhighs | white_background | belt | blush | boots | nipples | shoulder_armor | smile | sword |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------|:---------------|:--------------------|:--------------------|:-------------|:-------------------|:-------|:--------|:--------|:----------|:-----------------|:--------|:--------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
sam1120/parking-utcustom-train | ---
dataset_info:
features:
- name: name
dtype: string
- name: pixel_values
dtype: image
- name: labels
dtype: image
splits:
- name: train
num_bytes: 49822561.0
num_examples: 18
download_size: 13869917
dataset_size: 49822561.0
---
# Dataset Card for "parking-utcustom-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hikam22/newprocessed_bert_dataset | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 8629178400.0
num_examples: 2396994
download_size: 2322333334
dataset_size: 8629178400.0
---
# Dataset Card for "newprocessed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_80_1713046323 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3192331
num_examples: 7799
download_size: 1594513
dataset_size: 3192331
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vigneshgs7/Boundary_detection_Doc_15 | ---
dataset_info:
features:
- name: name
dtype: string
- name: uuid
dtype: string
- name: status
dtype: string
- name: image
dtype: image
- name: label.annotations
list:
- name: id
dtype: int32
- name: category_id
dtype: int32
- name: label.segmentation_bitmap
dtype: image
splits:
- name: train
num_bytes: 32831386666.0
num_examples: 660
download_size: 2162040161
dataset_size: 32831386666.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mboth/luftBereitstellen-200-undersampled | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: Datatype
dtype: string
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Unit
dtype: string
- name: text
dtype: string
- name: Grundfunktion
dtype: string
- name: ZweiteGrundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
'0': AbluftAllgemein
'1': Abluftfilter
'2': Abluftklappe
'3': Abluftventilator
'4': Außenluftfilter
'5': Außenluftklappe
'6': Befeuchter
'7': Erhitzer
'8': Filter
'9': Fortluftklappe
'10': GerätAllgemein
'11': Kaeltemengenzaehler
'12': KlappenAllgemein
'13': Kühler
'14': Regler
'15': Umluft
'16': Ventilator
'17': Wärmemengenzähler
'18': Wärmerückgewinnung
'19': ZuluftAllgemein
'20': Zuluftfilter
'21': Zuluftklappe
'22': Zuluftventilator
splits:
- name: train
num_bytes: 594806.5793571349
num_examples: 2797
- name: test
num_bytes: 238179
num_examples: 1124
- name: valid
num_bytes: 238179
num_examples: 1124
download_size: 347666
dataset_size: 1071164.5793571349
---
# Dataset Card for "luftBereitstellen-200-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/luxurious_food_photography_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 116535
num_examples: 1000
download_size: 1927
dataset_size: 116535
---
# Dataset Card for "luxurious_food_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kgr123/quality_mcqa_2048 | ---
dataset_info:
features:
- name: context
dtype: string
- name: query
dtype: string
- name: option_0
dtype: string
- name: option_1
dtype: string
- name: option_2
dtype: string
- name: option_3
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 59735826
num_examples: 1732
- name: validation
num_bytes: 12686042
num_examples: 367
- name: test
num_bytes: 12774063
num_examples: 367
download_size: 10315738
dataset_size: 85195931
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
louisbrulenaudet/code-artisanat | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code de l'artisanat
source_datasets:
- original
pretty_name: Code de l'artisanat
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code de l'artisanat, non-instruct (2024-04-15)
This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for legal practice.
Fine-tuning is the process of adapting a pre-trained model to perform specific tasks or cater to particular domains. It involves adjusting the model's parameters through a further round of training on task-specific or domain-specific data. While conventional fine-tuning strategies involve supervised learning with labeled data, instruction-based fine-tuning introduces a more structured and interpretable approach.
Instruction-based fine-tuning leverages the power of human-provided instructions to guide the model's behavior. These instructions can be in the form of text prompts, prompts with explicit task descriptions, or a combination of both. This approach allows for a more controlled and context-aware interaction with the LLM, making it adaptable to a multitude of specialized tasks.
Instruction-based fine-tuning significantly enhances the performance of LLMs in the following ways:
- Task-Specific Adaptation: LLMs, when fine-tuned with specific instructions, exhibit remarkable adaptability to diverse tasks. They can switch seamlessly between translation, summarization, and question-answering, guided by the provided instructions.
- Reduced Ambiguity: Traditional LLMs might generate ambiguous or contextually inappropriate responses. Instruction-based fine-tuning allows for a clearer and more context-aware generation, reducing the likelihood of nonsensical outputs.
- Efficient Knowledge Transfer: Instructions can encapsulate domain-specific knowledge, enabling LLMs to benefit from expert guidance. This knowledge transfer is particularly valuable in fields like tax practice, law, medicine, and more.
- Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
- Adaptive Behavior: LLMs, post instruction-based fine-tuning, exhibit adaptive behavior that is responsive to both explicit task descriptions and implicit cues within the provided text.
## Concurrent reading of the LegalKit
To use all the legal data published on LegalKit, you can use this code snippet:
```python
# -*- coding: utf-8 -*-
import concurrent.futures
import logging
import os
import datasets
from tqdm.notebook import tqdm
def dataset_loader(
name:str,
streaming:bool=True
) -> datasets.Dataset:
"""
Helper function to load a single dataset in parallel.
Parameters
----------
name : str
Name of the dataset to be loaded.
streaming : bool, optional
Determines if datasets are streamed. Default is True.
Returns
-------
dataset : datasets.Dataset
Loaded dataset object.
Raises
------
Exception
If an error occurs during dataset loading.
"""
try:
return datasets.load_dataset(
name,
split="train",
streaming=streaming
)
except Exception as exc:
logging.error(f"Error loading dataset {name}: {exc}")
return None
def load_datasets(
req:list,
streaming:bool=True
) -> list:
"""
Downloads datasets specified in a list and creates a list of loaded datasets.
Parameters
----------
req : list
A list containing the names of datasets to be downloaded.
streaming : bool, optional
Determines if datasets are streamed. Default is True.
Returns
-------
datasets_list : list
A list containing loaded datasets as per the requested names provided in 'req'.
Raises
------
Exception
If an error occurs during dataset loading or processing.
Examples
--------
>>> datasets = load_datasets(["dataset1", "dataset2"], streaming=False)
"""
datasets_list = []
with concurrent.futures.ThreadPoolExecutor() as executor:
future_to_dataset = {executor.submit(dataset_loader, name): name for name in req}
for future in tqdm(concurrent.futures.as_completed(future_to_dataset), total=len(req)):
name = future_to_dataset[future]
try:
dataset = future.result()
if dataset:
datasets_list.append(dataset)
except Exception as exc:
logging.error(f"Error processing dataset {name}: {exc}")
return datasets_list
req = [
"louisbrulenaudet/code-artisanat",
"louisbrulenaudet/code-action-sociale-familles",
# ...
]
datasets_list = load_datasets(
req=req,
streaming=True
)
dataset = datasets.concatenate_datasets(
datasets_list
)
```
## Dataset generation
This JSON file is a list of dictionaries, each dictionary contains the following fields:
- `instruction`: `string`, presenting the instruction linked to the element.
- `input`: `string`, signifying the input details for the element.
- `output`: `string`, indicating the output information for the element.
- `start`: `string`, the date of entry into force of the article.
- `expiration`: `string`, the date of expiration of the article.
- `num`: `string`, the id of the article.
We used the following list of instructions for generating the dataset:
```python
instructions = [
"Compose l'intégralité de l'article sous forme écrite.",
"Écris la totalité du contenu de l'article.",
"Formule la totalité du texte présent dans l'article.",
"Produis l'intégralité de l'article en écriture.",
"Développe l'article dans son ensemble par écrit.",
"Génère l'ensemble du texte contenu dans l'article.",
"Formule le contenu intégral de l'article en entier.",
"Rédige la totalité du texte de l'article en entier.",
"Compose l'intégralité du contenu textuel de l'article.",
"Rédige l'ensemble du texte qui constitue l'article.",
"Formule l'article entier dans son contenu écrit.",
"Composez l'intégralité de l'article sous forme écrite.",
"Écrivez la totalité du contenu de l'article.",
"Formulez la totalité du texte présent dans l'article.",
"Développez l'article dans son ensemble par écrit.",
"Générez l'ensemble du texte contenu dans l'article.",
"Formulez le contenu intégral de l'article en entier.",
"Rédigez la totalité du texte de l'article en entier.",
"Composez l'intégralité du contenu textuel de l'article.",
"Écrivez l'article dans son intégralité en termes de texte.",
"Rédigez l'ensemble du texte qui constitue l'article.",
"Formulez l'article entier dans son contenu écrit.",
"Composer l'intégralité de l'article sous forme écrite.",
"Écrire la totalité du contenu de l'article.",
"Formuler la totalité du texte présent dans l'article.",
"Produire l'intégralité de l'article en écriture.",
"Développer l'article dans son ensemble par écrit.",
"Générer l'ensemble du texte contenu dans l'article.",
"Formuler le contenu intégral de l'article en entier.",
"Rédiger la totalité du texte de l'article en entier.",
"Composer l'intégralité du contenu textuel de l'article.",
"Rédiger l'ensemble du texte qui constitue l'article.",
"Formuler l'article entier dans son contenu écrit.",
"Quelles sont les dispositions de l'article ?",
"Quelles dispositions sont incluses dans l'article ?",
"Quelles sont les dispositions énoncées dans l'article ?",
"Quel est le texte intégral de l'article ?",
"Quelle est la lettre de l'article ?"
]
```
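Each record in the schema above pairs one of these instructions with an article. A minimal sketch of the assembly (the article text, `num`, and dates are placeholders, and leaving `input` empty is an assumption about the dataset's convention):

```python
import random

# Abridged instruction list (full list above)
instructions = [
    "Compose l'intégralité de l'article sous forme écrite.",
    "Écris la totalité du contenu de l'article.",
    "Quel est le texte intégral de l'article ?",
]

def make_record(article_text, num, start, expiration, seed=None):
    """Assemble one record in the dataset's schema with a randomly drawn instruction."""
    rng = random.Random(seed)
    return {
        "instruction": rng.choice(instructions),
        "input": "",              # assumption: the instruction alone carries the query
        "output": article_text,   # full text of the article
        "start": start,
        "expiration": expiration,
        "num": num,
    }

record = make_record("Texte de l'article...", num="exemple",
                     start="2024-04-15", expiration="2999-01-01", seed=0)
print(record["instruction"])
```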
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
scaredmeow/shopee-reviews-tl-binary | ---
license: odc-by
task_categories:
- text-classification
language:
- tl
tags:
- reviews
- shopee
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [Enhancement to Low Resource Text Classification via Sequential Transfer Learning](#)
- **Leaderboard:**
- **Point of Contact:** [Neil Riego](mailto:neilchristianriego3@gmail.com)
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A typical data point comprises a text and the corresponding label.
An example from the test set looks as follows:
```
{
'label': 1,
'text': 'Huyyy ang gandaaaaaaaaaaa. Grabe sobrang ganda talaga wala ako masabi. Complete orders pa pinadala sa akin. Buti hindi nabasag kahit walang bubble wrap. Okay na lang din para save mother earth and at least hindi nabasag hehe. Oorder ulit ako ang ganda eh'
}
```
### Data Fields
- 'text': The review texts are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes ("").
- 'label': Corresponds to the sentiment of the review (either positive or negative).
### Data Splits
The Shopee reviews tl binary dataset is constructed by randomly taking 14,000 training samples and 3,000 samples each for validation and testing per class (negative and positive).
In total there are 28,000 training samples and 6,000 samples each in the validation and test splits.
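The split arithmetic described above can be sketched with synthetic examples standing in for the real reviews (per-class counts follow the description; everything else is illustrative):

```python
import random

# Per-class sample counts described above
TRAIN_PER_CLASS, EVAL_PER_CLASS = 14000, 3000

def stratified_split(examples_by_class, seed=0):
    """Randomly draw 14000 train and 3000 validation/test examples per class."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for label, examples in examples_by_class.items():
        pool = examples[:]
        rng.shuffle(pool)
        train += [(x, label) for x in pool[:TRAIN_PER_CLASS]]
        val += [(x, label) for x in pool[TRAIN_PER_CLASS:TRAIN_PER_CLASS + EVAL_PER_CLASS]]
        test += [(x, label) for x in pool[TRAIN_PER_CLASS + EVAL_PER_CLASS:
                                          TRAIN_PER_CLASS + 2 * EVAL_PER_CLASS]]
    return train, val, test

# Synthetic stand-ins: 20,000 examples per class (0 = negative, 1 = positive)
data = {0: [f"neg-{i}" for i in range(20000)], 1: [f"pos-{i}" for i in range(20000)]}
train, val, test = stratified_split(data)
print(len(train), len(val), len(test))
```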
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-56000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 991868
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ameya-akkalkotkar/VideoGamesList | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 71509
num_examples: 99
download_size: 41508
dataset_size: 71509
---
# Dataset Card for "VideoGamesList"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFace-CN-community/translation | ---
license: apache-2.0
---
|
FastFit/dbpedia_70 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 156266768
num_examples: 240942
- name: validation
num_bytes: 23270885
num_examples: 36003
- name: test
num_bytes: 39434486
num_examples: 60794
download_size: 139608652
dataset_size: 218972139
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
dvilasuero/news-programmatic-labeling | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Business
'1': Sci/Tech
'2': Sports
'3': World
splits:
- name: train
num_bytes: 167543.4299287411
num_examples: 673
- name: test
num_bytes: 42072.57007125891
num_examples: 169
download_size: 146616
dataset_size: 209616.0
---
# Dataset Card for "news-programmatic-labeling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vitorbr2009/voz-treinada-marola | ---
license: openrail
---
|
Falah/2M_fantastic_creatures_SDXL_refiner_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1423844345
num_examples: 2000000
download_size: 192068725
dataset_size: 1423844345
---
# Dataset Card for "2M_fantastic_creatures_SDXL_refiner_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
smit-mehta/marvel-actors-faces | ---
license: odc-by
---
|
Jerry46/tldr-preference | ---
dataset_info:
features:
- name: text
dtype: string
- name: sum_0
dtype: string
- name: sum_1
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 78602956
num_examples: 50000
- name: test
num_bytes: 1594758
num_examples: 1000
download_size: 45620764
dataset_size: 80197714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "tldr-preference"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sxpatch/mydataset1 | ---
license: mit
---
|
chikino/luxuria | ---
license: openrail
---
|
mikegarts/oa_tell_a_joke_20000 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
struct:
- name: link
dtype: string
- name: nsfw
dtype: bool
splits:
- name: train
num_bytes: 11848430
num_examples: 20000
download_size: 6222319
dataset_size: 11848430
license: mit
language:
- en
---
# Dataset Card for "oa_tell_a_joke_20000"
This dataset is based on the SocialGrep/one-million-reddit-jokes dataset, and augmented using KeyBert to be used for the [Open Assistant project](https://github.com/LAION-AI/Open-Assistant).
Additional details of dataset creation are [here](https://github.com/mikegarts/Open-Assistant/blob/OA-261.tell_a_joke_dataset/data/datasets/tell_a_joke/tell_a_joke.ipynb)
# Data fields:
### INSTRUCTION - The instruction to the assistant
### RESPONSE - The response of the assistant
### SOURCE - source of the data
### METADATA - additional link, such as a link to the source webpage on reddit
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rosa/LoRA_Evangelion | ---
license: apache-2.0
language:
- en
- zh
tags:
- LoRA
pretty_name: eva
size_categories:
- 1K<n<10K
--- |
mirav/artistic-imagery | ---
pretty_name: Artistic Imagery
size_categories:
- 1K<n<10K
task_categories:
- text-to-image
---
#### Warning: Due to the nature of the source, certain images are very large.
A large number of artistic images, mostly (but not exclusively) sourced from Wikimedia Commons. <br>
Pull requests are allowed, and even encouraged. |
sfblaauw/cofrico-v3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 266372.6
num_examples: 3
- name: test
num_bytes: 122883.2
num_examples: 1
- name: validation
num_bytes: 42102.2
num_examples: 1
download_size: 425753
dataset_size: 431358.0
---
# Dataset Card for "cofrico-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |