| datasetId | card |
|---|---|
mangoesai/DepressionDetection-prompted | ---
dataset_info:
features:
- name: clean_text
dtype: string
- name: is_depression
dtype: int64
- name: instances
sequence: string
splits:
- name: train
num_bytes: 4664474
num_examples: 5411
- name: test
num_bytes: 1897494
num_examples: 2320
download_size: 3523750
dataset_size: 6561968
---
# Dataset Card for "DepressionDetection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eitanturok/toxic-pile-small | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: texts
sequence: string
- name: meta
struct:
- name: pile_set_name
dtype: string
- name: scores
sequence: float64
- name: avg_score
dtype: float64
- name: num_sents
dtype: int64
splits:
- name: train
num_bytes: 5791634
num_examples: 953
download_size: 3316229
dataset_size: 5791634
---
# Dataset Card for "toxic-pile-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-emotion-5a29f55d-11295506 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion
metrics: ['bertscore']
dataset_name: emotion
dataset_config: default
dataset_split: validation
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model. |
cj-mills/hagrid-sample-500k-384p | ---
license: cc-by-sa-4.0
task_categories:
- object-detection
language:
- en
size_categories:
- 100K<n<1M
pretty_name: HaGRID Sample 500k 384p
---
This dataset contains 509,323 images from [HaGRID](https://github.com/hukenovs/hagrid) (HAnd Gesture Recognition Image Dataset) downscaled to 384p. The original dataset is 716GB and contains 552,992 1080p images. I created this sample for a tutorial so readers can use the dataset in the free tiers of Google Colab and Kaggle Notebooks.
### Original Authors:
* [Alexander Kapitanov](https://www.linkedin.com/in/hukenovs)
* [Andrey Makhlyarchuk](https://www.linkedin.com/in/makhliarchuk)
* [Karina Kvanchiani](https://www.linkedin.com/in/kvanchiani)
### Original Dataset Links
* [GitHub](https://github.com/hukenovs/hagrid)
* [Kaggle Datasets Page](https://www.kaggle.com/datasets/kapitanov/hagrid)
### Object Classes
```text
['call',
'no_gesture',
'dislike',
'fist',
'four',
'like',
'mute',
'ok',
'one',
'palm',
'peace',
'peace_inverted',
'rock',
'stop',
'stop_inverted',
'three',
'three2',
'two_up',
'two_up_inverted']
```
### Annotations
* `bboxes`: `[top-left-X-position, top-left-Y-position, width, height]`
* Multiply `top-left-X-position` and `width` values by the image width and multiply `top-left-Y-position` and `height` values by the image height.
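The scaling rule above can be sketched as a small helper (the image size here is a hypothetical placeholder; plug in the actual dimensions of each image):

```python
def bbox_to_pixels(bbox, img_width, img_height):
    """Scale a normalized HaGRID bbox [x, y, w, h] to pixel coordinates."""
    x, y, w, h = bbox
    return [x * img_width, y * img_height, w * img_width, h * img_height]

# The sample annotation below, assuming a hypothetical 384x384 image:
print(bbox_to_pixels([0.23925175, 0.28595301, 0.25055143, 0.20777627], 384, 384))
```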
<div style="overflow-x: auto; overflow-y: auto">
<table>
<thead>
<tr style="text-align: right">
<th></th>
<th>00005c9c-3548-4a8f-9d0b-2dd4aff37fc9</th>
</tr>
</thead>
<tbody>
<tr>
<th>bboxes</th>
<td>[[0.23925175, 0.28595301, 0.25055143, 0.20777627]]</td>
</tr>
<tr>
<th>labels</th>
<td>[call]</td>
</tr>
<tr>
<th>leading_hand</th>
<td>right</td>
</tr>
<tr>
<th>leading_conf</th>
<td>1</td>
</tr>
<tr>
<th>user_id</th>
<td>5a389ffe1bed6660a59f4586c7d8fe2770785e5bf79b09334aa951f6f119c024</td>
</tr>
</tbody>
</table>
</div> |
Eididkd/Freddy | ---
license: openrail
---
|
LabHC/Moji_stratify | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: sa
dtype: int64
splits:
- name: train
num_bytes: 3843951
num_examples: 50000
- name: test
num_bytes: 154027
num_examples: 2000
- name: dev
num_bytes: 152283
num_examples: 2000
download_size: 2169706
dataset_size: 4150261
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
---
|
PriscilaRubim/Framingham | ---
task_categories:
- text-generation
tags:
- medical
pretty_name: Framingham
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
wim-uoc/sarscov | ---
license: mit
---
|
autoevaluate/autoeval-eval-squad_v2-squad_v2-e15d25-1483654271 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
tyzhu/squad_qa_context_v5_full | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4350151
num_examples: 2385
- name: validation
num_bytes: 570908
num_examples: 300
download_size: 0
dataset_size: 4921059
---
# Dataset Card for "squad_qa_context_v5_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/tripclick_train_head_dctr | ---
pretty_name: '`tripclick/train/head/dctr`'
viewer: false
source_datasets: ['irds/tripclick', 'irds/tripclick_train_head']
task_categories:
- text-retrieval
---
# Dataset Card for `tripclick/train/head/dctr`
The `tripclick/train/head/dctr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/head/dctr).
# Data
This dataset provides:
- `qrels`: (relevance assessments); count=128,420
- For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick)
- For `queries`, use [`irds/tripclick_train_head`](https://huggingface.co/datasets/irds/tripclick_train_head)
## Usage
```python
from datasets import load_dataset
qrels = load_dataset('irds/tripclick_train_head_dctr', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Rekabsaz2021TripClick,
title={TripClick: The Log Files of a Large Health Web Search Engine},
author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff},
year={2021},
booktitle={SIGIR}
}
```
|
quocanh34/synthesis_data_v1 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: 'null'
- name: sampling_rate
dtype: int64
- name: transcription
dtype: string
- name: idx
dtype: int64
splits:
- name: train
num_bytes: 1212488387
num_examples: 2594
download_size: 285097019
dataset_size: 1212488387
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "synthesis_data_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pontusnorman123/wild1000_swetest | ---
dataset_info:
features:
- name: id
dtype: int64
- name: words
sequence: string
- name: bboxes
sequence:
sequence: float64
- name: ner_tags
sequence:
class_label:
names:
'0': I-COMPANY
'1': I-DATE
'2': I-ADDRESS
'3': I-TOTAL
'4': I-TAX
'5': I-PRODUCT
'6': O
- name: image
dtype: image
splits:
- name: train
num_bytes: 708345971.0
num_examples: 1000
- name: test
num_bytes: 53446922.0
num_examples: 50
download_size: 760098731
dataset_size: 761792893.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Mrfine/Mrfine | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245921
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Seeker38/image_text_wikipedia_vi | ---
language:
- vi
pretty_name: Images and corresponding abstracts in Vietnamese Wikipedia
source_datasets:
- original
size_categories:
- 100K<n<1M
tags:
- wikipedia
- images
- text
- LM
dataset_info:
features:
- name: image
dtype: image
- name: title
dtype: string
- name: text
dtype: string
# splits:
# - name: train
---
# Dataset Card for image_text_wikipedia_vi
### Dataset Summary
Dataset Summary: Image-Text Wikipedia Abstracts (Vietnamese version) <br>
This dataset comprises nearly 380,000 pairs of images and corresponding textual abstracts extracted from Vietnamese Wikipedia articles. The dataset is designed to facilitate research and development in the field of multimodal learning, particularly in tasks that involve understanding and processing both textual and visual information.
Description:
- Total Images: 374748
- Total Textual Abstracts: 374748
Dataset Composition:
- Each entry in the dataset consists of an image along with the corresponding abstract text extracted from the introductory section of Vietnamese Wikipedia articles.<br>
- The images are diverse in content, ranging from objects and scenes to landmarks and people, providing a rich and varied set of visual information.
### Data Collection:
The dataset was curated by combining two methods:
- Extracting and filtering abstract text directly from the XML Wikimedia dump file.
- Scraping Vietnamese Wikipedia articles, focusing on the introductory paragraphs known as abstracts. These abstracts serve as concise summaries of the corresponding articles, providing context and key information related to the image.
### Intended Use:
Researchers and developers can utilize this dataset for various tasks such as:
- Multimodal learning: Training models to understand and generate descriptions for both images and text.
- Image captioning: Generating descriptive captions for images.
- Visual question answering (VQA): Developing models that can answer questions about visual content.
- Cross-modal retrieval: Matching images to their corresponding textual abstracts and vice versa.
### Data Preprocessing:
- Image Format: The images are provided in a standardized JPG format.
- Text Preprocessing: The textual abstracts have undergone basic preprocessing steps, such as removal of unnecessary brackets left over from the XML source, removal of unknown characters such as '\u00A0', removal of citation markers such as [1], [2], [3], ..., and removal of unnecessary empty lines inside each text.
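The preprocessing steps described above could be sketched as follows (an illustrative approximation, not the actual pipeline code):

```python
import re

def clean_abstract(text: str) -> str:
    """Illustrative cleanup mirroring the steps above; not the dataset's actual pipeline."""
    text = text.replace("\u00A0", " ")      # non-breaking spaces
    text = re.sub(r"\[\d+\]", "", text)     # citation markers such as [1], [2], [3]
    text = re.sub(r"\n\s*\n+", "\n", text)  # collapse unnecessary empty lines
    return text.strip()

print(clean_abstract("Hà Nội\u00A0là thủ đô[1] của Việt Nam.\n\n\nThành phố này nằm ở miền Bắc."))
```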
### Potential Challenges:
- Language Complexity: As abstracts are extracted from Wikipedia, the text might include complex vocabulary and diverse topics.
- Ambiguity: Some abstracts may contain ambiguous or figurative language, challenging comprehension.
- Image Quality: Variation in image quality and resolution may impact model performance.
- Text length imbalance: the longest text has a length of 8,903 characters whereas the shortest is 1. This imbalance can lead to high RAM usage when training models such as LSTMs.
### View dataset:
There are two ways to load the dataset:
<b>1. Use the `datasets` library instead of downloading the dataset locally</b>
```python
from datasets import load_dataset
dataset = load_dataset("Seeker38/image_text_wikipedia_vi", split="train")
```
##### You can use this <b>[Google Colab](https://colab.research.google.com/drive/1BOAEsiVXNGm__vhZ4v_oyqytweG3JTm_?usp=sharing)</b> notebook to see a short viewing demo.
<b>2. For a dataset that has been downloaded locally</b>
```python
import pandas as pd
from datasets import Dataset
parquet_file = 'articles_data.parquet'
df = pd.read_parquet(parquet_file)
# Convert the pandas DataFrame to a datasets.arrow_dataset.Dataset object
dataset = Dataset.from_pandas(df)
```
<b>To view an element's text</b>
```python
# Example: element number 3
dataset[3]["text"]
```
<b>If you use the second way, then to view (or even train on) an element's image, you need to include the conversion step</b>
```python
from PIL import Image
import io
# Example: element number 3
image_bytes = dataset[3]["image"]["bytes"]
# Convert bytes to Image
image = Image.open(io.BytesIO(image_bytes))
image_rgb = image.convert("RGB")  # some images raise "ValueError: Could not save to JPEG" for display without this conversion
image_rgb
```
<b>Otherwise</b>
```python
dataset[2]["image"]
``` |
autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845708 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: sysresearch101/t5-large-finetuned-xsum
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sysresearch101/t5-large-finetuned-xsum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model. |
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_35 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 594541920.0
num_examples: 116760
download_size: 607169123
dataset_size: 594541920.0
---
# Dataset Card for "chunk_35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Kokusho/vehicles_sales | ---
license: openrail
---
|
RyanZZZZZ/w5_train_all_input_bhc_sampled | ---
dataset_info:
features:
- name: input
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 15019637.435487388
num_examples: 1000
download_size: 7926552
dataset_size: 15019637.435487388
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Joe02/Nanahoshi_refs | ---
license: other
---
|
AdapterOcean/med_alpaca_standardized_cluster_91_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 25146272
num_examples: 35699
download_size: 12680105
dataset_size: 25146272
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_91_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
George6584/testing | ---
license: afl-3.0
---
|
one-sec-cv12/chunk_97 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 23263593984.0
num_examples: 242208
download_size: 21528015709
dataset_size: 23263593984.0
---
# Dataset Card for "chunk_97"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aertit/xglm_dataenth | ---
license: apache-2.0
size_categories:
- n<1K
--- |
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-1500 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 2833426184
num_examples: 500
download_size: 578660152
dataset_size: 2833426184
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Kelkin/article_embeddings | ---
license: mit
---
|
jbarat/plant_species | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aechmea_fasciata
'1': agave_americana
'2': agave_attenuata
'3': agave_tequilana
'4': aglaonema_commutatum
'5': albuca_spiralis
'6': allium_cepa
'7': allium_sativum
splits:
- name: train
num_bytes: 82083349.0
num_examples: 800
download_size: 82004194
dataset_size: 82083349.0
license: unknown
task_categories:
- image-classification
language:
- en
pretty_name: Plant Species
size_categories:
- 10K<n<100K
---
# Dataset Card for "plant_species"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MichiganNLP/levin_verbs | ---
license: other
language:
- en
pretty_name: Levin's Verb Classes
---
# Levin's Verb Classes
Levin's Verb Classes from [English Verb Classes And Alternations: A Preliminary Investigation](https://press.uchicago.edu/ucp/books/book/chicago/E/bo3684144.html), by [Beth Levin](https://web.stanford.edu/~bclevin/), published by [The University of Chicago Press](https://press.uchicago.edu/index.html), © 1993 by The University of Chicago. All rights reserved.
The contents of this repository were generated from the reverse indexes [1](https://websites.umich.edu/~jlawler/levin.verbs) and [2](https://websites.umich.edu/~jlawler/levin.verbs2.txt), generated by [John M. Lawler](https://websites.umich.edu/~jlawler/) from [the original index](https://websites.umich.edu/~jlawler/levin.html). They are provided to ease access to Levin's Verb Classes.
## License
The contents of this repository may be used and shared in accordance with the fair-use provisions of US copyright law, and they may be archived and redistributed in electronic form, provided that this entire notice is carried and provided that the University of Chicago Press is notified and no fee is charged for access. Archiving, redistribution, or republication of this text on other terms, in any medium, requires both the consent of the author and the University of Chicago Press.
Any work, published or unpublished, based in whole or in part on the use of this index should acknowledge English Verb Classes And Alternations: A Preliminary Investigation. Beth Levin would appreciate being informed of such work or other significant uses of the index. |
SGBTalha/SarinhaC | ---
license: openrail
---
|
trajanson/ralph-lauren-purple-label-polo-images | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 15882713.0
num_examples: 117
download_size: 15861077
dataset_size: 15882713.0
---
# Dataset Card for "ralph-lauren-purple-label-polo-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lamini/icd-11-qa | ---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
- text-generation
language:
- en
tags:
- medical
size_categories:
- 50k<n<100K
---
# Lamini ICD-11 QA Dataset
## Description
ICD-11 (International Classification of Diseases, 11th Revision) is the international standard for the systematic recording, reporting, analysis, interpretation, and comparison of mortality and morbidity data. These data and statistics support payment systems, service planning, administration of quality and safety, and health services research. Diagnostic guidance linked to ICD categories also standardizes data collection and enables large-scale research.
## Format
The questions and answers are stored as jsonlines files, with each JSON object in the file containing an entity object that gives metadata about the scraped datapoint.
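A minimal sketch for iterating over such a jsonlines file; the field names (`question`, `entity`) are assumptions based on the description above, not the dataset's documented schema:

```python
import json

def iter_records(path):
    """Yield one JSON object per non-empty line of a jsonlines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Hypothetical usage (file name and keys are placeholders):
# for record in iter_records("icd11_qa.jsonl"):
#     print(record.get("question"), record.get("entity"))
```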
## Data Pipeline Code
The entire data pipeline used to create this dataset is open source at: [https://github.com/lamini-ai/lamini-sdk](https://github.com/lamini-ai/lamini-sdk/blob/greg.cpt-gpt/04_IFT/generate_data_pipeline.py)
It can be used to reproduce this dataset or to add new ICD codes to it.
## License
The dataset is released under the CC-BY license.
## Citation
If you use this dataset in your research, please cite us: lamini.ai
## Contributing
If you would like to contribute to this dataset, please submit a pull request with your changes. |
communityai/Open-Orca___1million-gpt-4-50k | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 92733280.61425516
num_examples: 50000
download_size: 48942788
dataset_size: 92733280.61425516
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
euclaise/DirtyWritingPrompts | ---
dataset_info:
features:
- name: post_title
dtype: string
- name: body
dtype: string
- name: score
dtype: int64
- name: gilded
dtype: int64
- name: post_score
dtype: int64
splits:
- name: train
num_bytes: 36315869
num_examples: 27921
download_size: 18528856
dataset_size: 36315869
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc0-1.0
tags:
- not-for-all-audiences
---
# Dataset Card for "DirtyWritingPrompts"
Data collected from r/DirtyWritingPrompts, up to 12-2022, from PushShift. |
open-llm-leaderboard/details_robinsmits__Qwen1.5-7B-Dutch-Chat-Sft | ---
pretty_name: Evaluation run of robinsmits/Qwen1.5-7B-Dutch-Chat-Sft
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [robinsmits/Qwen1.5-7B-Dutch-Chat-Sft](https://huggingface.co/robinsmits/Qwen1.5-7B-Dutch-Chat-Sft)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_robinsmits__Qwen1.5-7B-Dutch-Chat-Sft\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-29T20:04:27.887464](https://huggingface.co/datasets/open-llm-leaderboard/details_robinsmits__Qwen1.5-7B-Dutch-Chat-Sft/blob/main/results_2024-03-29T20-04-27.887464.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5978454719542368,\n\
\ \"acc_stderr\": 0.03318702690067964,\n \"acc_norm\": 0.6052027087641982,\n\
\ \"acc_norm_stderr\": 0.03386918430024484,\n \"mc1\": 0.2839657282741738,\n\
\ \"mc1_stderr\": 0.01578537085839673,\n \"mc2\": 0.4389025272424357,\n\
\ \"mc2_stderr\": 0.014718286096688073\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.4803754266211604,\n \"acc_stderr\": 0.014600132075947087,\n\
\ \"acc_norm\": 0.5068259385665529,\n \"acc_norm_stderr\": 0.014610029151379812\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5421230830511851,\n\
\ \"acc_stderr\": 0.004972042602001382,\n \"acc_norm\": 0.7349133638717387,\n\
\ \"acc_norm_stderr\": 0.004404772735765973\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5259259259259259,\n\
\ \"acc_stderr\": 0.04313531696750575,\n \"acc_norm\": 0.5259259259259259,\n\
\ \"acc_norm_stderr\": 0.04313531696750575\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n\
\ \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.67,\n\
\ \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\": 0.67,\n \
\ \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6566037735849056,\n \"acc_stderr\": 0.02922452646912479,\n\
\ \"acc_norm\": 0.6566037735849056,\n \"acc_norm_stderr\": 0.02922452646912479\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6944444444444444,\n\
\ \"acc_stderr\": 0.03852084696008534,\n \"acc_norm\": 0.6944444444444444,\n\
\ \"acc_norm_stderr\": 0.03852084696008534\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.05021167315686779,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.05021167315686779\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n\
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5780346820809249,\n\
\ \"acc_stderr\": 0.037657466938651504,\n \"acc_norm\": 0.5780346820809249,\n\
\ \"acc_norm_stderr\": 0.037657466938651504\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n\
\ \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.03265019475033582,\n\
\ \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.03265019475033582\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n\
\ \"acc_stderr\": 0.04657047260594962,\n \"acc_norm\": 0.4298245614035088,\n\
\ \"acc_norm_stderr\": 0.04657047260594962\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6068965517241379,\n \"acc_stderr\": 0.040703290137070705,\n\
\ \"acc_norm\": 0.6068965517241379,\n \"acc_norm_stderr\": 0.040703290137070705\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.47354497354497355,\n \"acc_stderr\": 0.025715239811346755,\n \"\
acc_norm\": 0.47354497354497355,\n \"acc_norm_stderr\": 0.025715239811346755\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n\
\ \"acc_stderr\": 0.04403438954768176,\n \"acc_norm\": 0.4126984126984127,\n\
\ \"acc_norm_stderr\": 0.04403438954768176\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7096774193548387,\n\
\ \"acc_stderr\": 0.025822106119415888,\n \"acc_norm\": 0.7096774193548387,\n\
\ \"acc_norm_stderr\": 0.025822106119415888\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.541871921182266,\n \"acc_stderr\": 0.03505630140785742,\n\
\ \"acc_norm\": 0.541871921182266,\n \"acc_norm_stderr\": 0.03505630140785742\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\"\
: 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.02985751567338642,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.02985751567338642\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8031088082901554,\n \"acc_stderr\": 0.028697873971860667,\n\
\ \"acc_norm\": 0.8031088082901554,\n \"acc_norm_stderr\": 0.028697873971860667\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5871794871794872,\n \"acc_stderr\": 0.024962683564331796,\n\
\ \"acc_norm\": 0.5871794871794872,\n \"acc_norm_stderr\": 0.024962683564331796\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3111111111111111,\n \"acc_stderr\": 0.02822644674968352,\n \
\ \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.02822644674968352\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6134453781512605,\n \"acc_stderr\": 0.031631458075523776,\n\
\ \"acc_norm\": 0.6134453781512605,\n \"acc_norm_stderr\": 0.031631458075523776\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.37748344370860926,\n \"acc_stderr\": 0.0395802723112157,\n \"\
acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.0395802723112157\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7908256880733945,\n \"acc_stderr\": 0.01743793717334323,\n \"\
acc_norm\": 0.7908256880733945,\n \"acc_norm_stderr\": 0.01743793717334323\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49537037037037035,\n \"acc_stderr\": 0.03409825519163572,\n \"\
acc_norm\": 0.49537037037037035,\n \"acc_norm_stderr\": 0.03409825519163572\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7647058823529411,\n \"acc_stderr\": 0.029771775228145635,\n \"\
acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.029771775228145635\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159267,\n \
\ \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159267\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6367713004484304,\n\
\ \"acc_stderr\": 0.032277904428505,\n \"acc_norm\": 0.6367713004484304,\n\
\ \"acc_norm_stderr\": 0.032277904428505\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n\
\ \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.03749492448709698,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.03749492448709698\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.656441717791411,\n \"acc_stderr\": 0.037311335196738925,\n\
\ \"acc_norm\": 0.656441717791411,\n \"acc_norm_stderr\": 0.037311335196738925\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.36607142857142855,\n\
\ \"acc_stderr\": 0.0457237235873743,\n \"acc_norm\": 0.36607142857142855,\n\
\ \"acc_norm_stderr\": 0.0457237235873743\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822584,\n\
\ \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822584\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n\
\ \"acc_stderr\": 0.02308663508684141,\n \"acc_norm\": 0.8547008547008547,\n\
\ \"acc_norm_stderr\": 0.02308663508684141\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.046482319871173156,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.046482319871173156\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7535121328224776,\n\
\ \"acc_stderr\": 0.015411308769686933,\n \"acc_norm\": 0.7535121328224776,\n\
\ \"acc_norm_stderr\": 0.015411308769686933\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.02530525813187972,\n\
\ \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.02530525813187972\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.33743016759776534,\n\
\ \"acc_stderr\": 0.015813901283913048,\n \"acc_norm\": 0.33743016759776534,\n\
\ \"acc_norm_stderr\": 0.015813901283913048\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7026143790849673,\n \"acc_stderr\": 0.02617390850671858,\n\
\ \"acc_norm\": 0.7026143790849673,\n \"acc_norm_stderr\": 0.02617390850671858\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n\
\ \"acc_stderr\": 0.026981478043648033,\n \"acc_norm\": 0.6559485530546624,\n\
\ \"acc_norm_stderr\": 0.026981478043648033\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.02712511551316686,\n\
\ \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.02712511551316686\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.40070921985815605,\n \"acc_stderr\": 0.029233465745573086,\n \
\ \"acc_norm\": 0.40070921985815605,\n \"acc_norm_stderr\": 0.029233465745573086\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4256844850065189,\n\
\ \"acc_stderr\": 0.012628393551811945,\n \"acc_norm\": 0.4256844850065189,\n\
\ \"acc_norm_stderr\": 0.012628393551811945\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5257352941176471,\n \"acc_stderr\": 0.03033257809455502,\n\
\ \"acc_norm\": 0.5257352941176471,\n \"acc_norm_stderr\": 0.03033257809455502\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5800653594771242,\n \"acc_stderr\": 0.01996681117825648,\n \
\ \"acc_norm\": 0.5800653594771242,\n \"acc_norm_stderr\": 0.01996681117825648\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n\
\ \"acc_stderr\": 0.04653429807913507,\n \"acc_norm\": 0.6181818181818182,\n\
\ \"acc_norm_stderr\": 0.04653429807913507\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.029162738410249776,\n\
\ \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.029162738410249776\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7810945273631841,\n\
\ \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n\
\ \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5,\n \
\ \"acc_stderr\": 0.03892494720807614,\n \"acc_norm\": 0.5,\n \"\
acc_norm_stderr\": 0.03892494720807614\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7543859649122807,\n \"acc_stderr\": 0.03301405946987251,\n\
\ \"acc_norm\": 0.7543859649122807,\n \"acc_norm_stderr\": 0.03301405946987251\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2839657282741738,\n\
\ \"mc1_stderr\": 0.01578537085839673,\n \"mc2\": 0.4389025272424357,\n\
\ \"mc2_stderr\": 0.014718286096688073\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6874506708760852,\n \"acc_stderr\": 0.013027563620748842\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2934040940106141,\n \
\ \"acc_stderr\": 0.01254183081546149\n }\n}\n```"
repo_url: https://huggingface.co/robinsmits/Qwen1.5-7B-Dutch-Chat-Sft
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|arc:challenge|25_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|gsm8k|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hellaswag|10_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T20-04-27.887464.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T20-04-27.887464.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- '**/details_harness|winogrande|5_2024-03-29T20-04-27.887464.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-29T20-04-27.887464.parquet'
- config_name: results
data_files:
- split: 2024_03_29T20_04_27.887464
path:
- results_2024-03-29T20-04-27.887464.parquet
- split: latest
path:
- results_2024-03-29T20-04-27.887464.parquet
---
# Dataset Card for Evaluation run of robinsmits/Qwen1.5-7B-Dutch-Chat-Sft
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [robinsmits/Qwen1.5-7B-Dutch-Chat-Sft](https://huggingface.co/robinsmits/Qwen1.5-7B-Dutch-Chat-Sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_robinsmits__Qwen1.5-7B-Dutch-Chat-Sft",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-29T20:04:27.887464](https://huggingface.co/datasets/open-llm-leaderboard/details_robinsmits__Qwen1.5-7B-Dutch-Chat-Sft/blob/main/results_2024-03-29T20-04-27.887464.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5978454719542368,
"acc_stderr": 0.03318702690067964,
"acc_norm": 0.6052027087641982,
"acc_norm_stderr": 0.03386918430024484,
"mc1": 0.2839657282741738,
"mc1_stderr": 0.01578537085839673,
"mc2": 0.4389025272424357,
"mc2_stderr": 0.014718286096688073
},
"harness|arc:challenge|25": {
"acc": 0.4803754266211604,
"acc_stderr": 0.014600132075947087,
"acc_norm": 0.5068259385665529,
"acc_norm_stderr": 0.014610029151379812
},
"harness|hellaswag|10": {
"acc": 0.5421230830511851,
"acc_stderr": 0.004972042602001382,
"acc_norm": 0.7349133638717387,
"acc_norm_stderr": 0.004404772735765973
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5259259259259259,
"acc_stderr": 0.04313531696750575,
"acc_norm": 0.5259259259259259,
"acc_norm_stderr": 0.04313531696750575
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6566037735849056,
"acc_stderr": 0.02922452646912479,
"acc_norm": 0.6566037735849056,
"acc_norm_stderr": 0.02922452646912479
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.03852084696008534,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.03852084696008534
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.05021167315686779,
"acc_norm": 0.48,
"acc_norm_stderr": 0.05021167315686779
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5780346820809249,
"acc_stderr": 0.037657466938651504,
"acc_norm": 0.5780346820809249,
"acc_norm_stderr": 0.037657466938651504
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082635,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082635
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.03265019475033582,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.03265019475033582
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.04657047260594962,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.04657047260594962
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6068965517241379,
"acc_stderr": 0.040703290137070705,
"acc_norm": 0.6068965517241379,
"acc_norm_stderr": 0.040703290137070705
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47354497354497355,
"acc_stderr": 0.025715239811346755,
"acc_norm": 0.47354497354497355,
"acc_norm_stderr": 0.025715239811346755
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.04403438954768176,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.04403438954768176
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7096774193548387,
"acc_stderr": 0.025822106119415888,
"acc_norm": 0.7096774193548387,
"acc_norm_stderr": 0.025822106119415888
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.541871921182266,
"acc_stderr": 0.03505630140785742,
"acc_norm": 0.541871921182266,
"acc_norm_stderr": 0.03505630140785742
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.02985751567338642,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.02985751567338642
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8031088082901554,
"acc_stderr": 0.028697873971860667,
"acc_norm": 0.8031088082901554,
"acc_norm_stderr": 0.028697873971860667
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5871794871794872,
"acc_stderr": 0.024962683564331796,
"acc_norm": 0.5871794871794872,
"acc_norm_stderr": 0.024962683564331796
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3111111111111111,
"acc_stderr": 0.02822644674968352,
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.02822644674968352
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6134453781512605,
"acc_stderr": 0.031631458075523776,
"acc_norm": 0.6134453781512605,
"acc_norm_stderr": 0.031631458075523776
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.0395802723112157,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.0395802723112157
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7908256880733945,
"acc_stderr": 0.01743793717334323,
"acc_norm": 0.7908256880733945,
"acc_norm_stderr": 0.01743793717334323
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49537037037037035,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.49537037037037035,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.029771775228145635,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.029771775228145635
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159267,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159267
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6367713004484304,
"acc_stderr": 0.032277904428505,
"acc_norm": 0.6367713004484304,
"acc_norm_stderr": 0.032277904428505
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7175572519083969,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.7175572519083969,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.03749492448709698,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.03749492448709698
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.656441717791411,
"acc_stderr": 0.037311335196738925,
"acc_norm": 0.656441717791411,
"acc_norm_stderr": 0.037311335196738925
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.36607142857142855,
"acc_stderr": 0.0457237235873743,
"acc_norm": 0.36607142857142855,
"acc_norm_stderr": 0.0457237235873743
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.02308663508684141,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.02308663508684141
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.046482319871173156,
"acc_norm": 0.69,
"acc_norm_stderr": 0.046482319871173156
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7535121328224776,
"acc_stderr": 0.015411308769686933,
"acc_norm": 0.7535121328224776,
"acc_norm_stderr": 0.015411308769686933
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.02530525813187972,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.02530525813187972
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.33743016759776534,
"acc_stderr": 0.015813901283913048,
"acc_norm": 0.33743016759776534,
"acc_norm_stderr": 0.015813901283913048
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7026143790849673,
"acc_stderr": 0.02617390850671858,
"acc_norm": 0.7026143790849673,
"acc_norm_stderr": 0.02617390850671858
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6559485530546624,
"acc_stderr": 0.026981478043648033,
"acc_norm": 0.6559485530546624,
"acc_norm_stderr": 0.026981478043648033
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.02712511551316686,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.02712511551316686
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.40070921985815605,
"acc_stderr": 0.029233465745573086,
"acc_norm": 0.40070921985815605,
"acc_norm_stderr": 0.029233465745573086
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4256844850065189,
"acc_stderr": 0.012628393551811945,
"acc_norm": 0.4256844850065189,
"acc_norm_stderr": 0.012628393551811945
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5257352941176471,
"acc_stderr": 0.03033257809455502,
"acc_norm": 0.5257352941176471,
"acc_norm_stderr": 0.03033257809455502
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5800653594771242,
"acc_stderr": 0.01996681117825648,
"acc_norm": 0.5800653594771242,
"acc_norm_stderr": 0.01996681117825648
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.04653429807913507,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.04653429807913507
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.029162738410249776,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.029162738410249776
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7810945273631841,
"acc_stderr": 0.029239174636647,
"acc_norm": 0.7810945273631841,
"acc_norm_stderr": 0.029239174636647
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5,
"acc_stderr": 0.03892494720807614,
"acc_norm": 0.5,
"acc_norm_stderr": 0.03892494720807614
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7543859649122807,
"acc_stderr": 0.03301405946987251,
"acc_norm": 0.7543859649122807,
"acc_norm_stderr": 0.03301405946987251
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2839657282741738,
"mc1_stderr": 0.01578537085839673,
"mc2": 0.4389025272424357,
"mc2_stderr": 0.014718286096688073
},
"harness|winogrande|5": {
"acc": 0.6874506708760852,
"acc_stderr": 0.013027563620748842
},
"harness|gsm8k|5": {
"acc": 0.2934040940106141,
"acc_stderr": 0.01254183081546149
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
guo1109/codehaha | ---
license: mit
---
This is the first dataset I am testing; I will add a small amount of data to it to see how things work.
|
khadivi/mail-dataset | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 194508
num_examples: 286
download_size: 54317
dataset_size: 194508
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LolorzoloL/crypto_news | ---
license: apache-2.0
---
|
chathuranga-jayanath/defects4j-context-5-len-10000-prompt-3 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: filepath
dtype: string
- name: start_bug_line
dtype: int64
- name: end_bug_line
dtype: int64
- name: bug
dtype: string
- name: fix
dtype: string
- name: ctx
dtype: string
splits:
- name: train
num_bytes: 93696146
num_examples: 115468
- name: validation
num_bytes: 11701645
num_examples: 14433
- name: test
num_bytes: 11753835
num_examples: 14433
download_size: 41082332
dataset_size: 117151626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
one-sec-cv12/chunk_251 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 15383239776.75
num_examples: 160162
download_size: 13700452529
dataset_size: 15383239776.75
---
# Dataset Card for "chunk_251"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tessiw/german_OpenOrca12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 419828001
num_examples: 250000
download_size: 241049581
dataset_size: 419828001
---
# Dataset Card for "german_OpenOrca12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sproos/summeval-sw | ---
dataset_info:
features:
- name: machine_summaries
sequence: string
- name: human_summaries
sequence: string
- name: relevance
sequence: float64
- name: coherence
sequence: float64
- name: fluency
sequence: float64
- name: consistency
sequence: float64
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1172881
num_examples: 100
download_size: 484750
dataset_size: 1172881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "summeval-sw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NajmoAden/embedd_tutorial | ---
license: mit
---
|
rehanbrr/toolkit_dataset | ---
dataset_info:
features:
- name: doi
dtype: string
- name: chunk_id
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: chunk
dtype: string
splits:
- name: train
num_bytes: 139684
num_examples: 80
download_size: 74668
dataset_size: 139684
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "toolkit_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lordsymbol/lord | ---
license: openrail
---
|
VuongQuoc/60k_dataset_multichoice_384 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: token_type_ids
sequence:
sequence: int8
- name: attention_mask
sequence:
sequence: int8
- name: label
dtype: int64
splits:
- name: train
num_bytes: 695952828
num_examples: 60000
- name: test
num_bytes: 2320000
num_examples: 200
download_size: 71338055
dataset_size: 698272828
---
# Dataset Card for "60k_dataset_multichoice_384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
guyhadad01/manipulations3 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 25598
num_examples: 247
- name: test
num_bytes: 7495
num_examples: 62
download_size: 21019
dataset_size: 33093
---
# Dataset Card for "manipulations3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ramachetan22/sql-create-context-v2 | ---
license: cc-by-sa-3.0
---
# sql-create-context-v2 Dataset
## Overview
The `sql-create-context-v2` dataset enhances the original dataset built from WikiSQL and Spider, focusing on text-to-SQL tasks with a special emphasis on reducing hallucination of column and table names. This version introduces a JSONL format for more efficient data processing and iteration, alongside a structured approach to representing SQL queries in the dataset entries.
### Key Enhancements
- **Dataset Format:** Transitioned to JSON Lines (JSONL) format for improved handling of large datasets and streamlined processing of individual records.
- **Structured Query Representation:** Each SQL query answer is now encapsulated within an object keyed by `SQL_Query`, facilitating clearer separation between the query text and other metadata.
## Sample Entries
```json
{
"question": "Please show the themes of competitions with host cities having populations larger than 1000.",
"context": "CREATE TABLE city (City_ID VARCHAR, Population INTEGER); CREATE TABLE farm_competition (Theme VARCHAR, Host_city_ID VARCHAR)",
"answer": {"SQL_Query": "SELECT T2.Theme FROM city AS T1 JOIN farm_competition AS T2 ON T1.City_ID = T2.Host_city_ID WHERE T1.Population > 1000"}
},
{
"question": "Please show the different statuses of cities and the average population of cities with each status.",
"context": "CREATE TABLE city (Status VARCHAR, Population INTEGER)",
"answer": {"SQL_Query": "SELECT Status, AVG(Population) FROM city GROUP BY Status"}
}
```
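Because each record is a standalone JSON object on its own line, the JSONL file can be streamed and parsed record by record without loading everything into memory. A minimal Python sketch of this (the inline sample is a shortened, illustrative entry; the field names match the sample entries above):

```python
import io
import json

# One JSON object per line (JSONL); this inline sample mirrors the
# structure of the entries shown above, with shortened values.
sample = io.StringIO(
    '{"question": "Show the statuses of cities.", '
    '"context": "CREATE TABLE city (Status VARCHAR)", '
    '"answer": {"SQL_Query": "SELECT Status FROM city"}}\n'
)

def read_entries(stream):
    """Yield one parsed record per non-empty line."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

entries = list(read_entries(sample))
# The query text is nested under the "SQL_Query" key of "answer".
sql = entries[0]["answer"]["SQL_Query"]
```

The same iteration pattern works with an open file handle in place of the `io.StringIO` stream.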
## Citing this Work
If you use the `sql-create-context-v2` dataset, please cite the following in addition to the original works:
```bibtex
@misc{sql-create-context-v2_2024,
title = {sql-create-context-v2 Dataset},
  author = {Rama Chetan Atmudi},
year = {2024},
url = {https://huggingface.co/datasets/ramachetan22/sql-create-context-v2},
note = {Enhancements and modifications to the original sql-create-context dataset for improved usability and processing.}
}
```
## Datasets Used to Create This Dataset
```bibtex
@misc{b-mc2_2023_sql-create-context,
title = {sql-create-context Dataset},
author = {b-mc2},
year = {2023},
url = {https://huggingface.co/datasets/b-mc2/sql-create-context},
note = {This dataset was created by modifying data from the following sources: \cite{zhongSeq2SQL2017, yu2018spider}.},
}
```
```bibtex
@article{zhongSeq2SQL2017,
author = {Victor Zhong and Caiming Xiong and Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017}
}
```
```bibtex
@article{yu2018spider,
title = {Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author = {Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal = {arXiv preprint arXiv:1809.08887},
year = {2018}
}
``` |
Langelaw/4dgs | ---
license: mit
---
|
CyberHarem/may_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of may/メイ/梅 (Arknights)
This is the dataset of may/メイ/梅 (Arknights), containing 41 images and their tags.
The core tags of this character are `braid, twin_braids, glasses, long_hair, red-framed_eyewear, blue_eyes, hair_ornament, semi-rimless_eyewear, pink_hair, green_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 41 | 72.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/may_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 41 | 60.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/may_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 102 | 117.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/may_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
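The IMG+TXT packages pair each image with a same-named `.txt` file of tags. A minimal sketch of iterating such pairs after extracting one of the zips (the exact file layout and extensions are assumptions based on the package type):

```python
import os

def iter_img_txt(dataset_dir):
    """Yield (image_path, tags) pairs from an extracted IMG+TXT package."""
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() in {".png", ".jpg", ".jpeg", ".webp"}:
            txt_path = os.path.join(dataset_dir, stem + ".txt")
            tags = None
            if os.path.exists(txt_path):
                # The sibling .txt file holds the comma-separated tag list.
                with open(txt_path, encoding="utf-8") as f:
                    tags = f.read().strip()
            yield os.path.join(dataset_dir, name), tags
```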
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/may_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, collared_shirt, solo, white_shirt, bird, simple_background, sweater_vest, black_necktie, hair_bobbles, long_sleeves, looking_at_viewer, upper_body, open_mouth, topknot, white_background, brown_vest, closed_mouth, gloves, over-rim_eyewear, smile |
| 1 | 11 |  |  |  |  |  | 1girl, collared_shirt, solo, white_shirt, aqua_shorts, simple_background, full_body, handgun, holding_gun, open_mouth, blue_gloves, jacket_around_waist, looking_at_viewer, shoes, single_thighhigh, smile, white_background, green_gloves, hair_bobbles, one_eye_closed, orange_socks, short_shorts, aqua_gloves, black_footwear, blue_shorts, brown_footwear, brown_vest, thigh_strap, aiming, black_necktie, black_vest, boots, hair_between_eyes, kneehighs, short_sleeves, sweater_vest, teeth, thigh_pouch, uneven_legwear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | collared_shirt | solo | white_shirt | bird | simple_background | sweater_vest | black_necktie | hair_bobbles | long_sleeves | looking_at_viewer | upper_body | open_mouth | topknot | white_background | brown_vest | closed_mouth | gloves | over-rim_eyewear | smile | aqua_shorts | full_body | handgun | holding_gun | blue_gloves | jacket_around_waist | shoes | single_thighhigh | green_gloves | one_eye_closed | orange_socks | short_shorts | aqua_gloves | black_footwear | blue_shorts | brown_footwear | thigh_strap | aiming | black_vest | boots | hair_between_eyes | kneehighs | short_sleeves | teeth | thigh_pouch | uneven_legwear |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-------|:--------------|:-------|:--------------------|:---------------|:----------------|:---------------|:---------------|:--------------------|:-------------|:-------------|:----------|:-------------------|:-------------|:---------------|:---------|:-------------------|:--------|:--------------|:------------|:----------|:--------------|:--------------|:----------------------|:--------|:-------------------|:---------------|:-----------------|:---------------|:---------------|:--------------|:-----------------|:--------------|:-----------------|:--------------|:---------|:-------------|:--------|:--------------------|:------------|:----------------|:--------|:--------------|:-----------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | X | | X | | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
severo/dataset-formats | ---
license: apache-2.0
---
|
Moragbe/Knowgenre | ---
license: mit
---
|
el2e10/aya-paraphrase-bengali | ---
language:
- bn
license: cc
size_categories:
- n<1K
source_datasets:
- extended|ai4bharat/IndicXParaphrase
task_categories:
- text-generation
pretty_name: Aya Paraphrase Bengali
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: template_lang
dtype: string
- name: template_id
dtype: int64
splits:
- name: train
num_bytes: 625479
num_examples: 1001
download_size: 224004
dataset_size: 625479
---
### Description
This dataset is derived from an existing dataset made by AI4Bharat: we used their [IndicXParaphrase](https://huggingface.co/datasets/ai4bharat/IndicXParaphrase) dataset to create this instruction-style dataset.
We used the Bengali split of the above-mentioned dataset to create this one. This dataset was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI.
IndicXParaphrase is a multilingual, n-way parallel dataset for paraphrase detection in 10 Indic languages. The original dataset (IndicXParaphrase) was made available under the CC0 license.
### Template
The following templates (in Bengali) were used for converting the original dataset:
```
#Template 1
prompt:
ভিন্ন শব্দগুচ্ছ ব্যবহার করে নিচের বাক্যটি লেখ: "{original_sentence}"
completion:
{paraphrased_sentence}
```
```
#Template 2
prompt:
নিচের বাক্যটি ভিন্নভাবে লেখ: "{original_sentence}"
completion:
{paraphrased_sentence}
```
```
#Template 3
prompt:
অর্থের পরিবর্তন না করে নিচের বাক্যটি নতুনভাবে লেখ: "{original_sentence}"
completion:
{paraphrased_sentence}
```
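The conversion itself is plain string substitution; a hedged sketch of applying the templates above (the helper name is illustrative, but the `inputs`/`targets`/`template_id` fields match the dataset schema):

```python
# Illustrative sketch of building instruction-style examples from the
# Bengali templates above; the helper itself is an assumption, not the
# dataset's actual processing code.
TEMPLATES = [
    'ভিন্ন শব্দগুচ্ছ ব্যবহার করে নিচের বাক্যটি লেখ: "{original_sentence}"',
    'নিচের বাক্যটি ভিন্নভাবে লেখ: "{original_sentence}"',
    'অর্থের পরিবর্তন না করে নিচের বাক্যটি নতুনভাবে লেখ: "{original_sentence}"',
]

def to_example(original_sentence, paraphrased_sentence, template_id=0):
    # Fill the chosen template with the original sentence; the paraphrase
    # becomes the generation target.
    prompt = TEMPLATES[template_id].format(original_sentence=original_sentence)
    return {"inputs": prompt, "targets": paraphrased_sentence, "template_id": template_id}

ex = to_example("মূল বাক্য", "রূপান্তরিত বাক্য", template_id=1)
```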
### Acknowledgement
Thank you, Tahmid Hossain, for helping with the preparation of this dataset by providing the Bengali translation of the above-mentioned English prompts.
hsseinmz/realhumaneval | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- code
- chat
- copilot
- python
- programming
pretty_name: RealHumanEval
size_categories:
- 1K<n<10K
configs:
- config_name: chat
data_files: "chat/chat_data.csv"
- config_name: autocomplete
data_files: "autocomplete/autocomplete_data.csv"
- config_name: study
data_files: "study/study_data.csv"
- config_name: tasks
data_files: "tasks/tasks_data.csv"
---
# RealHumanEval
This dataset contains participant logs from the RealHumanEval study ([paper](https://arxiv.org/abs/2404.02806)). The RealHumanEval study was conducted to measure the ability of different LLMs to support programmers in their tasks.
We developed an online web app in which users interacted with one of six different LLMs integrated into an editor through either autocomplete support, akin to GitHub Copilot, or chat support, akin to ChatGPT, in addition to a condition with no LLM assistance.
We measure user performance in terms of the speed and amount of tasks completed, as well as user satisfaction metrics of LLM helpfulness.
In total, we selected 6 LLMs for our study: 4 from the Code Llama family (CodeLlama-7b, CodeLlama-7b-instruct, CodeLlama-34b, CodeLlama-34b-instruct), along with two models from the GPT series (GPT-3.5-turbo and GPT-3.5-turbo-instruct). To avoid confusion, we refer to the autocomplete conditions by the base name of the model: CodeLlama-7b, CodeLlama-34b and GPT-3.5 (refers to GPT-3.5-turbo-instruct); and the chat conditions by the base name of the model and adding chat: CodeLlama-7b (chat) (refers to CodeLlama-7b-instruct), CodeLlama-34b (chat) (refers to CodeLlama-34b-instruct) and GPT-3.5 (chat) (refers to GPT-3.5-turbo).
The data released consists of four parts:
- chat (chat_data.csv): contains the chat logs of the conversations between the study participants and the LLMs.
- autocomplete (autocomplete_data.csv): for each suggestion shown in the autocomplete conditions, we log whether it was accepted along with the prompt sent to the LLM.
- tasks (tasks_data.csv): the tasks that the participants were asked to complete.
- study (study_data.csv and study_data.pkl): a dataframe of processed information for each participant (e.g., how many tasks they completed, their code history, how many suggestions they accepted ...). Use the pickle version of this file for the most accurate representation of the data.
## Usage
We will update the loading script at a later point; for now, you can use the following helper to load the data:
```python
from datasets import load_dataset, DatasetDict
import json
import pandas as pd
def load_dataset_realhumaneval(subset="all"):
"""
Loads the RealHumanEval dataset according to the specified subset.
Parameters:
- subset (str): Specifies the subset of the dataset to load. Options are "all", "chat",
"autocomplete", "study", "tasks". Default is "all".
Returns:
- A dictionary of datasets (if subset is "all") or a single dataset for the specified subset.
"""
valid_subsets = ["all", "chat", "autocomplete", "study", "tasks"]
if subset not in valid_subsets:
raise ValueError(f"subset must be one of {valid_subsets}")
data_files_paths = {
"autocomplete": "autocomplete/autocomplete_data.csv",
"chat": "chat/chat_data.csv",
"tasks": "tasks/tasks_data.csv",
"study": "study/study_data.csv",
}
datasets_loaded = {
key: load_dataset("hsseinmz/realhumaneval", data_files=path)['train']
for key, path in data_files_paths.items()
}
datasets_loaded["autocomplete"] = datasets_loaded["autocomplete"].map(
lambda x: {'logprobs': eval(x['logprobs'])}
)
datasets_loaded["chat"] = datasets_loaded["chat"].map(
lambda x: {'logprobs': eval(x['logprobs']), 'copy_events': eval(x['copy_events'])}
)
datasets_loaded["study"] = datasets_loaded["study"].map(
lambda x: {
'code_history': pd.read_json(x['code_history']),
'task_data': json.loads(x['task_data']),
'task_completion_durations': eval(x['task_completion_durations'])
}
)
dataset_hf = DatasetDict(datasets_loaded) if subset == "all" else datasets_loaded[subset]
return dataset_hf
```
then to load any subset or all the data:
```python
dataset = load_dataset_realhumaneval(subset = "all")
```
## Dataset Details
You can find more information about the data in our paper https://arxiv.org/abs/2404.02806 or our GitHub repository https://github.com/clinicalml/realhumaneval
# Citation
```
@misc{mozannar2024realhumaneval,
title={The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers},
author={Hussein Mozannar and Valerie Chen and Mohammed Alsobay and Subhro Das and Sebastian Zhao and Dennis Wei and Manish Nagireddy and Prasanna Sattigeri and Ameet Talwalkar and David Sontag},
year={2024},
eprint={2404.02806},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
``` |
Chat-UniVi/Chat-UniVi-Eval | ---
license: apache-2.0
---
# Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding
**Paper or resources for more information:**
[[Paper](https://huggingface.co/papers/2311.08046)] [[Code](https://github.com/PKU-YuanGroup/Chat-UniVi)] |
nlpproject2023/Paragraphs | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: context
struct:
- name: sentences
sequence:
sequence: string
- name: title
sequence: string
splits:
- name: validation
num_bytes: 5269319
num_examples: 4523
- name: test
num_bytes: 8548487
num_examples: 7405
download_size: 8899516
dataset_size: 13817806
---
# Dataset Card for "Paragraphs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xwjzds/pretrain_sts_extend | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1664682
num_examples: 5657
download_size: 1024627
dataset_size: 1664682
---
# Dataset Card for "pretrain_sts_extend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kakashi38746/Naofumi | ---
license: openrail
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-31000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1139143
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Wanfq/Explore_Instruct_Rewriting_32k | ---
license: cc-by-nc-4.0
language:
- en
---
<p align="center" width="100%">
</p>
<div id="top" align="center">
**Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration**
<h4> |<a href="https://arxiv.org/abs/2310.09168"> 📑 Paper </a> |
<a href="https://huggingface.co/datasets?sort=trending&search=Explore_Instruct"> 🤗 Data </a> |
<a href="https://huggingface.co/models?sort=trending&search=Explore-LM"> 🤗 Model </a> |
<a href="https://github.com/fanqiwan/Explore-Instruct"> 🐱 Github Repo </a> |
</h4>
<!-- **Authors:** -->
_**Fanqi Wan<sup>†</sup>, Xinting Huang<sup>‡</sup>, Tao Yang<sup>†</sup>, Xiaojun Quan<sup>†</sup>, Wei Bi<sup>‡</sup>, Shuming Shi<sup>‡</sup>**_
<!-- **Affiliations:** -->
_<sup>†</sup> Sun Yat-sen University,
<sup>‡</sup> Tencent AI Lab_
</div>
## News
- **Oct 16, 2023:** 🔥 We're excited to announce that the Explore-Instruct datasets in brainstorming, rewriting, and math domains are now available on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct)! Additionally, we've released Explore-LM models that have been initialized with LLaMA-7B and fine-tuned with the Explore-Instruct data in each domain. You can find these models on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Happy exploring and instructing!
## Contents
- [Overview](#overview)
- [Data Release](#data-release)
- [Model Release](#model-release)
- [Data Generation Process](#data-generation-process)
- [Fine-tuning](#fine-tuning)
- [Evaluation](#evaluation)
- [Limitations](#limitations)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgments)
## Overview
We propose Explore-Instruct, a novel approach to enhancing domain-specific instruction coverage. We posit that the domain space is inherently structured akin to a tree, reminiscent of cognitive science ontologies. Drawing from the essence of classical search algorithms and incorporating the power of LLMs, Explore-Instruct is conceived to actively traverse the domain space and generate instruction-tuning data, **not** necessitating a predefined tree structure. Specifically, Explore-Instruct employs two strategic operations: lookahead and backtracking exploration:
- **Lookahead** delves into a multitude of potential fine-grained sub-tasks, thereby mapping out a complex network of tasks
- **Backtracking** seeks alternative branches to widen the search boundary, hence extending the domain spectrum.
<p align="center">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig2.png?raw=true" width="95%"> <br>
</p>
## Data Release
We release the Explore-Instruct data in brainstorming, rewriting, and math domains on 🤗 [Huggingface Datasets](https://huggingface.co/datasets?sort=trending&search=Explore_Instruct). Each domain includes two versions of the dataset: a basic and an extended version. The basic version contains 10k instruction-tuning examples, and the extended versions contain 16k, 32k, and 64k instruction-tuning examples for each domain respectively. Each dataset is a structured data file in the JSON format. It consists of a list of dictionaries, with each dictionary containing the following fields:
- `instruction`: `str`, describes the task the model should perform.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, ground-truth output text for the task and input text.
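Since fine-tuning below passes `--prompt_type alpaca`, each record is presumably rendered into an Alpaca-style prompt before training. A hedged sketch (the exact boilerplate wording is an assumption, not taken from the training code):

```python
def format_alpaca(record):
    # Render an {instruction, input, output} record as an Alpaca-style
    # training prompt; the boilerplate wording here is an assumption.
    if record.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            f"### Response:\n{record['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['output']}"
    )
```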
The results of data-centric analysis are shown as follows:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig1.png?raw=true" width="50%"> <br>
</p>
| Method | Brainstorming Unique<br/>V-N pairs | Rewriting Unique<br/>V-N pairs | Math Unique<br/>V-N pairs |
|:--------------------------------|:----------------------------------:|:------------------------------:|:-------------------------:|
| _Domain-Specific Human-Curated_ | 2 | 8 | 3 |
| _Domain-Aware Self-Instruct_ | 781 | 1715 | 451 |
| Explore-Instruct | **790** | **2015** | **917** |
## Model Release
We release the Explore-LM models in brainstorming, rewriting, and math domains on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=Explore-LM). Each domain includes two versions of models: the basic and extended version trained with the corresponding version of dataset.
The results of automatic and human evaluation in three domains are shown as follows:
- Automatic evaluation:
| Automatic Comparison in the Brainstorming Domain | Win:Tie:Lose | Beat Rate |
|:-------------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 194:1:13 | 93.72 |
| Explore-LM-Ext vs Domain-Curated-LM | 196:1:11 | 94.69 |
| Explore-LM vs Domain-Instruct-LM | 114:56:38 | 75.00 |
| Explore-LM-Ext vs Domain-Instruct-LM | 122:55:31 | 79.74 |
| Explore-LM vs ChatGPT | 52:71:85 | 37.96 |
| Explore-LM-Ext vs ChatGPT | 83:69:56 | 59.71 |
| Automatic Comparison in the Rewriting Domain | Win:Tie:Lose | Beat Rate |
|:---------------------------------------------|:------------:|:---------:|
| Explore-LM vs Domain-Curated-LM | 50:38:6 | 89.29 |
| Explore-LM-Ext vs Domain-Curated-LM | 53:37:4 | 92.98 |
| Explore-LM vs Domain-Instruct-LM | 34:49:11 | 75.56 |
| Explore-LM-Ext vs Domain-Instruct-LM | 35:53:6 | 85.37 |
| Explore-LM vs ChatGPT | 11:59:24 | 31.43 |
| Explore-LM-Ext vs ChatGPT | 12:56:26 | 31.58 |
| Automatic Comparison in the Math Domain | Accuracy Rate |
|:----------------------------------------|:-------------:|
| Domain-Curated-LM | 3.4 |
| Domain-Instruct-LM | 4.0 |
| Explore-LM | 6.8 |
| Explore-LM-Ext | 8.4 |
| ChatGPT | 34.8 |
- Human evaluation:
<p align="left">
<img src="https://github.com/fanqiwan/Explore-Instruct/blob/main/assets/fig5.png?raw=true" width="95%"> <br>
</p>
## Data Generation Process
To generate the domain-specific instruction-tuning data, please run the following commands step by step:
### Domain Space Exploration
```
python3 generate_instruction.py \
--action extend \
--save_dir ./en_data/demo_domain \ # input dir include current domain tree for exploration
--out_dir ./en_data/demo_domain_exploration \ # output dir of the explored new domain tree
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--extend_nums <TASK_NUMBER_DEPTH_0>,...,<TASK_NUMBER_DEPTH_MAX_DEPTH-1> \ # exploration breadth at each depth
--max_depth <MAX_DEPTH> \ # exploration depth
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Instruction-Tuning Data Generation
```
python3 generate_instruction.py \
--action enrich \
--save_dir ./en_data/demo_domain_exploration \ # input dir include current domain tree for data generation
--out_dir ./en_data/demo_domain_generation \ # output dir of the domain tree with generated data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--enrich_nums <DATA_NUMBER_DEPTH_0>,...,<DATA_NUMBER_DEPTH_MAX_DEPTH> \ # data number for task at each depth
--enrich_batch_size <BATCH_SIZE> \ # batch size for data generation
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Task Pruning
```
python3 generate_instruction.py \
--action prune \
--save_dir ./en_data/demo_domain_generation \ # input dir include current domain tree for task pruning
--out_dir ./en_data/demo_domain_pruning \ # output dir of the domain tree with 'pruned_subtasks_name.json' file
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--prune_threshold <PRUNE_THRESHOLD> \ # threshold of rouge-l overlap between task names
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Filtering
```
python3 generate_instruction.py \
--action filter \
--save_dir ./en_data/demo_domain_pruning \ # input dir include current domain tree for data filtering
--out_dir ./en_data/demo_domain_filtering \ # output dir of the domain tree with fitered data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_pruning/pruned_subtasks_name.json \ # file of pruned tasks
--filter_threshold <FILTER_THRESHOLD> \ # threshold of rouge-l overlap between instructions
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
### Data Sampling
```
python3 generate_instruction.py \
--action sample \
--save_dir ./en_data/demo_domain_filtering \ # input dir include current domain tree for data sampling
--out_dir ./en_data/demo_domain_sampling \ # output dir of the domain tree with sampled data
--lang <LANGUAGE> \ # currently support 'en'
--domain demo_domain \ # domain for exploration
--pruned_file ./en_data/demo_domain_filtering/pruned_subtasks_name.json \ # file of pruned tasks
--sample_example_num <SAMPLE_EXAMPLES_NUM> \ # number of sampled examples
--sample_max_depth <SAMPLE_MAX_DEPTH> \ # max depth for data sampling
--sample_use_pruned \ # do not sample from pruned tasks
--assistant_name <ASSISTANT_NAME> # currently support openai and claude
```
## Fine-tuning
We fine-tune LLaMA-7B with the following hyperparameters:
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
|:----------------|-------------------:|---------------:|--------:|------------:|--------------:|
| LLaMA 7B | 128 | 2e-5 | 3 | 512| 0 |
To reproduce the training procedure, please use the following command:
```
deepspeed --num_gpus=8 ./train/train.py \
--deepspeed ./deepspeed_config/deepspeed_zero3_offload_config.json \
--model_name_or_path decapoda-research/llama-7b-hf \
--data_path ./en_data/demo_domain_sampling \
--fp16 True \
--output_dir ./training_results/explore-lm-7b-demo-domain \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--model_max_length 512 \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--prompt_type alpaca \
2>&1 | tee ./training_logs/explore-lm-7b-demo-domain.log
python3 ./train/zero_to_fp32.py \
--checkpoint_dir ./training_results/explore-lm-7b-demo-domain \
--output_file ./training_results/explore-lm-7b-demo-domain/pytorch_model.bin
```
## Evaluation
The evaluation datasets for different domains are as follows:
- Brainstorming and Rewriting: From the corresponding categories in the translated test set of BELLE. ([en_eval_set.jsonl](./eval/question/en_eval_set.jsonl))
- Math: From 500 randomly selected questions from the test set of MATH. ([MATH_eval_set_sample.jsonl](./eval/question/MATH_eval_set_sample.jsonl))
The evaluation metrics for different domains are as follows:
- Brainstorming and Rewriting: Both automatic and human evaluations following Vicuna.
- Math: Accuracy Rate metric in solving math problems.
The automatic evaluation commands for different domains are as follows:
```
# Brainstorming and Rewriting Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/en_eval_set.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 1 \
--temperature 0.7 \
--max_new_tokens 512 \
--prompt_type alpaca \
--do_sample
# 2. Evaluation
python3 ./eval/chatgpt_score.py \
--baseline_file ./eval/answer/<MODEL_1>.jsonl \ # answer of baseline model to compare with
--answer_file ./eval/answer/<MODEL_2>.jsonl \ # answer of evaluation model
--review_file ./eval/review/<MODEL_1>_cp_<MODEL_2>_<DOMAIN>.jsonl \ # review from chatgpt
--prompt_file ./eval/prompt/en_review_prompt_compare.jsonl \ # evaluation prompt for chatgpt
--target_classes <DOMAIN> \ # evaluation domain
--batch_size <BATCH_SIZE> \
--review_model "gpt-3.5-turbo-0301"
```
```
# Math Domain
# 1. Inference
python3 ./eval/generate.py \
--model_id <MODEL_ID> \
--model_path <MODEL_PATH> \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl \
--num_gpus 8 \
--num_beams 10 \
--temperature 1.0 \
--max_new_tokens 512 \
--prompt_type alpaca
# 2. Evaluation
python3 ./eval/auto_eval.py \
--question_file ./eval/question/MATH_eval_set_sample.jsonl \
--answer_file ./eval/answer/<MODEL_ID>.jsonl # answer of evaluation model
```
## Limitations
Explore-Instruct is still under development and needs a lot of improvements. We acknowledge that our work focuses on the enhancement of domain-specific instruction coverage and does not address other aspects of instruction-tuning, such as the generation of complex and challenging instructions or the mitigation of toxic and harmful instructions. Future work is needed to explore the potential of our approach in these areas.
## License
Explore-Instruct is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. The weights of Explore-LM models are also CC BY NC 4.0 (allowing only non-commercial use).
## Citation
If you find this work relevant to your research or applications, please feel free to cite our work!
```
@misc{wan2023explore,
title={Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration},
author={Fanqi, Wan and Xinting, Huang and Tao, Yang and Xiaojun, Quan and Wei, Bi and Shuming, Shi},
year={2023},
eprint={2310.09168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgments
This repo benefits from [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Vicuna](https://github.com/lm-sys/FastChat). Thanks for their wonderful works!
|
sam-mosaic/sql-prompts | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 12853227
num_examples: 18846
download_size: 5154754
dataset_size: 12853227
---
# Dataset Card for "sql-prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZurabDz/geo_large_corpus | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 14397567044
num_examples: 41886434
download_size: 5031762025
dataset_size: 14397567044
---
# Dataset Card for "geo_large_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/70fd4f5c | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 204
num_examples: 10
download_size: 1419
dataset_size: 204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "70fd4f5c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lmqg/qag_ruquad | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: ru
multilinguality: monolingual
size_categories: 1K<n<10K
source_datasets: lmqg/qg_ruquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_ruquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the RUQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Russian (ru)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": " Everybody , как и хотела Мадонна, выпускают синглом. При нулевом бюджете на раскрутку фото певицы решают не помещать на обложке, чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы . Everybody поднимается на 3-е место в чарте Hot Dance Club Songs, а потом на 107 место в основном, немного не дотянув до первой сотни Hot 100 журнала Billboard[91]. Менеджмент считает это отличным результатом, учитывая нулевые затраты на пиар, и хочет убедиться, что взлёт Everybody не случаен. По просьбе Мадонны вместо Каминса берут более опытного штатного аранжировщика Warner Bros. Records Регги Лукаса (англ.)русск.. Второй сингл Burning Up тоже достигает в чарте танцевальных хитов 3-го места, повторив успех Everybody . И только после этого Мадонне позволяют арендовать студию для записи первого альбома[91].",
"questions": [ "При каком бюджете на раскрутку фото певицы решают не помещать на обложке ?", "Какой альбом Мадонны выпускают синглом?", "Имя более опытного штатного аранжировщика берут по просьбе Мадонны вместо Каминсаболее ?", "Почему при нулевом бджете фото певицы решают не помещать на обложке ?", "На каое место Everybody поднимается в чарте Hot Dance Club Songs?" ],
"answers": [ "При нулевом", " Everybody ", "Warner Bros", "чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы ", "на 3-е место" ],
"questions_answers": "question: При каком бюджете на раскрутку фото певицы решают не помещать на обложке ?, answer: При нулевом | question: Какой альбом Мадонны выпускают синглом?, answer: Everybody | question: Имя более опытного штатного аранжировщика берут по просьбе Мадонны вместо Каминсаболее ?, answer: Warner Bros | question: Почему при нулевом бджете фото певицы решают не помещать на обложке ?, answer: чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы | question: На каое место Everybody поднимается в чарте Hot Dance Club Songs?, answer: на 3-е место"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
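For convenience, the concatenated `questions_answers` string can be split back into individual pairs. A minimal sketch, assuming the `question: …, answer: … | …` layout shown in the example above (and that answers never contain the ` | ` separator):

```python
# Split the concatenated `questions_answers` field back into
# (question, answer) pairs. Assumes the "question: <q>, answer: <a> | ..."
# layout shown in the example instance above.
def parse_qa_pairs(questions_answers: str):
    pairs = []
    for chunk in questions_answers.split(" | "):
        q_part, a_part = chunk.split(", answer:", 1)
        pairs.append((q_part.removeprefix("question:").strip(), a_part.strip()))
    return pairs

example = "question: Q1?, answer: A1 | question: Q2?, answer: A2"
print(parse_qa_pairs(example))  # [('Q1?', 'A1'), ('Q2?', 'A2')]
```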
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|10407| 4079 | 4017|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
CyberHarem/irako_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of irako (Kantai Collection)
This is the dataset of irako (Kantai Collection), containing 315 images and their tags.
The core tags of this character are `long_hair, ponytail, green_hair, ribbon, hair_ribbon, green_eyes, breasts, antenna_hair, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 315 | 330.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 315 | 193.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 704 | 397.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 315 | 288.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 704 | 551.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/irako_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, simple_background, solo, cleavage, looking_at_viewer, white_background, black_bikini, bikini_skirt, navel, cowboy_shot, smile, twitter_username, collarbone, one-hour_drawing_challenge |
| 1 | 5 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, navel, open_clothes, solo, bikini_skirt, black_bikini, collarbone, cowboy_shot, simple_background, smile, white_jacket, open_mouth |
| 2 | 15 |  |  |  |  |  | 1girl, alternate_costume, green_skirt, looking_at_viewer, smile, solo, simple_background, white_shirt, long_skirt, long_sleeves, blouse, full_body, white_background, bow, standing, open_mouth |
| 3 | 6 |  |  |  |  |  | 1girl, alternate_costume, pleated_skirt, sailor_collar, simple_background, solo, black_footwear, black_pantyhose, long_sleeves, looking_at_viewer, neckerchief, white_background, black_serafuku, black_skirt, full_body, loafers, smile, standing, open_mouth |
| 4 | 17 |  |  |  |  |  | 1girl, blue_skirt, kappougi, simple_background, solo, looking_at_viewer, smile, full_body, pink_shirt, white_background, sandals, tabi, standing, long_sleeves, open_mouth, red_necktie, white_socks, food, tray |
| 5 | 7 |  |  |  |  |  | 1girl, hair_bow, kappougi, looking_at_viewer, solo, blush, black_hair, necktie, twitter_username, upper_body, smile |
| 6 | 6 |  |  |  |  |  | 2girls, kappougi, open_mouth, ahoge, necktie, :d, black_hair, brown_hair, hair_bow, pink_shirt, upper_body |
| 7 | 9 |  |  |  |  |  | 1girl, bow, looking_at_viewer, open_mouth, santa_costume, solo, smile, alternate_costume, red_dress, christmas, fur-trimmed_dress, blush, cake, plate, simple_background, white_background, white_thighhighs |
| 8 | 13 |  |  |  |  |  | 1girl, rabbit_ears, solo, detached_collar, looking_at_viewer, playboy_bunny, wrist_cuffs, cleavage, fake_animal_ears, simple_background, rabbit_tail, strapless_leotard, alternate_costume, black_pantyhose, smile, white_background, black_leotard, cowboy_shot, red_bowtie, sitting |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | simple_background | solo | cleavage | looking_at_viewer | white_background | black_bikini | bikini_skirt | navel | cowboy_shot | smile | twitter_username | collarbone | one-hour_drawing_challenge | open_clothes | white_jacket | open_mouth | alternate_costume | green_skirt | white_shirt | long_skirt | long_sleeves | blouse | full_body | bow | standing | pleated_skirt | sailor_collar | black_footwear | black_pantyhose | neckerchief | black_serafuku | black_skirt | loafers | blue_skirt | kappougi | pink_shirt | sandals | tabi | red_necktie | white_socks | food | tray | hair_bow | blush | black_hair | necktie | upper_body | 2girls | ahoge | :d | brown_hair | santa_costume | red_dress | christmas | fur-trimmed_dress | cake | plate | white_thighhighs | rabbit_ears | detached_collar | playboy_bunny | wrist_cuffs | fake_animal_ears | rabbit_tail | strapless_leotard | black_leotard | red_bowtie | sitting |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:-----------|:--------------------|:-------------------|:---------------|:---------------|:--------|:--------------|:--------|:-------------------|:-------------|:-----------------------------|:---------------|:---------------|:-------------|:--------------------|:--------------|:--------------|:-------------|:---------------|:---------|:------------|:------|:-----------|:----------------|:----------------|:-----------------|:------------------|:--------------|:-----------------|:--------------|:----------|:-------------|:-----------|:-------------|:----------|:-------|:--------------|:--------------|:-------|:-------|:-----------|:--------|:-------------|:----------|:-------------|:---------|:--------|:-----|:-------------|:----------------|:------------|:------------|:--------------------|:-------|:--------|:-------------------|:--------------|:------------------|:----------------|:--------------|:-------------------|:--------------|:--------------------|:----------------|:-------------|:----------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | X | X | | X | X | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | | X | X | | | | | X | | | | | | X | X | | | | X | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 17 |  |  |  |  |  | X | X | X | | X | X | | | | | X | | | | | | X | | | | | X | | X | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | X | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | | | | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | X | X | | X | X | | | | | X | | | | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | |
| 8 | 13 |  |  |  |  |  | X | X | X | X | X | X | | | | X | X | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
takiholadi/kill-me-please-dataset | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ru
multilinguality:
- monolingual
pretty_name: Kill-Me-Please Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- stories
- website
task_categories:
- text-generation
- text-classification
---
# Dataset Card for Kill-Me-Please Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Repository:** [github pet project repo](https://github.com/takiholadi/generative-kill-me-please)
### Dataset Summary
It is a Russian-language dataset containing just over 30k unique stories written by users of https://killpls.me between March 2009 and October 2022. The resource was later blocked by Roskomnadzor, so consider the text-generation task if you want more stories.
### Languages
ru-RU
## Dataset Structure
### Data Instances
Here is an example of instance:
```
{'text': 'По глупости удалил всю 10 летнюю базу. Восстановлению не подлежит. Мне конец. КМП!',
 'tags': 'техника',
 'votes': 2914,
 'url': 'https://killpls.me/story/616',
 'datetime': '4 июля 2009, 23:20'}
```
### Data Fields
- `text`: a string containing the body of the story
- `tags`: a string containing comma-separated tags in a multi-label setup; the full set of tags (except for one empty-tagged record) is: `внешность`, `деньги`, `друзья`, `здоровье`, `отношения`, `работа`, `разное`, `родители`, `секс`, `семья`, `техника`, `учеба`
- `votes`: an integer sum of upvotes/downvotes
- `url`: a string containing the url where the story was web-scraped from
- `datetime`: a string containing the datetime the story was written
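Since `tags` is a single comma-separated string, multi-label use needs a small split step; a sketch (the field layout is taken from the example instance above):

```python
# Turn the comma-separated `tags` string into a list of labels
# suitable for multi-label classification.
def split_tags(tags: str):
    return [t.strip() for t in tags.split(",") if t.strip()]

record = {"text": "...", "tags": "семья, отношения", "votes": 10}
print(split_tags(record["tags"]))  # ['семья', 'отношения']
```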
### Data Splits
The dataset has two multi-label stratified splits: train and test.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 27,321 |
| Test | 2,772 |
|
ACOSharma/literature | ---
license: cc-by-sa-4.0
---
# Literature Dataset
## Files
A dataset containing novels, epics and essays.
The files are as follows:
- main.txt, a file with all the texts, every text on a newline, all English
- vocab.txt, a file with the trained (BERT) vocab, one word per line
- train.csv, a CSV file of integer token IDs with one length-129 sequence per row, containing 48,758 samples (6,289,782 tokens)
- test.csv, the test split in the same format, 5,417 samples (698,793 tokens)
- DatasetDistribution.png, a plot of the character lengths of all the texts
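The CSV splits can be read back into token-ID sequences with the standard `csv` module; a minimal sketch (demonstrated on an in-memory stand-in rather than the actual train.csv):

```python
import csv
import io

def load_sequences(f):
    """Parse rows of comma-separated integer token IDs into lists of ints."""
    return [[int(tok) for tok in row] for row in csv.reader(f) if row]

# tiny stand-in for train.csv; real rows hold 129 token IDs each
demo = io.StringIO("101,2054,102\n101,2003,102\n")
print(load_sequences(demo))  # [[101, 2054, 102], [101, 2003, 102]]
```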
## Texts
The texts used are these:
- Wuthering Heights
- Ulysses
- Treasure Island
- The War of the Worlds
- The Republic
- The Prophet
- The Prince
- The Picture of Dorian Gray
- The Odyssey
- The Great Gatsby
- The Brothers Karamazov
- Second Treatise of Government
- Pride and Prejudice
- Peter Pan
- Moby Dick
- Metamorphosis
- Little Women
- Les Misérables
- Japanese Girls and Women
- Iliad
- Heart of Darkness
- Grimms' Fairy Tales
- Great Expectations
- Frankenstein
- Emma
- Dracula
- Don Quixote
- Crime and Punishment
- Christmas Carol
- Beyond Good and Evil
- Anna Karenina
- Adventures of Sherlock Holmes
- Adventures of Huckleberry Finn
- Adventures in Wonderland
- A Tale of Two Cities
- A Room with A View |
arieg/cluster00_large_10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '000212'
'1': 003708
'2': '005171'
'3': 009557
'4': 009559
'5': 009678
'6': 010384
'7': 010386
'8': 010807
'9': '013325'
'10': '014735'
'11': 014739
'12': 019187
'13': '023041'
'14': 024915
'15': '036614'
'16': 039188
'17': '040242'
'18': '040243'
'19': 040985
'20': 045128
'21': '051271'
'22': '054667'
'23': '054703'
'24': 059451
'25': '062164'
'26': '067007'
'27': '067237'
'28': '067357'
'29': '067557'
'30': 072738
'31': '073465'
'32': 073468
'33': 074391
'34': 075925
'35': 080003
'36': 085482
'37': 085484
'38': 085485
'39': 085489
'40': 087190
'41': 087363
'42': 088854
'43': 095249
'44': 095251
'45': 098622
'46': 099411
'47': '106458'
'48': '107617'
'49': '107909'
'50': '108477'
'51': '108881'
'52': '109203'
'53': '109355'
'54': '109903'
'55': '113511'
'56': '113973'
'57': '114199'
'58': '114413'
'59': '117627'
'60': '118087'
'61': '118195'
'62': '118222'
'63': '118738'
'64': '118986'
'65': '122079'
'66': '122354'
'67': '122395'
'68': '122628'
'69': '123438'
'70': '123474'
'71': '123505'
'72': '125187'
'73': '125194'
'74': '125723'
'75': '126669'
'76': '126674'
'77': '126743'
'78': '126749'
'79': '127184'
'80': '127205'
'81': '127273'
'82': '127275'
'83': '127298'
'84': '127300'
'85': '129694'
'86': '130940'
'87': '130945'
'88': '131292'
'89': '132272'
'90': '133793'
'91': '136094'
'92': '137719'
'93': '138016'
'94': '138210'
'95': '138282'
'96': '138406'
'97': '138415'
'98': '141179'
'99': '143095'
'100': '145241'
'101': '146988'
'102': '148285'
'103': '148585'
'104': '149143'
splits:
- name: train
num_bytes: 56601913.95
num_examples: 1050
download_size: 56522174
dataset_size: 56601913.95
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
selfrag/selfrag_train_data | ---
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
This is the training data for [Self-RAG](https://selfrag.github.io/), which generates outputs for diverse user queries as well as reflection tokens to call the retrieval system adaptively and critique its own output and retrieved passages.
Self-RAG is trained on our 150k diverse instruction-output pairs with interleaving passages and reflection tokens using the standard next-token prediction objective, enabling efficient and stable learning with fine-grained feedback.
At inference, we leverage reflection tokens covering diverse aspects of generation to sample the output best aligned with users' preferences. See full descriptions in [our paper](https://arxiv.org/abs/2310.11511) and [code](https://github.com/AkariAsai/self-rag).
## Citation and contact
If you use this model, please cite our work:
```
@article{asai2023selfrag,
author = {Asai, Akari and Wu, Zeqiu and Wang, Yizhong and Sil, Avirup and Hajishirzi, Hannaneh},
title = {{Self-RAG}: Learning to Retrieve, Generate, and Critique through Self-Reflection},
year = {2023},
journal = { arXiv preprint arXiv:2310.11511 },
URL = {https://arxiv.org/abs/2310.11511}
}
``` |
joey234/mmlu-clinical_knowledge-neg-prepend-verbal | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: neg_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
splits:
- name: dev
num_bytes: 6767
num_examples: 5
- name: test
num_bytes: 2000689
num_examples: 265
download_size: 212042
dataset_size: 2007456
---
# Dataset Card for "mmlu-clinical_knowledge-neg-prepend-verbal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nlu_evaluation_data | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
pretty_name: NLU Evaluation Data
dataset_info:
features:
- name: text
dtype: string
- name: scenario
dtype: string
- name: label
dtype:
class_label:
names:
'0': alarm_query
'1': alarm_remove
'2': alarm_set
'3': audio_volume_down
'4': audio_volume_mute
'5': audio_volume_other
'6': audio_volume_up
'7': calendar_query
'8': calendar_remove
'9': calendar_set
'10': cooking_query
'11': cooking_recipe
'12': datetime_convert
'13': datetime_query
'14': email_addcontact
'15': email_query
'16': email_querycontact
'17': email_sendemail
'18': general_affirm
'19': general_commandstop
'20': general_confirm
'21': general_dontcare
'22': general_explain
'23': general_greet
'24': general_joke
'25': general_negate
'26': general_praise
'27': general_quirky
'28': general_repeat
'29': iot_cleaning
'30': iot_coffee
'31': iot_hue_lightchange
'32': iot_hue_lightdim
'33': iot_hue_lightoff
'34': iot_hue_lighton
'35': iot_hue_lightup
'36': iot_wemo_off
'37': iot_wemo_on
'38': lists_createoradd
'39': lists_query
'40': lists_remove
'41': music_dislikeness
'42': music_likeness
'43': music_query
'44': music_settings
'45': news_query
'46': play_audiobook
'47': play_game
'48': play_music
'49': play_podcasts
'50': play_radio
'51': qa_currency
'52': qa_definition
'53': qa_factoid
'54': qa_maths
'55': qa_stock
'56': recommendation_events
'57': recommendation_locations
'58': recommendation_movies
'59': social_post
'60': social_query
'61': takeaway_order
'62': takeaway_query
'63': transport_query
'64': transport_taxi
'65': transport_ticket
'66': transport_traffic
'67': weather_query
splits:
- name: train
num_bytes: 1447941
num_examples: 25715
download_size: 5867439
dataset_size: 1447941
---
# Dataset Card for NLU Evaluation Data
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/xliuhw/NLU-Evaluation-Data)
- **Repository:** [Github](https://github.com/xliuhw/NLU-Evaluation-Data)
- **Paper:** [ArXiv](https://arxiv.org/abs/1903.05566)
- **Leaderboard:**
- **Point of Contact:** [x.liu@hw.ac.uk](mailto:x.liu@hw.ac.uk)
### Dataset Summary
A dataset of short utterances from the conversational domain, annotated with their corresponding intents and scenarios.
It has 25 715 non-empty examples (the original dataset has 25 716) belonging to 18 scenarios and 68 intents.
Originally, the dataset was crowd-sourced and annotated with both intents and named entities
in order to evaluate commercial NLU systems such as RASA, IBM's Watson, Microsoft's LUIS and Google's Dialogflow.
**This version of the dataset only includes intent annotations!**
In contrast to paper claims, released data contains 68 unique intents. This is due to the fact, that NLU systems were
evaluated on more curated part of this dataset which only included 64 most important intents. Read more in [github issue](https://github.com/xliuhw/NLU-Evaluation-Data/issues/5).
### Supported Tasks and Leaderboards
Intent classification, intent detection
### Languages
English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows:
```
{
'label': 2, # integer label corresponding to "alarm_set" intent
'scenario': 'alarm',
'text': 'wake me up at five am this week'
}
```
### Data Fields
- `text`: a string feature.
- `label`: one of classification labels (0-67) corresponding to unique intents.
- `scenario`: a string with one of unique scenarios (18).
Intent names are mapped to `label` in the following way:
| label | intent |
|--------:|:-------------------------|
| 0 | alarm_query |
| 1 | alarm_remove |
| 2 | alarm_set |
| 3 | audio_volume_down |
| 4 | audio_volume_mute |
| 5 | audio_volume_other |
| 6 | audio_volume_up |
| 7 | calendar_query |
| 8 | calendar_remove |
| 9 | calendar_set |
| 10 | cooking_query |
| 11 | cooking_recipe |
| 12 | datetime_convert |
| 13 | datetime_query |
| 14 | email_addcontact |
| 15 | email_query |
| 16 | email_querycontact |
| 17 | email_sendemail |
| 18 | general_affirm |
| 19 | general_commandstop |
| 20 | general_confirm |
| 21 | general_dontcare |
| 22 | general_explain |
| 23 | general_greet |
| 24 | general_joke |
| 25 | general_negate |
| 26 | general_praise |
| 27 | general_quirky |
| 28 | general_repeat |
| 29 | iot_cleaning |
| 30 | iot_coffee |
| 31 | iot_hue_lightchange |
| 32 | iot_hue_lightdim |
| 33 | iot_hue_lightoff |
| 34 | iot_hue_lighton |
| 35 | iot_hue_lightup |
| 36 | iot_wemo_off |
| 37 | iot_wemo_on |
| 38 | lists_createoradd |
| 39 | lists_query |
| 40 | lists_remove |
| 41 | music_dislikeness |
| 42 | music_likeness |
| 43 | music_query |
| 44 | music_settings |
| 45 | news_query |
| 46 | play_audiobook |
| 47 | play_game |
| 48 | play_music |
| 49 | play_podcasts |
| 50 | play_radio |
| 51 | qa_currency |
| 52 | qa_definition |
| 53 | qa_factoid |
| 54 | qa_maths |
| 55 | qa_stock |
| 56 | recommendation_events |
| 57 | recommendation_locations |
| 58 | recommendation_movies |
| 59 | social_post |
| 60 | social_query |
| 61 | takeaway_order |
| 62 | takeaway_query |
| 63 | transport_query |
| 64 | transport_taxi |
| 65 | transport_ticket |
| 66 | transport_traffic |
| 67 | weather_query |
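Every intent name in the table is prefixed by its scenario (`<scenario>_<action>`), so the `scenario` field can be recovered from the intent string alone; a small sketch:

```python
# Recover the scenario from an intent name: it is the prefix before the
# first underscore, e.g. "alarm_set" -> scenario "alarm".
def scenario_of(intent: str) -> str:
    return intent.split("_", 1)[0]

for intent in ("alarm_set", "iot_hue_lightdim", "qa_currency"):
    print(intent, "->", scenario_of(intent))
```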
### Data Splits
| Dataset statistics | Train |
| --- | --- |
| Number of examples | 25 715 |
| Average character length | 34.32 |
| Number of intents | 68 |
| Number of scenarios | 18 |
## Dataset Creation
### Curation Rationale
The dataset was prepared for a wide-coverage evaluation and comparison of some of the most popular NLU services.
At that time, previous benchmarks used few intents and spanned a limited number of domains. Here, the dataset
contains 68 intents from 18 scenarios, far more than any previous evaluation. For more discussion see the paper.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
> To build the NLU component we collected real user data via Amazon Mechanical Turk (AMT). We designed tasks where the Turker’s goal was to answer questions about how people would interact with the home robot, in a wide range of scenarios designed in advance, namely: alarm, audio, audiobook, calendar, cooking, datetime, email, game, general, IoT, lists, music, news, podcasts, general Q&A, radio, recommendations, social, food takeaway, transport, and weather.
The questions put to Turkers were designed to capture the different requests within each given scenario.
In the ‘calendar’ scenario, for example, these pre-designed intents were included: ‘set event’, ‘delete event’ and ‘query event’.
An example question for intent ‘set event’ is: “How would you ask your PDA to schedule a meeting with someone?” for which a user’s answer example was “Schedule a chat with Adam on Thursday afternoon”.
The Turkers would then type in their answers to these questions and select possible entities from the pre-designed suggested entities list for each of their answers.The Turkers didn’t always follow the instructions fully, e.g. for the specified ‘delete event’ Intent, an answer was: “PDA what is my next event?”; which clearly belongs to ‘query event’ Intent.
We have manually corrected all such errors either during post-processing or the subsequent annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset it to help develop better intent detection systems.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution 4.0 International License (CC BY 4.0)
### Citation Information
```
@InProceedings{XLiu.etal:IWSDS2019,
author = {Xingkun Liu, Arash Eshghi, Pawel Swietojanski and Verena Rieser},
title = {Benchmarking Natural Language Understanding Services for building Conversational Agents},
booktitle = {Proceedings of the Tenth International Workshop on Spoken Dialogue Systems Technology (IWSDS)},
month = {April},
year = {2019},
address = {Ortigia, Siracusa (SR), Italy},
publisher = {Springer},
pages = {xxx--xxx},
url = {http://www.xx.xx/xx/}
}
```
### Contributions
Thanks to [@dkajtoch](https://github.com/dkajtoch) for adding this dataset. |
iamkaikai/FUI-ART | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6410238.0
num_examples: 204
download_size: 5862362
dataset_size: 6410238.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "FUI-ART"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
taln-ls2n/semeval-2010-pre | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license: cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- n<1K
pretty_name: Preprocessed SemEval-2010 Benchmark dataset
---
# Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation
## About
SemEval-2010 is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 244 **full-text** scientific papers collected from the [ACM Digital Library](https://dl.acm.org/).
Keyphrases were annotated by readers and combined with those provided by the authors.
Details about the SemEval-2010 dataset can be found in the original paper [(Kim et al., 2010)][kim-2010].
This version of the dataset was produced by [(Boudin et al., 2016)][boudin-2016] and provides four increasingly sophisticated levels of document preprocessing:
* `lvl-1`: default text files provided by the SemEval-2010 organizers.
* `lvl-2`: for each file, we manually retrieved the original PDF file from the ACM Digital Library.
We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system and perform document logical structure detection using ParsCit v110505.
We use the detected logical structure to remove author-assigned keyphrases and select only relevant elements: title, headers, abstract, introduction, related work, body text and conclusion.
We finally apply a systematic dehyphenation at line breaks.
* `lvl-3`: we further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion.
* `lvl-4`: we abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique.
We keep the title and abstract and select the most content bearing sentences from the remaining contents.
Titles and abstracts, collected from the [SciCorefCorpus](https://github.com/melsk125/SciCorefCorpus), are also provided.
Details about how they were extracted and cleaned up can be found in [(Chaimongkol et al., 2014)][chaimongkol-2014].
Reference keyphrases are provided in stemmed form (because they were provided like this for the test split in the competition).
They are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
The <u>P</u>resent reference keyphrases are also ordered by their order of appearance in the concatenation of title and text (lvl-1).
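As an illustration, the <u>P</u>resent check boils down to contiguous sequence matching on stemmed tokens. A toy sketch, with a lowercasing stand-in in place of the Porter stemmer actually used in `prmu.py`:

```python
def stem(word: str) -> str:
    # stand-in for nltk's PorterStemmer; here we only lowercase
    return word.lower()

def is_present(keyphrase: str, text: str) -> bool:
    """A keyphrase is Present if its stemmed tokens occur contiguously
    in the stemmed, whitespace-tokenized source text."""
    kp = [stem(w) for w in keyphrase.split()]
    toks = [stem(w) for w in text.split()]
    return any(toks[i:i + len(kp)] == kp for i in range(len(toks) - len(kp) + 1))

print(is_present("graph-based ranking", "We propose a Graph-based ranking model"))  # True
```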
## Content and statistics
The dataset is divided into the following two splits:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- |------------:|-------:|-------------:|----------:|------------:|--------:|---------:|
| Train | 144 | 184.6 | 15.44 | 42.16 | 7.36 | 26.85 | 23.63 |
| Test | 100 | 203.1 | 14.66 | 40.11 | 8.34 | 27.12 | 24.43 |
Statistics (#words, PRMU distributions) are computed using the title/abstract and not the full text of scientific papers.
The following data fields are available:
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **lvl-1**: content of the document with no text processing.
- **lvl-2**: content of the document retrieved from original PDF files and cleaned up.
- **lvl-3**: content of the document further abridged to relevant sections.
- **lvl-4**: content of the document further abridged using an unsupervised summarization technique.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
## References
- (Kim et al., 2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010.
[SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles][kim-2010].
In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics.
- (Chaimongkol et al., 2014) Panot Chaimongkol, Akiko Aizawa, and Yuka Tateisi. 2014.
[Corpus for Coreference Resolution on Scientific Papers][chaimongkol-2014].
In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3187–3190, Reykjavik, Iceland. European Language Resources Association (ELRA).
- (Boudin et al., 2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016.
[How Document Pre-processing affects Keyphrase Extraction Performance][boudin-2016].
In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 121–128, Osaka, Japan. The COLING 2016 Organizing Committee.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[kim-2010]: https://aclanthology.org/S10-1004/
[chaimongkol-2014]: https://aclanthology.org/L14-1259/
[boudin-2016]: https://aclanthology.org/W16-3917/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
|
pablouribe/ocr_correction_fr | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: ocr_text
dtype: string
splits:
- name: train
num_bytes: 49989671.1
num_examples: 4500
- name: test
num_bytes: 5554407.9
num_examples: 500
download_size: 33241561
dataset_size: 55544079.0
---
# Dataset Card for "ocr_correction_fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
denizzhansahin/Turkish_News_Technology-News-2-2024 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Baslik
dtype: string
- name: Ozet
dtype: string
- name: Kategori
dtype: string
- name: Link
dtype: string
- name: Icerik
dtype: string
splits:
- name: train
num_bytes: 773333.4
num_examples: 154
- name: validation
num_bytes: 331428.6
num_examples: 66
download_size: 618167
dataset_size: 1104762.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
arieg/cluster04_medium_10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '000667'
'1': 001039
'2': 001083
'3': '001663'
'4': 001930
'5': '003766'
'6': 003840
'7': 009511
'8': '010676'
'9': '011334'
'10': 012349
'11': '012513'
'12': 013596
'13': '013666'
'14': 024368
'15': '025324'
'16': '026674'
'17': 028070
'18': 028072
'19': '032327'
'20': '037727'
'21': 039484
'22': 041095
'23': '042761'
'24': 043796
'25': 043886
'26': 044918
'27': '046024'
'28': 047895
'29': 048439
'30': '052631'
'31': 053592
'32': 058333
'33': 061493
'34': '062337'
'35': '062445'
'36': 062458
'37': '063043'
'38': '063045'
'39': '063117'
'40': 064659
'41': '067163'
'42': 069202
'43': '072456'
'44': '073342'
'45': '073343'
'46': '073371'
'47': 073486
'48': 073921
'49': 074669
'50': 080516
'51': 080517
'52': 085787
'53': 085791
'54': 086037
'55': 088870
'56': 090639
'57': 091083
'58': 091158
'59': 091159
'60': 093867
'61': 094348
'62': 096408
'63': 099419
'64': '105722'
'65': '106953'
'66': '107188'
'67': '107391'
'68': '107616'
'69': '110637'
'70': '110983'
'71': '111335'
'72': '111376'
'73': '111391'
'74': '111397'
'75': '112734'
'76': '112767'
'77': '114415'
'78': '119027'
'79': '120296'
'80': '120467'
'81': '122081'
'82': '122087'
'83': '122088'
'84': '122472'
'85': '122630'
'86': '125774'
'87': '126224'
'88': '126608'
'89': '129088'
'90': '129094'
'91': '129095'
'92': '129096'
'93': '129097'
'94': '131448'
'95': '131451'
'96': '131452'
'97': '131453'
'98': '131552'
'99': '133023'
'100': '133025'
'101': '133027'
'102': '133275'
'103': '139772'
'104': '140576'
'105': '141594'
'106': '142402'
'107': '143098'
'108': '143989'
'109': '143995'
'110': '145761'
'111': '148536'
splits:
- name: train
num_bytes: 56320865.44
num_examples: 1120
download_size: 52273224
dataset_size: 56320865.44
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
marmofayezi/M3CelebA-Test | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: caption
dtype: string
- name: landmark
dtype: image
- name: caption_fre
dtype: string
- name: caption_deu
dtype: string
- name: caption_ita
dtype: string
- name: caption_spa
dtype: string
splits:
- name: train
num_bytes: 1104063693.75
num_examples: 2998
download_size: 725132925
dataset_size: 1104063693.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kristmh/highest_high_vs_rest_5_levels | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validate
path: data/validate-*
dataset_info:
features:
- name: text_clean
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 23347042
num_examples: 34049
- name: train
num_bytes: 172674062
num_examples: 272380
- name: validate
num_bytes: 21259604
num_examples: 34047
download_size: 99361952
dataset_size: 217280708
---
# Dataset Card for "highest_high_vs_rest_5_levels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gigant/webvid-mini-frames | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
sequence: int64
splits:
- name: train
num_bytes: 843960385.0
num_examples: 3184
download_size: 843331948
dataset_size: 843960385.0
---
# Dataset Card for "webvid-mini-frames"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
martinsinnona/plotqa | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: test
num_bytes: 3405264.0
num_examples: 100
download_size: 0
dataset_size: 3405264.0
---
# Dataset Card for "dataset_plotqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marcus2000/sentiment2to1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 4281800
num_examples: 3350
- name: test
num_bytes: 441642
num_examples: 373
download_size: 2338740
dataset_size: 4723442
---
# Dataset Card for "sentiment2to1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_bigcode__starcoderbase-1b | ---
pretty_name: Evaluation run of bigcode/starcoderbase-1b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bigcode__starcoderbase-1b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-14T21:51:42.530406](https://huggingface.co/datasets/open-llm-leaderboard/details_bigcode__starcoderbase-1b/blob/main/results_2024-02-14T21-51-42.530406.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2656327745815198,\n\
\ \"acc_stderr\": 0.03133338710793329,\n \"acc_norm\": 0.26735820509373515,\n\
\ \"acc_norm_stderr\": 0.032129928643110324,\n \"mc1\": 0.2729498164014688,\n\
\ \"mc1_stderr\": 0.01559475363200652,\n \"mc2\": 0.4578928664903403,\n\
\ \"mc2_stderr\": 0.015155546755030565\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.18686006825938567,\n \"acc_stderr\": 0.011391015649694386,\n\
\ \"acc_norm\": 0.22696245733788395,\n \"acc_norm_stderr\": 0.012240491536132863\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.30392352121091415,\n\
\ \"acc_stderr\": 0.0045901000501988275,\n \"acc_norm\": 0.3430591515634336,\n\
\ \"acc_norm_stderr\": 0.004737608340163395\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768081,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768081\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2814814814814815,\n\
\ \"acc_stderr\": 0.038850042458002526,\n \"acc_norm\": 0.2814814814814815,\n\
\ \"acc_norm_stderr\": 0.038850042458002526\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.19736842105263158,\n \"acc_stderr\": 0.03238981601699397,\n\
\ \"acc_norm\": 0.19736842105263158,\n \"acc_norm_stderr\": 0.03238981601699397\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.21,\n\
\ \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.27547169811320754,\n \"acc_stderr\": 0.027495663683724067,\n\
\ \"acc_norm\": 0.27547169811320754,\n \"acc_norm_stderr\": 0.027495663683724067\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2361111111111111,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.2361111111111111,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n\
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.1907514450867052,\n\
\ \"acc_stderr\": 0.029957851329869337,\n \"acc_norm\": 0.1907514450867052,\n\
\ \"acc_norm_stderr\": 0.029957851329869337\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929776,\n\
\ \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929776\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.3276595744680851,\n \"acc_stderr\": 0.030683020843231004,\n\
\ \"acc_norm\": 0.3276595744680851,\n \"acc_norm_stderr\": 0.030683020843231004\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.041424397194893624,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.041424397194893624\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.296551724137931,\n \"acc_stderr\": 0.03806142687309994,\n\
\ \"acc_norm\": 0.296551724137931,\n \"acc_norm_stderr\": 0.03806142687309994\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.23544973544973544,\n \"acc_stderr\": 0.02185150982203171,\n \"\
acc_norm\": 0.23544973544973544,\n \"acc_norm_stderr\": 0.02185150982203171\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.03718489006818115,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.03718489006818115\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2838709677419355,\n\
\ \"acc_stderr\": 0.025649381063029268,\n \"acc_norm\": 0.2838709677419355,\n\
\ \"acc_norm_stderr\": 0.025649381063029268\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.270935960591133,\n \"acc_stderr\": 0.031270907132976984,\n\
\ \"acc_norm\": 0.270935960591133,\n \"acc_norm_stderr\": 0.031270907132976984\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\"\
: 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.23030303030303031,\n \"acc_stderr\": 0.0328766675860349,\n\
\ \"acc_norm\": 0.23030303030303031,\n \"acc_norm_stderr\": 0.0328766675860349\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.22727272727272727,\n \"acc_stderr\": 0.029857515673386414,\n \"\
acc_norm\": 0.22727272727272727,\n \"acc_norm_stderr\": 0.029857515673386414\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.22797927461139897,\n \"acc_stderr\": 0.03027690994517826,\n\
\ \"acc_norm\": 0.22797927461139897,\n \"acc_norm_stderr\": 0.03027690994517826\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.27692307692307694,\n \"acc_stderr\": 0.022688042352424994,\n\
\ \"acc_norm\": 0.27692307692307694,\n \"acc_norm_stderr\": 0.022688042352424994\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2740740740740741,\n \"acc_stderr\": 0.027195934804085626,\n \
\ \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.027195934804085626\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.23109243697478993,\n \"acc_stderr\": 0.02738140692786897,\n\
\ \"acc_norm\": 0.23109243697478993,\n \"acc_norm_stderr\": 0.02738140692786897\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2251655629139073,\n \"acc_stderr\": 0.03410435282008936,\n \"\
acc_norm\": 0.2251655629139073,\n \"acc_norm_stderr\": 0.03410435282008936\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.23853211009174313,\n \"acc_stderr\": 0.018272575810231863,\n \"\
acc_norm\": 0.23853211009174313,\n \"acc_norm_stderr\": 0.018272575810231863\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.38425925925925924,\n \"acc_stderr\": 0.03317354514310742,\n \"\
acc_norm\": 0.38425925925925924,\n \"acc_norm_stderr\": 0.03317354514310742\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.18627450980392157,\n \"acc_stderr\": 0.02732547096671631,\n \"\
acc_norm\": 0.18627450980392157,\n \"acc_norm_stderr\": 0.02732547096671631\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.2616033755274262,\n \"acc_stderr\": 0.028609516716994934,\n \
\ \"acc_norm\": 0.2616033755274262,\n \"acc_norm_stderr\": 0.028609516716994934\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.31390134529147984,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.31390134529147984,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.1984732824427481,\n \"acc_stderr\": 0.034981493854624734,\n\
\ \"acc_norm\": 0.1984732824427481,\n \"acc_norm_stderr\": 0.034981493854624734\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2644628099173554,\n \"acc_stderr\": 0.04026187527591206,\n \"\
acc_norm\": 0.2644628099173554,\n \"acc_norm_stderr\": 0.04026187527591206\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.3055555555555556,\n\
\ \"acc_stderr\": 0.044531975073749834,\n \"acc_norm\": 0.3055555555555556,\n\
\ \"acc_norm_stderr\": 0.044531975073749834\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.2085889570552147,\n \"acc_stderr\": 0.03192193448934723,\n\
\ \"acc_norm\": 0.2085889570552147,\n \"acc_norm_stderr\": 0.03192193448934723\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n\
\ \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n\
\ \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.2524271844660194,\n \"acc_stderr\": 0.04301250399690875,\n\
\ \"acc_norm\": 0.2524271844660194,\n \"acc_norm_stderr\": 0.04301250399690875\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.1752136752136752,\n\
\ \"acc_stderr\": 0.024904439098918218,\n \"acc_norm\": 0.1752136752136752,\n\
\ \"acc_norm_stderr\": 0.024904439098918218\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2822477650063857,\n\
\ \"acc_stderr\": 0.016095302969878576,\n \"acc_norm\": 0.2822477650063857,\n\
\ \"acc_norm_stderr\": 0.016095302969878576\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.23410404624277456,\n \"acc_stderr\": 0.022797110278071145,\n\
\ \"acc_norm\": 0.23410404624277456,\n \"acc_norm_stderr\": 0.022797110278071145\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n\
\ \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n\
\ \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.27124183006535946,\n \"acc_stderr\": 0.025457756696667878,\n\
\ \"acc_norm\": 0.27124183006535946,\n \"acc_norm_stderr\": 0.025457756696667878\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2797427652733119,\n\
\ \"acc_stderr\": 0.02549425935069491,\n \"acc_norm\": 0.2797427652733119,\n\
\ \"acc_norm_stderr\": 0.02549425935069491\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.27469135802469136,\n \"acc_stderr\": 0.02483605786829468,\n\
\ \"acc_norm\": 0.27469135802469136,\n \"acc_norm_stderr\": 0.02483605786829468\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2730496453900709,\n \"acc_stderr\": 0.02657786094330785,\n \
\ \"acc_norm\": 0.2730496453900709,\n \"acc_norm_stderr\": 0.02657786094330785\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.25749674054758803,\n\
\ \"acc_stderr\": 0.01116770601490415,\n \"acc_norm\": 0.25749674054758803,\n\
\ \"acc_norm_stderr\": 0.01116770601490415\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.030134614954403924,\n \
\ \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.030134614954403924\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.22712418300653595,\n \"acc_stderr\": 0.01694985327921237,\n \
\ \"acc_norm\": 0.22712418300653595,\n \"acc_norm_stderr\": 0.01694985327921237\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.3,\n\
\ \"acc_stderr\": 0.04389311454644286,\n \"acc_norm\": 0.3,\n \
\ \"acc_norm_stderr\": 0.04389311454644286\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.3183673469387755,\n \"acc_stderr\": 0.029822533793982045,\n\
\ \"acc_norm\": 0.3183673469387755,\n \"acc_norm_stderr\": 0.029822533793982045\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n\
\ \"acc_stderr\": 0.030360490154014645,\n \"acc_norm\": 0.24378109452736318,\n\
\ \"acc_norm_stderr\": 0.030360490154014645\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3493975903614458,\n\
\ \"acc_stderr\": 0.0371172519074075,\n \"acc_norm\": 0.3493975903614458,\n\
\ \"acc_norm_stderr\": 0.0371172519074075\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.03377310252209194,\n\
\ \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.03377310252209194\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2729498164014688,\n\
\ \"mc1_stderr\": 0.01559475363200652,\n \"mc2\": 0.4578928664903403,\n\
\ \"mc2_stderr\": 0.015155546755030565\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.4996053670086819,\n \"acc_stderr\": 0.014052481306049516\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009097801364670205,\n \
\ \"acc_stderr\": 0.0026153265107756725\n }\n}\n```"
repo_url: https://huggingface.co/bigcode/starcoderbase-1b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|arc:challenge|25_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|gsm8k|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hellaswag|10_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-14T21-51-42.530406.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-14T21-51-42.530406.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- '**/details_harness|winogrande|5_2024-02-14T21-51-42.530406.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-14T21-51-42.530406.parquet'
- config_name: results
data_files:
- split: 2024_02_14T21_51_42.530406
path:
- results_2024-02-14T21-51-42.530406.parquet
- split: latest
path:
- results_2024-02-14T21-51-42.530406.parquet
---
# Dataset Card for Evaluation run of bigcode/starcoderbase-1b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bigcode__starcoderbase-1b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-14T21:51:42.530406](https://huggingface.co/datasets/open-llm-leaderboard/details_bigcode__starcoderbase-1b/blob/main/results_2024-02-14T21-51-42.530406.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.2656327745815198,
"acc_stderr": 0.03133338710793329,
"acc_norm": 0.26735820509373515,
"acc_norm_stderr": 0.032129928643110324,
"mc1": 0.2729498164014688,
"mc1_stderr": 0.01559475363200652,
"mc2": 0.4578928664903403,
"mc2_stderr": 0.015155546755030565
},
"harness|arc:challenge|25": {
"acc": 0.18686006825938567,
"acc_stderr": 0.011391015649694386,
"acc_norm": 0.22696245733788395,
"acc_norm_stderr": 0.012240491536132863
},
"harness|hellaswag|10": {
"acc": 0.30392352121091415,
"acc_stderr": 0.0045901000501988275,
"acc_norm": 0.3430591515634336,
"acc_norm_stderr": 0.004737608340163395
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768081,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768081
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.038850042458002526,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.038850042458002526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.19736842105263158,
"acc_stderr": 0.03238981601699397,
"acc_norm": 0.19736842105263158,
"acc_norm_stderr": 0.03238981601699397
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.27547169811320754,
"acc_stderr": 0.027495663683724067,
"acc_norm": 0.27547169811320754,
"acc_norm_stderr": 0.027495663683724067
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2361111111111111,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.2361111111111111,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.1907514450867052,
"acc_stderr": 0.029957851329869337,
"acc_norm": 0.1907514450867052,
"acc_norm_stderr": 0.029957851329869337
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.04533838195929776,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.04533838195929776
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3276595744680851,
"acc_stderr": 0.030683020843231004,
"acc_norm": 0.3276595744680851,
"acc_norm_stderr": 0.030683020843231004
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.041424397194893624,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.041424397194893624
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.296551724137931,
"acc_stderr": 0.03806142687309994,
"acc_norm": 0.296551724137931,
"acc_norm_stderr": 0.03806142687309994
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.23544973544973544,
"acc_stderr": 0.02185150982203171,
"acc_norm": 0.23544973544973544,
"acc_norm_stderr": 0.02185150982203171
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.03718489006818115,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.03718489006818115
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2838709677419355,
"acc_stderr": 0.025649381063029268,
"acc_norm": 0.2838709677419355,
"acc_norm_stderr": 0.025649381063029268
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.270935960591133,
"acc_stderr": 0.031270907132976984,
"acc_norm": 0.270935960591133,
"acc_norm_stderr": 0.031270907132976984
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.23030303030303031,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.23030303030303031,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.029857515673386414,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.029857515673386414
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.22797927461139897,
"acc_stderr": 0.03027690994517826,
"acc_norm": 0.22797927461139897,
"acc_norm_stderr": 0.03027690994517826
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.27692307692307694,
"acc_stderr": 0.022688042352424994,
"acc_norm": 0.27692307692307694,
"acc_norm_stderr": 0.022688042352424994
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.027195934804085626,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.027195934804085626
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.23109243697478993,
"acc_stderr": 0.02738140692786897,
"acc_norm": 0.23109243697478993,
"acc_norm_stderr": 0.02738140692786897
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2251655629139073,
"acc_stderr": 0.03410435282008936,
"acc_norm": 0.2251655629139073,
"acc_norm_stderr": 0.03410435282008936
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.23853211009174313,
"acc_stderr": 0.018272575810231863,
"acc_norm": 0.23853211009174313,
"acc_norm_stderr": 0.018272575810231863
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.38425925925925924,
"acc_stderr": 0.03317354514310742,
"acc_norm": 0.38425925925925924,
"acc_norm_stderr": 0.03317354514310742
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.18627450980392157,
"acc_stderr": 0.02732547096671631,
"acc_norm": 0.18627450980392157,
"acc_norm_stderr": 0.02732547096671631
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2616033755274262,
"acc_stderr": 0.028609516716994934,
"acc_norm": 0.2616033755274262,
"acc_norm_stderr": 0.028609516716994934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.31390134529147984,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.31390134529147984,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.1984732824427481,
"acc_stderr": 0.034981493854624734,
"acc_norm": 0.1984732824427481,
"acc_norm_stderr": 0.034981493854624734
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2644628099173554,
"acc_stderr": 0.04026187527591206,
"acc_norm": 0.2644628099173554,
"acc_norm_stderr": 0.04026187527591206
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.3055555555555556,
"acc_stderr": 0.044531975073749834,
"acc_norm": 0.3055555555555556,
"acc_norm_stderr": 0.044531975073749834
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2085889570552147,
"acc_stderr": 0.03192193448934723,
"acc_norm": 0.2085889570552147,
"acc_norm_stderr": 0.03192193448934723
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3125,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.2524271844660194,
"acc_stderr": 0.04301250399690875,
"acc_norm": 0.2524271844660194,
"acc_norm_stderr": 0.04301250399690875
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.1752136752136752,
"acc_stderr": 0.024904439098918218,
"acc_norm": 0.1752136752136752,
"acc_norm_stderr": 0.024904439098918218
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2822477650063857,
"acc_stderr": 0.016095302969878576,
"acc_norm": 0.2822477650063857,
"acc_norm_stderr": 0.016095302969878576
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.23410404624277456,
"acc_stderr": 0.022797110278071145,
"acc_norm": 0.23410404624277456,
"acc_norm_stderr": 0.022797110278071145
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.27124183006535946,
"acc_stderr": 0.025457756696667878,
"acc_norm": 0.27124183006535946,
"acc_norm_stderr": 0.025457756696667878
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2797427652733119,
"acc_stderr": 0.02549425935069491,
"acc_norm": 0.2797427652733119,
"acc_norm_stderr": 0.02549425935069491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.27469135802469136,
"acc_stderr": 0.02483605786829468,
"acc_norm": 0.27469135802469136,
"acc_norm_stderr": 0.02483605786829468
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2730496453900709,
"acc_stderr": 0.02657786094330785,
"acc_norm": 0.2730496453900709,
"acc_norm_stderr": 0.02657786094330785
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.25749674054758803,
"acc_stderr": 0.01116770601490415,
"acc_norm": 0.25749674054758803,
"acc_norm_stderr": 0.01116770601490415
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4375,
"acc_stderr": 0.030134614954403924,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.030134614954403924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.22712418300653595,
"acc_stderr": 0.01694985327921237,
"acc_norm": 0.22712418300653595,
"acc_norm_stderr": 0.01694985327921237
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.3,
"acc_stderr": 0.04389311454644286,
"acc_norm": 0.3,
"acc_norm_stderr": 0.04389311454644286
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.3183673469387755,
"acc_stderr": 0.029822533793982045,
"acc_norm": 0.3183673469387755,
"acc_norm_stderr": 0.029822533793982045
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.030360490154014645,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.030360490154014645
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3493975903614458,
"acc_stderr": 0.0371172519074075,
"acc_norm": 0.3493975903614458,
"acc_norm_stderr": 0.0371172519074075
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.03377310252209194,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.03377310252209194
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2729498164014688,
"mc1_stderr": 0.01559475363200652,
"mc2": 0.4578928664903403,
"mc2_stderr": 0.015155546755030565
},
"harness|winogrande|5": {
"acc": 0.4996053670086819,
"acc_stderr": 0.014052481306049516
},
"harness|gsm8k|5": {
"acc": 0.009097801364670205,
"acc_stderr": 0.0026153265107756725
}
}
```
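The per-task MMLU entries above are typically aggregated into a single score by an unweighted mean of `acc` over the `harness|hendrycksTest-*` tasks. A minimal sketch of that aggregation, assuming the results JSON has been parsed into a Python dict of the same shape (the two MMLU sample values below are copied from the entries above):

```python
def mmlu_average(results: dict) -> float:
    """Unweighted mean of `acc` over all harness|hendrycksTest-* entries."""
    accs = [entry["acc"] for task, entry in results.items()
            if task.startswith("harness|hendrycksTest-")]
    return sum(accs) / len(accs) if accs else float("nan")

# Two entries copied from the results above, plus one non-MMLU task
# that the filter should skip.
sample = {
    "harness|hendrycksTest-management|5": {"acc": 0.2524271844660194},
    "harness|hendrycksTest-marketing|5": {"acc": 0.1752136752136752},
    "harness|winogrande|5": {"acc": 0.4996053670086819},
}
print(mmlu_average(sample))  # mean of the two MMLU entries only
```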
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
irds/car_v1.5_trec-y1_auto | ---
pretty_name: '`car/v1.5/trec-y1/auto`'
viewer: false
source_datasets: ['irds/car_v1.5']
task_categories:
- text-retrieval
---
# Dataset Card for `car/v1.5/trec-y1/auto`
The `car/v1.5/trec-y1/auto` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5/trec-y1/auto).
# Data
This dataset provides:
- `qrels`: (relevance assessments); count=5,820
- For `docs`, use [`irds/car_v1.5`](https://huggingface.co/datasets/irds/car_v1.5)
## Usage
```python
from datasets import load_dataset
qrels = load_dataset('irds/car_v1.5_trec-y1_auto', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
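Each qrels record (with the field shape shown in the snippet above) maps directly onto the standard four-column TREC qrels line, `query_id iteration doc_id relevance`. A minimal sketch; the sample record values below are hypothetical:

```python
def to_trec_qrels_line(record: dict) -> str:
    """Serialize one qrels record as a standard TREC qrels line:
    <query_id> <iteration> <doc_id> <relevance>
    """
    return (f"{record['query_id']} {record['iteration']} "
            f"{record['doc_id']} {record['relevance']}")

# Hypothetical record using the field names from the snippet above.
example = {"query_id": "q1", "doc_id": "d1", "relevance": 1, "iteration": "0"}
print(to_trec_qrels_line(example))  # "q1 0 d1 1"
```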
## Citation Information
```
@inproceedings{Dietz2017TrecCar,
title={TREC Complex Answer Retrieval Overview.},
author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick},
booktitle={TREC},
year={2017}
}
@article{Dietz2017Car,
title={{TREC CAR}: A Data Set for Complex Answer Retrieval},
author={Laura Dietz and Ben Gamari},
year={2017},
note={Version 1.5},
url={http://trec-car.cs.unh.edu}
}
```
|
TuringsSolutions/pFAF2 | ---
license: mit
---
|
dongyoung4091/shp-generated_flan_t5_large | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 6358460
num_examples: 100
download_size: 1586813
dataset_size: 6358460
---
# Dataset Card for "shp-generated_flan_t5_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/fujimiya_konomi_nonnonbiyori | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Fujimiya Konomi
This is the dataset of Fujimiya Konomi, containing 160 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 160 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 389 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 427 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 160 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 160 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 160 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 389 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 389 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 331 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 427 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 427 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
arbml/ESCWA | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 783712001.0
num_examples: 24
download_size: 766073404
dataset_size: 783712001.0
---
# Dataset Card for "ESCWA"
Collected over two days of meetings of the United Nations Economic and Social Commission for West Asia (ESCWA) in 2019. The data includes intrasentential code alternation between Arabic and English. In the case of Algerian, Tunisian, and Moroccan native speakers, the switch is between Arabic and French.
The 2.8-hour ESCWA corpus includes dialectal Arabic, with a Code-Mixing Index (CMI) of ~28%.
More details about ESCWA can be found at https://arabicspeech.org/escwa/.
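The Code-Mixing Index cited above is commonly computed per utterance, following Das and Gambäck's definition, as CMI = 100 · (1 − max_w / (n − u)), where n is the token count, u the number of language-independent tokens, and max_w the token count of the dominant language. A minimal sketch of that computation; the actual token-level language tagger used for ESCWA is not specified here, so `lang_of` is a placeholder:

```python
def code_mixing_index(tokens, lang_of):
    """Per-utterance Code-Mixing Index, in percent.

    tokens:  list of word tokens
    lang_of: placeholder callable mapping a token to a language tag,
             or None for language-independent tokens (names, numbers, ...)
    """
    tags = [lang_of(t) for t in tokens]
    n = len(tags)
    u = sum(1 for tag in tags if tag is None)
    if n == u:  # no language-tagged tokens: no mixing by definition
        return 0.0
    counts = {}
    for tag in tags:
        if tag is not None:
            counts[tag] = counts.get(tag, 0) + 1
    max_w = max(counts.values())  # token count of the dominant language
    return 100.0 * (1.0 - max_w / (n - u))

# Toy example: 3 Arabic-tagged tokens, 1 English-tagged token, 0 untagged.
toy = ["w1", "w2", "w3", "w4"]
tagger = lambda t: "en" if t == "w4" else "ar"
print(code_mixing_index(toy, tagger))  # 100 * (1 - 3/4) = 25.0
```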
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zetavg/CC-100-zh-Hant-merged | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 17882150544
num_examples: 12328228
download_size: 12940914691
dataset_size: 17882150544
---
# CC-100 zh-Hant (Traditional Chinese)
From https://data.statmt.org/cc-100/, only zh-Hant - Chinese (Traditional). Broken into paragraphs, with each paragraph as a row.
Estimated to have around 4B tokens when tokenized with the [`bigscience/bloom`](https://huggingface.co/bigscience/bloom) tokenizer.
There's another version in which the text is split by lines instead of paragraphs: [`zetavg/CC-100-zh-Hant`](https://huggingface.co/datasets/zetavg/CC-100-zh-Hant).
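As a rough sketch of the paragraph-per-row derivation described above, assuming paragraph blocks in the raw shards are separated by blank lines (the exact upstream preprocessing for this dataset may differ):

```python
def split_into_rows(raw_text: str) -> list[str]:
    """Split raw text into non-empty paragraph rows.

    Assumes paragraphs are separated by blank lines; empty blocks
    are dropped.
    """
    rows = []
    for block in raw_text.split("\n\n"):
        block = block.strip()
        if block:
            rows.append(block)
    return rows

sample = "第一段。\n\n第二段。\n\n\n第三段。\n"
print(split_into_rows(sample))  # three paragraph rows
```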
## References
Please cite the following if you found the resources in the CC-100 corpus useful.
* **Unsupervised Cross-lingual Representation Learning at Scale**, *Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov*, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), p. 8440-8451, July 2020, [pdf](https://www.aclweb.org/anthology/2020.acl-main.747.pdf), [bib](https://www.aclweb.org/anthology/2020.acl-main.747.bib) .
* **CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data**, *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave*, Proceedings of the 12th Language Resources and Evaluation Conference (LREC), p. 4003-4012, May 2020, [pdf](https://www.aclweb.org/anthology/2020.lrec-1.494.pdf), [bib](https://www.aclweb.org/anthology/2020.lrec-1.494.bib). |
tr416/dataset_20231007_033400 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74449
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_033400"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lengocquangLAB/Amazon-beverage-reviews | ---
license: unknown
---
|
hun7eee/mydataset | ---
license: mit
---
|
open-llm-leaderboard/details_Deci__DeciLM-7B | ---
pretty_name: Evaluation run of Deci/DeciLM-7B
dataset_summary: "Dataset automatically created during the evaluation run of model [Deci/DeciLM-7B](https://huggingface.co/Deci/DeciLM-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Deci__DeciLM-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T13:05:55.242370](https://huggingface.co/datasets/open-llm-leaderboard/details_Deci__DeciLM-7B/blob/main/results_2023-12-11T13-05-55.242370.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5986461662246719,\n \"acc_stderr\": 0.03322810922254394,\n \"acc_norm\": 0.6014214623320648,\n \"acc_norm_stderr\": 0.03391006890945986,\n \"mc1\": 0.2692778457772338,\n \"mc1_stderr\": 0.015528566637087295,\n \"mc2\": 0.4032625331106103,\n \"mc2_stderr\": 0.01398363920569579\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.552901023890785,\n \"acc_stderr\": 0.014529380160526843,\n \"acc_norm\": 0.5938566552901023,\n \"acc_norm_stderr\": 0.014351656690097862\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6262696673969329,\n \"acc_stderr\": 0.004828045774734898,\n \"acc_norm\": 0.8251344353714399,\n \"acc_norm_stderr\": 0.0037907576465758953\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5407407407407407,\n \"acc_stderr\": 0.04304979692464242,\n \"acc_norm\": 0.5407407407407407,\n \"acc_norm_stderr\": 0.04304979692464242\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.03842498559395268,\n \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.03842498559395268\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6792452830188679,\n \"acc_stderr\": 0.028727502957880267,\n \"acc_norm\": 0.6792452830188679,\n \"acc_norm_stderr\": 0.028727502957880267\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.03852084696008534,\n \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.03852084696008534\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.42,\n 
\"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.630057803468208,\n \"acc_stderr\": 0.0368122963339432,\n \"acc_norm\": 0.630057803468208,\n \"acc_norm_stderr\": 0.0368122963339432\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287533,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287533\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.03267862331014063,\n \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.03267862331014063\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.025010749116137595,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.025010749116137595\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3412698412698413,\n \"acc_stderr\": 
0.04240799327574924,\n \"acc_norm\": 0.3412698412698413,\n \"acc_norm_stderr\": 0.04240799327574924\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7225806451612903,\n \"acc_stderr\": 0.025470196835900055,\n \"acc_norm\": 0.7225806451612903,\n \"acc_norm_stderr\": 0.025470196835900055\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.035107665979592154,\n \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.035107665979592154\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.0347769116216366,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.0347769116216366\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7474747474747475,\n \"acc_stderr\": 0.030954055470365897,\n \"acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.030954055470365897\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.02649905770139746,\n \"acc_norm\": 0.8393782383419689,\n \"acc_norm_stderr\": 0.02649905770139746\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5794871794871795,\n \"acc_stderr\": 0.025028610276710855,\n \"acc_norm\": 0.5794871794871795,\n \"acc_norm_stderr\": 0.025028610276710855\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.029723278961476668,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.029723278961476668\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6260504201680672,\n \"acc_stderr\": 0.03142946637883708,\n \"acc_norm\": 0.6260504201680672,\n \"acc_norm_stderr\": 0.03142946637883708\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3841059602649007,\n \"acc_stderr\": 0.03971301814719197,\n \"acc_norm\": 0.3841059602649007,\n \"acc_norm_stderr\": 0.03971301814719197\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7871559633027523,\n \"acc_stderr\": 0.017549376389313694,\n \"acc_norm\": 0.7871559633027523,\n \"acc_norm_stderr\": 0.017549376389313694\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.47685185185185186,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.47685185185185186,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7598039215686274,\n \"acc_stderr\": 0.02998373305591362,\n \"acc_norm\": 0.7598039215686274,\n \"acc_norm_stderr\": 0.02998373305591362\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6946564885496184,\n \"acc_stderr\": 0.040393149787245605,\n \"acc_norm\": 0.6946564885496184,\n \"acc_norm_stderr\": 0.040393149787245605\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.043733130409147614,\n 
\"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6871165644171779,\n \"acc_stderr\": 0.036429145782924055,\n \"acc_norm\": 0.6871165644171779,\n \"acc_norm_stderr\": 0.036429145782924055\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n \"acc_stderr\": 0.02363687331748927,\n \"acc_norm\": 0.8461538461538461,\n \"acc_norm_stderr\": 0.02363687331748927\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7879948914431673,\n \"acc_stderr\": 0.01461609938583368,\n \"acc_norm\": 0.7879948914431673,\n \"acc_norm_stderr\": 0.01461609938583368\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.025305258131879695,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.025305258131879695\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n \"acc_stderr\": 0.01433352205921789,\n \"acc_norm\": 0.2424581005586592,\n \"acc_norm_stderr\": 0.01433352205921789\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.696078431372549,\n \"acc_stderr\": 0.02633661346904663,\n \"acc_norm\": 0.696078431372549,\n \"acc_norm_stderr\": 0.02633661346904663\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6688102893890675,\n \"acc_stderr\": 0.02673062072800491,\n \"acc_norm\": 
0.6688102893890675,\n \"acc_norm_stderr\": 0.02673062072800491\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409825,\n \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409825\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4645390070921986,\n \"acc_stderr\": 0.029752389657427047,\n \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.029752389657427047\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4276401564537158,\n \"acc_stderr\": 0.012635799922765846,\n \"acc_norm\": 0.4276401564537158,\n \"acc_norm_stderr\": 0.012635799922765846\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6139705882352942,\n \"acc_stderr\": 0.029573269134411124,\n \"acc_norm\": 0.6139705882352942,\n \"acc_norm_stderr\": 0.029573269134411124\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5915032679738562,\n \"acc_stderr\": 0.01988622103750187,\n \"acc_norm\": 0.5915032679738562,\n \"acc_norm_stderr\": 0.01988622103750187\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.02879518557429129,\n \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.02879518557429129\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8208955223880597,\n \"acc_stderr\": 0.027113286753111837,\n \"acc_norm\": 0.8208955223880597,\n \"acc_norm_stderr\": 0.027113286753111837\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 
0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8011695906432749,\n \"acc_stderr\": 0.030611116557432528,\n \"acc_norm\": 0.8011695906432749,\n \"acc_norm_stderr\": 0.030611116557432528\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2692778457772338,\n \"mc1_stderr\": 0.015528566637087295,\n \"mc2\": 0.4032625331106103,\n \"mc2_stderr\": 0.01398363920569579\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7995264404104183,\n \"acc_stderr\": 0.011251958281205083\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.47384382107657314,\n \"acc_stderr\": 0.013753627037255045\n }\n}\n```"
repo_url: https://huggingface.co/Deci/DeciLM-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- '**/details_harness|arc:challenge|25_2023-12-11T13-05-55.242370.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-11T13-05-55.242370.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- '**/details_harness|gsm8k|5_2023-12-11T13-05-55.242370.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-11T13-05-55.242370.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- '**/details_harness|hellaswag|10_2023-12-11T13-05-55.242370.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-11T13-05-55.242370.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-anatomy|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-astronomy|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_biology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_physics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-computer_security|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-econometrics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-global_facts|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-human_aging|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-international_law|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-management|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-marketing|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-nutrition|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-philosophy|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-prehistory|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-professional_law|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-public_relations|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-security_studies|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-sociology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-virology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-world_religions|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-anatomy|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-astronomy|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_biology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-college_physics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-computer_security|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-econometrics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-global_facts|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-human_aging|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-international_law|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-management|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-marketing|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-nutrition|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-philosophy|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-prehistory|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-professional_law|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-public_relations|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-security_studies|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-sociology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-virology|5_2023-12-11T13-05-55.242370.parquet
- >-
**/details_harness|hendrycksTest-world_religions|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-anatomy|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-anatomy|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-astronomy|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-astronomy|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-college_biology|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-college_biology|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-college_physics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-college_physics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-computer_security|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-computer_security|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-econometrics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-econometrics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-global_facts|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-global_facts|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-human_aging|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-human_aging|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-international_law|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-international_law|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-management|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-management|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-marketing|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-marketing|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-nutrition|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-nutrition|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-philosophy|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-philosophy|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-prehistory|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-prehistory|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-professional_law|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-professional_law|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-public_relations|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-public_relations|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-security_studies|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-security_studies|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-sociology|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-sociology|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-virology|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-virology|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- >-
**/details_harness|hendrycksTest-world_religions|5_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- >-
**/details_harness|hendrycksTest-world_religions|5_2023-12-11T13-05-55.242370.parquet
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-11T13-05-55.242370.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-11T13-05-55.242370.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- '**/details_harness|winogrande|5_2023-12-11T13-05-55.242370.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-11T13-05-55.242370.parquet'
- config_name: results
data_files:
- split: 2023_12_11T13_05_55.242370
path:
- results_2023-12-11T13-05-55.242370.parquet
- split: latest
path:
- results_2023-12-11T13-05-55.242370.parquet
---
# Dataset Card for Evaluation run of Deci/DeciLM-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Deci/DeciLM-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Deci/DeciLM-7B](https://huggingface.co/Deci/DeciLM-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
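The timestamped split names listed in the configurations above appear to be derived from the run timestamp by replacing the characters that are not allowed in split names. A minimal sketch of that mapping (an assumption inferred from the names in this card, not an official API):

```python
def timestamp_to_split(ts: str) -> str:
    """Map a run timestamp to the split-name form used in this card.

    Example: "2023-12-11T13:05:55.242370" -> "2023_12_11T13_05_55.242370"
    (dashes in the date and colons in the time become underscores).
    """
    date, time = ts.split("T")
    return date.replace("-", "_") + "T" + time.replace(":", "_")

print(timestamp_to_split("2023-12-11T13:05:55.242370"))
# -> 2023_12_11T13_05_55.242370
```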
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Deci__DeciLM-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-11T13:05:55.242370](https://huggingface.co/datasets/open-llm-leaderboard/details_Deci__DeciLM-7B/blob/main/results_2023-12-11T13-05-55.242370.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5986461662246719,
"acc_stderr": 0.03322810922254394,
"acc_norm": 0.6014214623320648,
"acc_norm_stderr": 0.03391006890945986,
"mc1": 0.2692778457772338,
"mc1_stderr": 0.015528566637087295,
"mc2": 0.4032625331106103,
"mc2_stderr": 0.01398363920569579
},
"harness|arc:challenge|25": {
"acc": 0.552901023890785,
"acc_stderr": 0.014529380160526843,
"acc_norm": 0.5938566552901023,
"acc_norm_stderr": 0.014351656690097862
},
"harness|hellaswag|10": {
"acc": 0.6262696673969329,
"acc_stderr": 0.004828045774734898,
"acc_norm": 0.8251344353714399,
"acc_norm_stderr": 0.0037907576465758953
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5407407407407407,
"acc_stderr": 0.04304979692464242,
"acc_norm": 0.5407407407407407,
"acc_norm_stderr": 0.04304979692464242
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6644736842105263,
"acc_stderr": 0.03842498559395268,
"acc_norm": 0.6644736842105263,
"acc_norm_stderr": 0.03842498559395268
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.028727502957880267,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.028727502957880267
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.03852084696008534,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.03852084696008534
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.630057803468208,
"acc_stderr": 0.0368122963339432,
"acc_norm": 0.630057803468208,
"acc_norm_stderr": 0.0368122963339432
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287533,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287533
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5106382978723404,
"acc_stderr": 0.03267862331014063,
"acc_norm": 0.5106382978723404,
"acc_norm_stderr": 0.03267862331014063
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.046151869625837026,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.046151869625837026
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.025010749116137595,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.025010749116137595
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3412698412698413,
"acc_stderr": 0.04240799327574924,
"acc_norm": 0.3412698412698413,
"acc_norm_stderr": 0.04240799327574924
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7225806451612903,
"acc_stderr": 0.025470196835900055,
"acc_norm": 0.7225806451612903,
"acc_norm_stderr": 0.025470196835900055
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.46798029556650245,
"acc_stderr": 0.035107665979592154,
"acc_norm": 0.46798029556650245,
"acc_norm_stderr": 0.035107665979592154
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.0347769116216366,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.0347769116216366
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7474747474747475,
"acc_stderr": 0.030954055470365897,
"acc_norm": 0.7474747474747475,
"acc_norm_stderr": 0.030954055470365897
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.02649905770139746,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.02649905770139746
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5794871794871795,
"acc_stderr": 0.025028610276710855,
"acc_norm": 0.5794871794871795,
"acc_norm_stderr": 0.025028610276710855
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.029723278961476668,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.029723278961476668
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6260504201680672,
"acc_stderr": 0.03142946637883708,
"acc_norm": 0.6260504201680672,
"acc_norm_stderr": 0.03142946637883708
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3841059602649007,
"acc_stderr": 0.03971301814719197,
"acc_norm": 0.3841059602649007,
"acc_norm_stderr": 0.03971301814719197
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7871559633027523,
"acc_stderr": 0.017549376389313694,
"acc_norm": 0.7871559633027523,
"acc_norm_stderr": 0.017549376389313694
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.47685185185185186,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.47685185185185186,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7598039215686274,
"acc_stderr": 0.02998373305591362,
"acc_norm": 0.7598039215686274,
"acc_norm_stderr": 0.02998373305591362
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.03160295143776679,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.03160295143776679
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6946564885496184,
"acc_stderr": 0.040393149787245605,
"acc_norm": 0.6946564885496184,
"acc_norm_stderr": 0.040393149787245605
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7129629629629629,
"acc_stderr": 0.043733130409147614,
"acc_norm": 0.7129629629629629,
"acc_norm_stderr": 0.043733130409147614
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6871165644171779,
"acc_stderr": 0.036429145782924055,
"acc_norm": 0.6871165644171779,
"acc_norm_stderr": 0.036429145782924055
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8461538461538461,
"acc_stderr": 0.02363687331748927,
"acc_norm": 0.8461538461538461,
"acc_norm_stderr": 0.02363687331748927
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7879948914431673,
"acc_stderr": 0.01461609938583368,
"acc_norm": 0.7879948914431673,
"acc_norm_stderr": 0.01461609938583368
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.025305258131879695,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.025305258131879695
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.01433352205921789,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.01433352205921789
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.696078431372549,
"acc_stderr": 0.02633661346904663,
"acc_norm": 0.696078431372549,
"acc_norm_stderr": 0.02633661346904663
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6688102893890675,
"acc_stderr": 0.02673062072800491,
"acc_norm": 0.6688102893890675,
"acc_norm_stderr": 0.02673062072800491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.026105673861409825,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.026105673861409825
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4645390070921986,
"acc_stderr": 0.029752389657427047,
"acc_norm": 0.4645390070921986,
"acc_norm_stderr": 0.029752389657427047
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4276401564537158,
"acc_stderr": 0.012635799922765846,
"acc_norm": 0.4276401564537158,
"acc_norm_stderr": 0.012635799922765846
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6139705882352942,
"acc_stderr": 0.029573269134411124,
"acc_norm": 0.6139705882352942,
"acc_norm_stderr": 0.029573269134411124
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5915032679738562,
"acc_stderr": 0.01988622103750187,
"acc_norm": 0.5915032679738562,
"acc_norm_stderr": 0.01988622103750187
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.02879518557429129,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.02879518557429129
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111837,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111837
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8011695906432749,
"acc_stderr": 0.030611116557432528,
"acc_norm": 0.8011695906432749,
"acc_norm_stderr": 0.030611116557432528
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2692778457772338,
"mc1_stderr": 0.015528566637087295,
"mc2": 0.4032625331106103,
"mc2_stderr": 0.01398363920569579
},
"harness|winogrande|5": {
"acc": 0.7995264404104183,
"acc_stderr": 0.011251958281205083
},
"harness|gsm8k|5": {
"acc": 0.47384382107657314,
"acc_stderr": 0.013753627037255045
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
hongrui/mimic_chest_xray_v_1 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: report
dtype: string
splits:
- name: train
num_bytes: 2350901047.71
num_examples: 89395
download_size: 2322292341
dataset_size: 2350901047.71
---
# Dataset Card for "mimic_chest_xray_v_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/30ce51a4 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1336
dataset_size: 184
---
# Dataset Card for "30ce51a4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_50_1713114866 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 415794
num_examples: 1025
download_size: 224060
dataset_size: 415794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
derenrich/enwiki-did-you-know | ---
license: cc-by-3.0
language:
- en
tags:
- wikipedia
pretty_name: English Wikipedia Did You Know Corpus
size_categories:
- 10K<n<100K
--- |
ovior/twitter_dataset_1713027004 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2272784
num_examples: 7050
download_size: 1297271
dataset_size: 2272784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tribh/MiningRegs | ---
license: apache-2.0
---
|
marciodiniz7xx/clarinha | ---
license: openrail
---
|
togethercomputer/RedPajama-Data-1T | ---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T
---
### Getting Started
The dataset consists of 2084 jsonl files.
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
Or you can directly download the files using the following command:
```
wget 'https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt'
while read -r line; do
dload_loc=${line#https://data.together.xyz/redpajama-data-1T/v1.0.0/}
mkdir -p "$(dirname "$dload_loc")"
wget "$line" -O "$dload_loc"
done < urls.txt
```
After downloading the files, you can load the dataset from disk by setting the `RED_PAJAMA_DATA_DIR` environment variable to the directory containing the files:
```python
import os
from datasets import load_dataset
os.environ["RED_PAJAMA_DATA_DIR"] = "/path/to/download"
ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
A smaller 1B-token sample of the dataset can be found [here](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```json
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
"red_pajama_subset": "common_crawl" | "c4" | "github" | "books" | "arxiv" | "wikipedia" | "stackexchange"
}
```
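As a quick illustration of this schema, the snippet below groups records by their `red_pajama_subset` field (a sketch with made-up records; in practice they come from `load_dataset` as shown in the Getting Started section):

```python
from collections import Counter

# Hypothetical records following the schema above (illustrative only).
records = [
    {"text": "def add(a, b): return a + b",
     "meta": {"source": "github"},
     "red_pajama_subset": "github"},
    {"text": "The Eiffel Tower is in Paris.",
     "meta": {"source": "wikipedia"},
     "red_pajama_subset": "wikipedia"},
    {"text": "Q: How do I sort a dict? A: Use sorted().",
     "meta": {"source": "stackexchange"},
     "red_pajama_subset": "stackexchange"},
]

# Tally examples per subset -- handy for sanity-checking a partial download.
subset_counts = Counter(r["red_pajama_subset"] for r in records)
print(dict(subset_counts))
```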
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl and run them through the official `cc_net` pipeline.
We then deduplicate at the paragraph level and filter out low-quality text using a linear classifier
trained to distinguish Wikipedia reference paragraphs from random Commoncrawl samples.
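The paragraph-level deduplication step can be sketched as follows (a simplified illustration, not the actual `cc_net` code; the real pipeline also applies the quality classifier mentioned above):

```python
import hashlib

def dedup_paragraphs(documents):
    """Keep only the first occurrence of each paragraph across all documents."""
    seen = set()
    deduped_docs = []
    for doc in documents:
        kept = []
        for para in doc.split("\n\n"):
            # Hash the normalized paragraph so the `seen` set stays compact.
            digest = hashlib.sha1(para.strip().lower().encode("utf-8")).hexdigest()
            if digest not in seen:
                seen.add(digest)
                kept.append(para)
        deduped_docs.append("\n\n".join(kept))
    return deduped_docs

docs = ["Intro paragraph.\n\nShared boilerplate.", "Shared boilerplate.\n\nUnique text."]
print(dedup_paragraphs(docs))
```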
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate at the file level, filter out low-quality
files, and keep only projects distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in a preprocessed format in which hyperlinks, comments, and other
formatting boilerplate have been removed.
#### Gutenberg and Books3
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> The 'book' config is defunct and no longer accessible due to reported copyright infringement for the Book3 dataset contained in this config.</p>
</div>
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester-pays bucket. We keep only LaTeX source files and
remove preambles, comments, macros, and bibliographies.
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we keep only the posts from the 28 largest sites,
remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
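The grouping and ranking step can be sketched like this (a simplified illustration with invented post records, not the actual processing code):

```python
def group_qa_pairs(posts):
    """Group answer posts under their question and order answers by score (descending)."""
    questions = {p["id"]: {"question": p["body"], "answers": []}
                 for p in posts if p["type"] == "question"}
    for p in posts:
        if p["type"] == "answer" and p["parent_id"] in questions:
            questions[p["parent_id"]]["answers"].append(p)
    for q in questions.values():
        q["answers"].sort(key=lambda a: a["score"], reverse=True)
    return questions

posts = [
    {"id": 1, "type": "question", "body": "How do I reverse a list?", "score": 10},
    {"id": 2, "type": "answer", "parent_id": 1, "body": "Use list.reverse().", "score": 3},
    {"id": 3, "type": "answer", "parent_id": 1, "body": "Use reversed() or slicing.", "score": 7},
]
qa = group_qa_pairs(posts)
print(qa[1]["answers"][0]["body"])  # highest-scoring answer first
```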
### SHA256 Checksums
SHA256 checksums for the dataset files for each data source are available here:
```
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/arxiv_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/book_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/c4_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/common_crawl_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/github_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/stackexchange_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/wikipedia_SHA256SUMS.txt
```
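One way to use these lists is to recompute each file's SHA-256 digest locally and compare (a sketch using only the Python standard library, assuming the checksum files follow the usual `<hexdigest>  <path>` layout produced by `sha256sum`):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Stream the file so arbitrarily large shards fit in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(sums_path):
    """Yield (path, ok) for every entry in a SHA256SUMS-style file."""
    with open(sums_path) as f:
        for line in f:
            expected, path = line.split(maxsplit=1)
            yield path.strip(), file_sha256(path.strip()) == expected

# Equivalent one-liner with coreutils:
#   sha256sum -c arxiv_SHA256SUMS.txt
```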
To cite RedPajama, please use:
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
month = apr,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
### License
Please refer to the licenses of the data subsets you use.
* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* GitHub was limited to MIT, BSD, or Apache licenses only
* Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |