datasetId | card |
|---|---|
senhorsapo/spider | ---
license: openrail
---
|
dinhbinh161/vi_text_2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 51531390
num_examples: 528489
download_size: 29226757
dataset_size: 51531390
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vi_text_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
louisbrulenaudet/code-education | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code de l'éducation
source_datasets:
- original
pretty_name: Code de l'éducation
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code de l'éducation, non-instruct (2024-04-15)
This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for legal practice.
Fine-tuning is the process of adapting a pre-trained model to perform specific tasks or cater to particular domains. It involves adjusting the model's parameters through a further round of training on task-specific or domain-specific data. While conventional fine-tuning strategies involve supervised learning with labeled data, instruction-based fine-tuning introduces a more structured and interpretable approach.
Instruction-based fine-tuning leverages the power of human-provided instructions to guide the model's behavior. These instructions can be in the form of text prompts, prompts with explicit task descriptions, or a combination of both. This approach allows for a more controlled and context-aware interaction with the LLM, making it adaptable to a multitude of specialized tasks.
Instruction-based fine-tuning significantly enhances the performance of LLMs in the following ways:
- Task-Specific Adaptation: LLMs, when fine-tuned with specific instructions, exhibit remarkable adaptability to diverse tasks. They can switch seamlessly between translation, summarization, and question-answering, guided by the provided instructions.
- Reduced Ambiguity: Traditional LLMs might generate ambiguous or contextually inappropriate responses. Instruction-based fine-tuning allows for a clearer and more context-aware generation, reducing the likelihood of nonsensical outputs.
- Efficient Knowledge Transfer: Instructions can encapsulate domain-specific knowledge, enabling LLMs to benefit from expert guidance. This knowledge transfer is particularly valuable in fields like tax practice, law, medicine, and more.
- Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
- Adaptive Behavior: LLMs, post instruction-based fine-tuning, exhibit adaptive behavior that is responsive to both explicit task descriptions and implicit cues within the provided text.
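As a concrete illustration of the points above, an instruction-based record can be flattened into a single training prompt before fine-tuning. This is only a sketch: the `build_prompt` helper and the `### Instruction:` template are hypothetical, not a format prescribed by this dataset.

```python
def build_prompt(instruction: str, input_text: str, output: str) -> str:
    """Flatten one instruction/input/output record into a single
    training prompt, using a hypothetical template."""
    prompt = f"### Instruction:\n{instruction}\n"
    if input_text:  # the input field may be empty
        prompt += f"\n### Input:\n{input_text}\n"
    prompt += f"\n### Output:\n{output}"
    return prompt


prompt = build_prompt(
    instruction="Compose l'intégralité de l'article sous forme écrite.",
    input_text="Code de l'éducation, art. <num>",
    output="<texte intégral de l'article>",
)
```

Any consistent template works; what matters for instruction-based fine-tuning is that the instruction is clearly delimited from the input and the expected output.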
## Concurrent reading of the LegalKit
To use all the legal data published on LegalKit, you can use this code snippet:
```python
# -*- coding: utf-8 -*-
import concurrent.futures
import logging

import datasets
from tqdm.notebook import tqdm


def dataset_loader(
    name: str,
    streaming: bool = True
) -> datasets.Dataset:
    """
    Helper function to load a single dataset in parallel.

    Parameters
    ----------
    name : str
        Name of the dataset to be loaded.
    streaming : bool, optional
        Determines if datasets are streamed. Default is True.

    Returns
    -------
    dataset : datasets.Dataset
        Loaded dataset object.

    Raises
    ------
    Exception
        If an error occurs during dataset loading.
    """
    try:
        return datasets.load_dataset(
            name,
            split="train",
            streaming=streaming
        )
    except Exception as exc:
        logging.error(f"Error loading dataset {name}: {exc}")
        return None


def load_datasets(
    req: list,
    streaming: bool = True
) -> list:
    """
    Downloads datasets specified in a list and creates a list of loaded datasets.

    Parameters
    ----------
    req : list
        A list containing the names of datasets to be downloaded.
    streaming : bool, optional
        Determines if datasets are streamed. Default is True.

    Returns
    -------
    datasets_list : list
        A list containing the loaded datasets requested in 'req'.

    Raises
    ------
    Exception
        If an error occurs during dataset loading or processing.

    Examples
    --------
    >>> datasets_list = load_datasets(["dataset1", "dataset2"], streaming=False)
    """
    datasets_list = []

    with concurrent.futures.ThreadPoolExecutor() as executor:
        # Forward the streaming flag to each worker.
        future_to_dataset = {
            executor.submit(dataset_loader, name, streaming): name
            for name in req
        }

        for future in tqdm(concurrent.futures.as_completed(future_to_dataset), total=len(req)):
            name = future_to_dataset[future]
            try:
                dataset = future.result()
                if dataset:
                    datasets_list.append(dataset)
            except Exception as exc:
                logging.error(f"Error processing dataset {name}: {exc}")

    return datasets_list


req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

datasets_list = load_datasets(
    req=req,
    streaming=True
)

dataset = datasets.concatenate_datasets(
    datasets_list
)
```
## Dataset generation
This JSON file is a list of dictionaries; each dictionary contains the following fields:
- `instruction`: `string`, presenting the instruction linked to the element.
- `input`: `string`, signifying the input details for the element.
- `output`: `string`, indicating the output information for the element.
- `start`: `string`, the date of entry into force of the article.
- `expiration`: `string`, the date of expiration of the article.
- `num`: `string`, the id of the article.
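Assuming the field list above, a single element looks like the following sketch (all values are placeholders for illustration; only the instruction string comes from the real instruction list):

```python
# One element of the JSON file, with placeholder values.
record = {
    "instruction": "Compose l'intégralité de l'article sous forme écrite.",
    "input": "Code de l'éducation, art. <num>",
    "output": "<texte intégral de l'article>",
    "start": "2024-04-15",
    "expiration": "<date d'expiration>",
    "num": "<id de l'article>",
}

# Every field is typed as a string, as stated above.
assert all(isinstance(value, str) for value in record.values())
```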
We used the following list of instructions for generating the dataset:
```python
instructions = [
"Compose l'intégralité de l'article sous forme écrite.",
"Écris la totalité du contenu de l'article.",
"Formule la totalité du texte présent dans l'article.",
"Produis l'intégralité de l'article en écriture.",
"Développe l'article dans son ensemble par écrit.",
"Génère l'ensemble du texte contenu dans l'article.",
"Formule le contenu intégral de l'article en entier.",
"Rédige la totalité du texte de l'article en entier.",
"Compose l'intégralité du contenu textuel de l'article.",
"Rédige l'ensemble du texte qui constitue l'article.",
"Formule l'article entier dans son contenu écrit.",
"Composez l'intégralité de l'article sous forme écrite.",
"Écrivez la totalité du contenu de l'article.",
"Formulez la totalité du texte présent dans l'article.",
"Développez l'article dans son ensemble par écrit.",
"Générez l'ensemble du texte contenu dans l'article.",
"Formulez le contenu intégral de l'article en entier.",
"Rédigez la totalité du texte de l'article en entier.",
"Composez l'intégralité du contenu textuel de l'article.",
"Écrivez l'article dans son intégralité en termes de texte.",
"Rédigez l'ensemble du texte qui constitue l'article.",
"Formulez l'article entier dans son contenu écrit.",
"Composer l'intégralité de l'article sous forme écrite.",
"Écrire la totalité du contenu de l'article.",
"Formuler la totalité du texte présent dans l'article.",
"Produire l'intégralité de l'article en écriture.",
"Développer l'article dans son ensemble par écrit.",
"Générer l'ensemble du texte contenu dans l'article.",
"Formuler le contenu intégral de l'article en entier.",
"Rédiger la totalité du texte de l'article en entier.",
"Composer l'intégralité du contenu textuel de l'article.",
"Rédiger l'ensemble du texte qui constitue l'article.",
"Formuler l'article entier dans son contenu écrit.",
"Quelles sont les dispositions de l'article ?",
"Quelles dispositions sont incluses dans l'article ?",
"Quelles sont les dispositions énoncées dans l'article ?",
"Quel est le texte intégral de l'article ?",
"Quelle est la lettre de l'article ?"
]
```
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
mariem1994/nlp_project | ---
license: afl-3.0
task_categories:
- token-classification
language:
- fr
--- |
guyhadad01/Talmud-Hebrew-tok | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 8927832
num_examples: 17302
download_size: 5094023
dataset_size: 8927832
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
iamnguyen/ds_by_sys_prompt_8 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 55085157.81628031
num_examples: 32297
download_size: 21824628
dataset_size: 55085157.81628031
---
# Dataset Card for "ds_by_sys_prompt_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/momiji_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of momiji/秋泉モミジ/红叶 (Blue Archive)
This is the dataset of momiji/秋泉モミジ/红叶 (Blue Archive), containing 84 images and their tags.
The core tags of this character are `green_hair, halo, long_hair, green_eyes, bow, blue_halo, red_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 84 | 114.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momiji_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 84 | 93.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momiji_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 207 | 196.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momiji_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/momiji_bluearchive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, white_coat, animal_hood, hood_up, long_sleeves, solo, white_gloves, blush, fur-trimmed_coat, hooded_coat, looking_at_viewer, fur-trimmed_hood, simple_background, white_background, smile, winter_clothes, closed_mouth, holding, sidelocks, open_mouth, upper_body, twintails |
| 1 | 8 |  |  |  |  |  | 1girl, black_pantyhose, fur-trimmed_coat, hood_up, hooded_coat, long_sleeves, solo, white_coat, animal_hood, full_body, fur-trimmed_boots, blush, looking_at_viewer, standing, white_gloves, closed_mouth, simple_background, holding_gun, rocket_launcher, sidelocks, white_background, bear_hood, brown_footwear, fur-trimmed_hood, open_mouth, paw_gloves, winter_clothes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | white_coat | animal_hood | hood_up | long_sleeves | solo | white_gloves | blush | fur-trimmed_coat | hooded_coat | looking_at_viewer | fur-trimmed_hood | simple_background | white_background | smile | winter_clothes | closed_mouth | holding | sidelocks | open_mouth | upper_body | twintails | black_pantyhose | full_body | fur-trimmed_boots | standing | holding_gun | rocket_launcher | bear_hood | brown_footwear | paw_gloves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:--------------|:----------|:---------------|:-------|:---------------|:--------|:-------------------|:--------------|:--------------------|:-------------------|:--------------------|:-------------------|:--------|:-----------------|:---------------|:----------|:------------|:-------------|:-------------|:------------|:------------------|:------------|:--------------------|:-----------|:--------------|:------------------|:------------|:-----------------|:-------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | | X | X | | | X | X | X | X | X | X | X | X | X |
|
BangumiBase/mankitsuhappening | ---
license: mit
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Bangumi Image Base of Mankitsu Happening
This is the image base of the bangumi Mankitsu Happening. We detected 7 characters and 475 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
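For that kind of manual screening, the extraction step can be sketched as follows. This assumes a per-character package (e.g. `0/dataset.zip`) has already been downloaded, for instance with `huggingface_hub.hf_hub_download`; `extract_package` is a hypothetical helper, not part of this repository:

```python
import os
import zipfile


def extract_package(zip_path: str, out_dir: str) -> list:
    """Extract one character package and return the image files it
    contains, so they can be reviewed for noisy samples by hand."""
    os.makedirs(out_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path, "r") as zf:
        zf.extractall(out_dir)
    return sorted(
        name for name in os.listdir(out_dir)
        if name.lower().endswith((".png", ".jpg", ".jpeg", ".webp"))
    )
```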
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 70 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 58 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 17 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 103 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 88 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 106 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 33 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  | |
bigbio/genia_term_corpus | ---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: GENIA Term Corpus
homepage: http://www.geniaproject.org/genia-corpus/term-corpus
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for GENIA Term Corpus
## Dataset Description
- **Homepage:** http://www.geniaproject.org/genia-corpus/term-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The identification of linguistic expressions referring to entities of interest in molecular biology such as proteins,
genes and cells is a fundamental task in biomolecular text mining. The GENIA technical term annotation covers the
identification of physical biological entities as well as other important terms. The corpus annotation covers the full
1,999 abstracts of the primary GENIA corpus.
## Citation Information
```
@inproceedings{10.5555/1289189.1289260,
author = {Ohta, Tomoko and Tateisi, Yuka and Kim, Jin-Dong},
title = {The GENIA Corpus: An Annotated Research Abstract Corpus in Molecular Biology Domain},
year = {2002},
publisher = {Morgan Kaufmann Publishers Inc.},
address = {San Francisco, CA, USA},
booktitle = {Proceedings of the Second International Conference on Human Language Technology Research},
pages = {82--86},
numpages = {5},
location = {San Diego, California},
series = {HLT '02}
}
@article{Kim2003GENIAC,
title={GENIA corpus - a semantically annotated corpus for bio-textmining},
author={Jin-Dong Kim and Tomoko Ohta and Yuka Tateisi and Junichi Tsujii},
journal={Bioinformatics},
year={2003},
volume={19 Suppl 1},
pages={i180-2}
}
@inproceedings{10.5555/1567594.1567610,
author = {Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel},
title = {Introduction to the Bio-Entity Recognition Task at JNLPBA},
year = {2004},
publisher = {Association for Computational Linguistics},
address = {USA},
booktitle = {Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its
Applications},
pages = {70--75},
numpages = {6},
location = {Geneva, Switzerland},
series = {JNLPBA '04}
}
```
|
hilongjw/view_border | ---
license: mit
---
|
open-llm-leaderboard/details_MaziyarPanahi__MeliodasPercival_01_Experiment26T3q | ---
pretty_name: Evaluation run of MaziyarPanahi/MeliodasPercival_01_Experiment26T3q
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MaziyarPanahi/MeliodasPercival_01_Experiment26T3q](https://huggingface.co/MaziyarPanahi/MeliodasPercival_01_Experiment26T3q)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MaziyarPanahi__MeliodasPercival_01_Experiment26T3q\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-09T10:40:11.649729](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__MeliodasPercival_01_Experiment26T3q/blob/main/results_2024-04-09T10-40-11.649729.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6515210905924909,\n\
\ \"acc_stderr\": 0.032099070270118296,\n \"acc_norm\": 0.6504709983594362,\n\
\ \"acc_norm_stderr\": 0.03277652523666969,\n \"mc1\": 0.631578947368421,\n\
\ \"mc1_stderr\": 0.016886551261046046,\n \"mc2\": 0.7827611805683923,\n\
\ \"mc2_stderr\": 0.01364070041845758\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7158703071672355,\n \"acc_stderr\": 0.013179442447653886,\n\
\ \"acc_norm\": 0.7303754266211604,\n \"acc_norm_stderr\": 0.012968040686869148\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7175861382194781,\n\
\ \"acc_stderr\": 0.004492535748097627,\n \"acc_norm\": 0.8916550487950607,\n\
\ \"acc_norm_stderr\": 0.0031018035745563055\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n\
\ \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n\
\ \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6981132075471698,\n \"acc_stderr\": 0.02825420034443866,\n\
\ \"acc_norm\": 0.6981132075471698,\n \"acc_norm_stderr\": 0.02825420034443866\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\"\
: 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107224,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107224\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41534391534391535,\n \"acc_stderr\": 0.025379524910778394,\n \"\
acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.025379524910778394\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7870967741935484,\n\
\ \"acc_stderr\": 0.02328766512726855,\n \"acc_norm\": 0.7870967741935484,\n\
\ \"acc_norm_stderr\": 0.02328766512726855\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175007,\n\
\ \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175007\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.03287666758603491,\n\
\ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.03287666758603491\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\
: 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328973,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328973\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.658974358974359,\n \"acc_stderr\": 0.02403548967633508,\n \
\ \"acc_norm\": 0.658974358974359,\n \"acc_norm_stderr\": 0.02403548967633508\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3296296296296296,\n \"acc_stderr\": 0.02866120111652457,\n \
\ \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.02866120111652457\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3841059602649007,\n \"acc_stderr\": 0.03971301814719197,\n \"\
acc_norm\": 0.3841059602649007,\n \"acc_norm_stderr\": 0.03971301814719197\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8440366972477065,\n \"acc_stderr\": 0.01555580271359017,\n \"\
acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.01555580271359017\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n \"\
acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931048,\n \"\
acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931048\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8143459915611815,\n \"acc_stderr\": 0.025310495376944856,\n \
\ \"acc_norm\": 0.8143459915611815,\n \"acc_norm_stderr\": 0.025310495376944856\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n\
\ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n\
\ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159465,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159465\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8212005108556832,\n\
\ \"acc_stderr\": 0.013702643715368983,\n \"acc_norm\": 0.8212005108556832,\n\
\ \"acc_norm_stderr\": 0.013702643715368983\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.02394851290546836,\n\
\ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.02394851290546836\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.42681564245810055,\n\
\ \"acc_stderr\": 0.016542401954631917,\n \"acc_norm\": 0.42681564245810055,\n\
\ \"acc_norm_stderr\": 0.016542401954631917\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.02555316999182652,\n\
\ \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.02555316999182652\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\
\ \"acc_stderr\": 0.026003301117885135,\n \"acc_norm\": 0.7009646302250804,\n\
\ \"acc_norm_stderr\": 0.026003301117885135\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135114,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135114\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47327249022164275,\n\
\ \"acc_stderr\": 0.01275197796767601,\n \"acc_norm\": 0.47327249022164275,\n\
\ \"acc_norm_stderr\": 0.01275197796767601\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n\
\ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706204,\n \
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706204\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.025870646766169136,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.025870646766169136\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.631578947368421,\n\
\ \"mc1_stderr\": 0.016886551261046046,\n \"mc2\": 0.7827611805683923,\n\
\ \"mc2_stderr\": 0.01364070041845758\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8492501973164956,\n \"acc_stderr\": 0.010056094631479674\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7043214556482184,\n \
\ \"acc_stderr\": 0.012570068947898772\n }\n}\n```"
repo_url: https://huggingface.co/MaziyarPanahi/MeliodasPercival_01_Experiment26T3q
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|arc:challenge|25_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|gsm8k|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hellaswag|10_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T10-40-11.649729.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T10-40-11.649729.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- '**/details_harness|winogrande|5_2024-04-09T10-40-11.649729.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-09T10-40-11.649729.parquet'
- config_name: results
data_files:
- split: 2024_04_09T10_40_11.649729
path:
- results_2024-04-09T10-40-11.649729.parquet
- split: latest
path:
- results_2024-04-09T10-40-11.649729.parquet
---
# Dataset Card for Evaluation run of MaziyarPanahi/MeliodasPercival_01_Experiment26T3q
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MaziyarPanahi/MeliodasPercival_01_Experiment26T3q](https://huggingface.co/MaziyarPanahi/MeliodasPercival_01_Experiment26T3q) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MaziyarPanahi__MeliodasPercival_01_Experiment26T3q",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-04-09T10:40:11.649729](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__MeliodasPercival_01_Experiment26T3q/blob/main/results_2024-04-09T10-40-11.649729.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6515210905924909,
"acc_stderr": 0.032099070270118296,
"acc_norm": 0.6504709983594362,
"acc_norm_stderr": 0.03277652523666969,
"mc1": 0.631578947368421,
"mc1_stderr": 0.016886551261046046,
"mc2": 0.7827611805683923,
"mc2_stderr": 0.01364070041845758
},
"harness|arc:challenge|25": {
"acc": 0.7158703071672355,
"acc_stderr": 0.013179442447653886,
"acc_norm": 0.7303754266211604,
"acc_norm_stderr": 0.012968040686869148
},
"harness|hellaswag|10": {
"acc": 0.7175861382194781,
"acc_stderr": 0.004492535748097627,
"acc_norm": 0.8916550487950607,
"acc_norm_stderr": 0.0031018035745563055
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6981132075471698,
"acc_stderr": 0.02825420034443866,
"acc_norm": 0.6981132075471698,
"acc_norm_stderr": 0.02825420034443866
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.025379524910778394,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.025379524910778394
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7870967741935484,
"acc_stderr": 0.02328766512726855,
"acc_norm": 0.7870967741935484,
"acc_norm_stderr": 0.02328766512726855
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175007,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175007
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.03287666758603491,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.03287666758603491
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.658974358974359,
"acc_stderr": 0.02403548967633508,
"acc_norm": 0.658974358974359,
"acc_norm_stderr": 0.02403548967633508
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.02866120111652457,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.02866120111652457
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3841059602649007,
"acc_stderr": 0.03971301814719197,
"acc_norm": 0.3841059602649007,
"acc_norm_stderr": 0.03971301814719197
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.01555580271359017,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.01555580271359017
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931048,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931048
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.025310495376944856,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.025310495376944856
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159465,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159465
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8212005108556832,
"acc_stderr": 0.013702643715368983,
"acc_norm": 0.8212005108556832,
"acc_norm_stderr": 0.013702643715368983
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.02394851290546836,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.02394851290546836
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.42681564245810055,
"acc_stderr": 0.016542401954631917,
"acc_norm": 0.42681564245810055,
"acc_norm_stderr": 0.016542401954631917
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.02555316999182652,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.02555316999182652
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.026003301117885135,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.026003301117885135
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.024477222856135114,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.024477222856135114
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47327249022164275,
"acc_stderr": 0.01275197796767601,
"acc_norm": 0.47327249022164275,
"acc_norm_stderr": 0.01275197796767601
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.018824219512706204,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.018824219512706204
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169136,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169136
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.631578947368421,
"mc1_stderr": 0.016886551261046046,
"mc2": 0.7827611805683923,
"mc2_stderr": 0.01364070041845758
},
"harness|winogrande|5": {
"acc": 0.8492501973164956,
"acc_stderr": 0.010056094631479674
},
"harness|gsm8k|5": {
"acc": 0.7043214556482184,
"acc_stderr": 0.012570068947898772
}
}
```
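For programmatic comparison, the aggregated metrics above can be flattened into a per-task table; a minimal sketch in Python (the `results` dict below is abridged from the JSON above, and in practice would come from the loaded `results` configuration):

```python
# Extract per-task accuracies from a results dict of the shape shown above.
# This sample is abridged from the JSON results; the full dict would normally
# come from the "results" configuration of the dataset.
results = {
    "all": {"acc": 0.6515210905924909, "acc_norm": 0.6504709983594362},
    "harness|arc:challenge|25": {"acc": 0.7158703071672355, "acc_norm": 0.7303754266211604},
    "harness|hellaswag|10": {"acc": 0.7175861382194781, "acc_norm": 0.8916550487950607},
    "harness|winogrande|5": {"acc": 0.8492501973164956},
}

# Collect the plain accuracy of every individual task (skipping the "all" aggregate).
task_acc = {task: m["acc"] for task, m in results.items() if task != "all"}
best_task = max(task_acc, key=task_acc.get)
print(best_task, task_acc[best_task])
```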
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
NovelSense/syntra-experiment-dataset | ---
license: cc-by-sa-4.0
task_categories:
- object-detection
tags:
- traffic
- vehicles
- car
- synthetic
- mobility
- infrastructure
pretty_name: SYNTRA Experiment Dataset
size_categories:
- 1K<n<10K
---
# About
This is the *SYNTRA Experiment Dataset*, a sample dataset from the NovelSense SYNTRA EU Hubs 4 Data experiment (https://euhubs4data.eu/experiments/syntra/). The experiment supported the development of a web application reachable at https://syntra.app. The dataset is a synthetic traffic infrastructure dataset, e.g. for the validation, training, and optimization of your traffic AI models.
# Dataset description
The dataset has been created by generating 14 different visualization configurations. These include the color spectrum of cars, camera noise, background, and driving trajectories, among others. The dataset consists of png and xml files.
Each png file has a corresponding xml file which contains the annotation information in PascalVOC format.
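Since the annotations follow the standard PascalVOC layout, they can be read with the Python standard library alone; a minimal sketch (the sample annotation below is hypothetical and only illustrates the usual `object`/`bndbox` fields, so it may not match this dataset's files field for field):

```python
import xml.etree.ElementTree as ET

# Minimal PascalVOC annotation, standing in for one of the dataset's xml files.
sample = """<annotation>
  <filename>ABCDEF-1-2_frame_3.png</filename>
  <object>
    <name>car</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>80</ymax></bndbox>
  </object>
</annotation>"""

root = ET.fromstring(sample)
boxes = []
for obj in root.iter("object"):
    bb = obj.find("bndbox")
    # Each entry: (class name, xmin, ymin, xmax, ymax)
    boxes.append((
        obj.findtext("name"),
        int(bb.findtext("xmin")), int(bb.findtext("ymin")),
        int(bb.findtext("xmax")), int(bb.findtext("ymax")),
    ))
print(boxes)  # [('car', 10, 20, 110, 80)]
```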
The structure of the png and xml file names is as follows:
`XXXXXX-C-M_frame_F.(png|xml)`
* XXXXXX -- string encoding of configuration
* C -- number of the configuration
* M -- video in this configuration
* F -- frame number in this video
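The naming scheme above can be unpacked with a small regular expression; a minimal sketch in Python (the example file name is hypothetical, and the pattern assumes the configuration string contains no dashes):

```python
import re

# Pattern for XXXXXX-C-M_frame_F.(png|xml): configuration string,
# configuration number, video number, frame number, and file extension.
NAME_RE = re.compile(
    r"^(?P<config>[^-]+)-(?P<c>\d+)-(?P<m>\d+)_frame_(?P<f>\d+)\.(?P<ext>png|xml)$"
)

def parse_name(name):
    """Split a dataset file name into its documented components."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    d = m.groupdict()
    return d["config"], int(d["c"]), int(d["m"]), int(d["f"]), d["ext"]

# Hypothetical example file name following the documented scheme.
print(parse_name("ABCDEF-3-1_frame_42.png"))
```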
# Limitations
The dataset was generated using a development version of SYNTRA and contains only cars.
# License
SYNTRA Experiment Dataset © 2023 by NovelSense UG is licensed under CC BY-SA 4.0
(https://creativecommons.org/licenses/by-sa/4.0/) |
open-llm-leaderboard/details_argilla__notux-8x7b-v1-epoch-2 | ---
pretty_name: Evaluation run of argilla/notux-8x7b-v1-epoch-2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [argilla/notux-8x7b-v1-epoch-2](https://huggingface.co/argilla/notux-8x7b-v1-epoch-2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_argilla__notux-8x7b-v1-epoch-2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-06T07:23:08.510905](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__notux-8x7b-v1-epoch-2/blob/main/results_2024-01-06T07-23-08.510905.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7132510295468097,\n\
\ \"acc_stderr\": 0.030137639590982482,\n \"acc_norm\": 0.7169084121358973,\n\
\ \"acc_norm_stderr\": 0.030719998582647873,\n \"mc1\": 0.5140758873929009,\n\
\ \"mc1_stderr\": 0.01749656371704278,\n \"mc2\": 0.6596774083234566,\n\
\ \"mc2_stderr\": 0.015018146932027448\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6808873720136519,\n \"acc_stderr\": 0.013621696119173304,\n\
\ \"acc_norm\": 0.7064846416382252,\n \"acc_norm_stderr\": 0.01330725044494111\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6900019916351324,\n\
\ \"acc_stderr\": 0.0046154722103160396,\n \"acc_norm\": 0.8780123481378211,\n\
\ \"acc_norm_stderr\": 0.0032660269509226414\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.674074074074074,\n\
\ \"acc_stderr\": 0.040491220417025055,\n \"acc_norm\": 0.674074074074074,\n\
\ \"acc_norm_stderr\": 0.040491220417025055\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7960526315789473,\n \"acc_stderr\": 0.03279000406310049,\n\
\ \"acc_norm\": 0.7960526315789473,\n \"acc_norm_stderr\": 0.03279000406310049\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.72,\n\
\ \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \
\ \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7773584905660378,\n \"acc_stderr\": 0.025604233470899095,\n\
\ \"acc_norm\": 0.7773584905660378,\n \"acc_norm_stderr\": 0.025604233470899095\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8402777777777778,\n\
\ \"acc_stderr\": 0.030635578972093278,\n \"acc_norm\": 0.8402777777777778,\n\
\ \"acc_norm_stderr\": 0.030635578972093278\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n\
\ \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.7514450867052023,\n \"acc_stderr\": 0.03295304696818318,\n\
\ \"acc_norm\": 0.7514450867052023,\n \"acc_norm_stderr\": 0.03295304696818318\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n\
\ \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n\
\ \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653695,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653695\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6851063829787234,\n\
\ \"acc_stderr\": 0.03036358219723817,\n \"acc_norm\": 0.6851063829787234,\n\
\ \"acc_norm_stderr\": 0.03036358219723817\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.5964912280701754,\n \"acc_stderr\": 0.04615186962583707,\n\
\ \"acc_norm\": 0.5964912280701754,\n \"acc_norm_stderr\": 0.04615186962583707\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.6551724137931034,\n \"acc_stderr\": 0.03960933549451208,\n \"\
acc_norm\": 0.6551724137931034,\n \"acc_norm_stderr\": 0.03960933549451208\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.47354497354497355,\n \"acc_stderr\": 0.025715239811346758,\n \"\
acc_norm\": 0.47354497354497355,\n \"acc_norm_stderr\": 0.025715239811346758\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5158730158730159,\n\
\ \"acc_stderr\": 0.044698818540726076,\n \"acc_norm\": 0.5158730158730159,\n\
\ \"acc_norm_stderr\": 0.044698818540726076\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8516129032258064,\n \"acc_stderr\": 0.020222737554330378,\n \"\
acc_norm\": 0.8516129032258064,\n \"acc_norm_stderr\": 0.020222737554330378\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.6157635467980296,\n \"acc_stderr\": 0.03422398565657551,\n \"\
acc_norm\": 0.6157635467980296,\n \"acc_norm_stderr\": 0.03422398565657551\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\"\
: 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.03123475237772117,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.03123475237772117\n },\n\
\ \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n\
\ \"acc_stderr\": 0.024063156416822523,\n \"acc_norm\": 0.8686868686868687,\n\
\ \"acc_norm_stderr\": 0.024063156416822523\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\"\
: {\n \"acc\": 0.9585492227979274,\n \"acc_stderr\": 0.01438543285747646,\n\
\ \"acc_norm\": 0.9585492227979274,\n \"acc_norm_stderr\": 0.01438543285747646\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7025641025641025,\n \"acc_stderr\": 0.023177408131465946,\n\
\ \"acc_norm\": 0.7025641025641025,\n \"acc_norm_stderr\": 0.023177408131465946\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3814814814814815,\n \"acc_stderr\": 0.029616718927497582,\n \
\ \"acc_norm\": 0.3814814814814815,\n \"acc_norm_stderr\": 0.029616718927497582\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8109243697478992,\n \"acc_stderr\": 0.025435119438105364,\n\
\ \"acc_norm\": 0.8109243697478992,\n \"acc_norm_stderr\": 0.025435119438105364\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.47019867549668876,\n \"acc_stderr\": 0.040752249922169775,\n \"\
acc_norm\": 0.47019867549668876,\n \"acc_norm_stderr\": 0.040752249922169775\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8899082568807339,\n \"acc_stderr\": 0.013419939018681203,\n \"\
acc_norm\": 0.8899082568807339,\n \"acc_norm_stderr\": 0.013419939018681203\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5925925925925926,\n \"acc_stderr\": 0.033509916046960436,\n \"\
acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.033509916046960436\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8529411764705882,\n \"acc_stderr\": 0.024857478080250447,\n \"\
acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.024857478080250447\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8607594936708861,\n \"acc_stderr\": 0.0225355263526927,\n \
\ \"acc_norm\": 0.8607594936708861,\n \"acc_norm_stderr\": 0.0225355263526927\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.757847533632287,\n\
\ \"acc_stderr\": 0.028751392398694755,\n \"acc_norm\": 0.757847533632287,\n\
\ \"acc_norm_stderr\": 0.028751392398694755\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.03498149385462469,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.03498149385462469\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035202,\n \"\
acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035202\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n\
\ \"acc_stderr\": 0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n\
\ \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7975460122699386,\n \"acc_stderr\": 0.03157065078911899,\n\
\ \"acc_norm\": 0.7975460122699386,\n \"acc_norm_stderr\": 0.03157065078911899\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5982142857142857,\n\
\ \"acc_stderr\": 0.04653333146973647,\n \"acc_norm\": 0.5982142857142857,\n\
\ \"acc_norm_stderr\": 0.04653333146973647\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n\
\ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9230769230769231,\n\
\ \"acc_stderr\": 0.017456987872436193,\n \"acc_norm\": 0.9230769230769231,\n\
\ \"acc_norm_stderr\": 0.017456987872436193\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.78,\n \"acc_stderr\": 0.041633319989322626,\n \
\ \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.041633319989322626\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8825031928480205,\n\
\ \"acc_stderr\": 0.011515102251977221,\n \"acc_norm\": 0.8825031928480205,\n\
\ \"acc_norm_stderr\": 0.011515102251977221\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7774566473988439,\n \"acc_stderr\": 0.02239421566194282,\n\
\ \"acc_norm\": 0.7774566473988439,\n \"acc_norm_stderr\": 0.02239421566194282\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.46368715083798884,\n\
\ \"acc_stderr\": 0.01667834189453317,\n \"acc_norm\": 0.46368715083798884,\n\
\ \"acc_norm_stderr\": 0.01667834189453317\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8169934640522876,\n \"acc_stderr\": 0.02214076751288094,\n\
\ \"acc_norm\": 0.8169934640522876,\n \"acc_norm_stderr\": 0.02214076751288094\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7909967845659164,\n\
\ \"acc_stderr\": 0.023093140398374224,\n \"acc_norm\": 0.7909967845659164,\n\
\ \"acc_norm_stderr\": 0.023093140398374224\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8302469135802469,\n \"acc_stderr\": 0.02088869041409387,\n\
\ \"acc_norm\": 0.8302469135802469,\n \"acc_norm_stderr\": 0.02088869041409387\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5567375886524822,\n \"acc_stderr\": 0.029634838473766002,\n \
\ \"acc_norm\": 0.5567375886524822,\n \"acc_norm_stderr\": 0.029634838473766002\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5488917861799217,\n\
\ \"acc_stderr\": 0.012709037347346233,\n \"acc_norm\": 0.5488917861799217,\n\
\ \"acc_norm_stderr\": 0.012709037347346233\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7941176470588235,\n \"acc_stderr\": 0.02456220431414231,\n\
\ \"acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.02456220431414231\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7630718954248366,\n \"acc_stderr\": 0.017201662169789793,\n \
\ \"acc_norm\": 0.7630718954248366,\n \"acc_norm_stderr\": 0.017201662169789793\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7272727272727273,\n\
\ \"acc_stderr\": 0.04265792110940588,\n \"acc_norm\": 0.7272727272727273,\n\
\ \"acc_norm_stderr\": 0.04265792110940588\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7836734693877551,\n \"acc_stderr\": 0.026358916334904028,\n\
\ \"acc_norm\": 0.7836734693877551,\n \"acc_norm_stderr\": 0.026358916334904028\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8905472636815921,\n\
\ \"acc_stderr\": 0.02207632610182466,\n \"acc_norm\": 0.8905472636815921,\n\
\ \"acc_norm_stderr\": 0.02207632610182466\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \
\ \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8947368421052632,\n \"acc_stderr\": 0.02353755765789256,\n\
\ \"acc_norm\": 0.8947368421052632,\n \"acc_norm_stderr\": 0.02353755765789256\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5140758873929009,\n\
\ \"mc1_stderr\": 0.01749656371704278,\n \"mc2\": 0.6596774083234566,\n\
\ \"mc2_stderr\": 0.015018146932027448\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8208366219415943,\n \"acc_stderr\": 0.010777949156047984\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6034874905231236,\n \
\ \"acc_stderr\": 0.013474258584033338\n }\n}\n```"
repo_url: https://huggingface.co/argilla/notux-8x7b-v1-epoch-2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|arc:challenge|25_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|gsm8k|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hellaswag|10_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T07-23-08.510905.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T07-23-08.510905.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- '**/details_harness|winogrande|5_2024-01-06T07-23-08.510905.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-06T07-23-08.510905.parquet'
- config_name: results
data_files:
- split: 2024_01_06T07_23_08.510905
path:
- results_2024-01-06T07-23-08.510905.parquet
- split: latest
path:
- results_2024-01-06T07-23-08.510905.parquet
---
# Dataset Card for Evaluation run of argilla/notux-8x7b-v1-epoch-2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [argilla/notux-8x7b-v1-epoch-2](https://huggingface.co/argilla/notux-8x7b-v1-epoch-2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_argilla__notux-8x7b-v1-epoch-2",
"harness_winogrande_5",
split="train")
```
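Since timestamped splits use a zero-padded `YYYY_MM_DDTHH_MM_SS.ffffff` format, lexicographic order matches chronological order, so the newest run can be resolved without relying on the `latest` alias. Below is a minimal sketch; `latest_split` is a hypothetical helper, not part of the `datasets` API:

```python
def latest_split(split_names):
    """Return the most recent timestamped split name.

    Assumes names like "2024_01_06T07_23_08.510905", where the zero-padded
    format makes lexicographic comparison equivalent to chronological order.
    The "latest" alias is skipped so only real timestamps are compared.
    """
    stamped = [s for s in split_names if s != "latest"]
    return max(stamped)

splits = ["2024_01_05T10_00_00.000000", "2024_01_06T07_23_08.510905", "latest"]
print(latest_split(splits))  # → 2024_01_06T07_23_08.510905
```

This mirrors how the `latest` split alias in each configuration is kept pointing at the most recent evaluation run.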
## Latest results

These are the [latest results from run 2024-01-06T07:23:08.510905](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__notux-8x7b-v1-epoch-2/blob/main/results_2024-01-06T07-23-08.510905.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each task's results in its "latest" split):
```python
{
"all": {
"acc": 0.7132510295468097,
"acc_stderr": 0.030137639590982482,
"acc_norm": 0.7169084121358973,
"acc_norm_stderr": 0.030719998582647873,
"mc1": 0.5140758873929009,
"mc1_stderr": 0.01749656371704278,
"mc2": 0.6596774083234566,
"mc2_stderr": 0.015018146932027448
},
"harness|arc:challenge|25": {
"acc": 0.6808873720136519,
"acc_stderr": 0.013621696119173304,
"acc_norm": 0.7064846416382252,
"acc_norm_stderr": 0.01330725044494111
},
"harness|hellaswag|10": {
"acc": 0.6900019916351324,
"acc_stderr": 0.0046154722103160396,
"acc_norm": 0.8780123481378211,
"acc_norm_stderr": 0.0032660269509226414
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.674074074074074,
"acc_stderr": 0.040491220417025055,
"acc_norm": 0.674074074074074,
"acc_norm_stderr": 0.040491220417025055
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7960526315789473,
"acc_stderr": 0.03279000406310049,
"acc_norm": 0.7960526315789473,
"acc_norm_stderr": 0.03279000406310049
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7773584905660378,
"acc_stderr": 0.025604233470899095,
"acc_norm": 0.7773584905660378,
"acc_norm_stderr": 0.025604233470899095
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8402777777777778,
"acc_stderr": 0.030635578972093278,
"acc_norm": 0.8402777777777778,
"acc_norm_stderr": 0.030635578972093278
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7514450867052023,
"acc_stderr": 0.03295304696818318,
"acc_norm": 0.7514450867052023,
"acc_norm_stderr": 0.03295304696818318
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653695,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653695
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6851063829787234,
"acc_stderr": 0.03036358219723817,
"acc_norm": 0.6851063829787234,
"acc_norm_stderr": 0.03036358219723817
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5964912280701754,
"acc_stderr": 0.04615186962583707,
"acc_norm": 0.5964912280701754,
"acc_norm_stderr": 0.04615186962583707
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6551724137931034,
"acc_stderr": 0.03960933549451208,
"acc_norm": 0.6551724137931034,
"acc_norm_stderr": 0.03960933549451208
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47354497354497355,
"acc_stderr": 0.025715239811346758,
"acc_norm": 0.47354497354497355,
"acc_norm_stderr": 0.025715239811346758
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5158730158730159,
"acc_stderr": 0.044698818540726076,
"acc_norm": 0.5158730158730159,
"acc_norm_stderr": 0.044698818540726076
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8516129032258064,
"acc_stderr": 0.020222737554330378,
"acc_norm": 0.8516129032258064,
"acc_norm_stderr": 0.020222737554330378
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6157635467980296,
"acc_stderr": 0.03422398565657551,
"acc_norm": 0.6157635467980296,
"acc_norm_stderr": 0.03422398565657551
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8,
"acc_stderr": 0.03123475237772117,
"acc_norm": 0.8,
"acc_norm_stderr": 0.03123475237772117
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.024063156416822523,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.024063156416822523
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9585492227979274,
"acc_stderr": 0.01438543285747646,
"acc_norm": 0.9585492227979274,
"acc_norm_stderr": 0.01438543285747646
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7025641025641025,
"acc_stderr": 0.023177408131465946,
"acc_norm": 0.7025641025641025,
"acc_norm_stderr": 0.023177408131465946
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3814814814814815,
"acc_stderr": 0.029616718927497582,
"acc_norm": 0.3814814814814815,
"acc_norm_stderr": 0.029616718927497582
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8109243697478992,
"acc_stderr": 0.025435119438105364,
"acc_norm": 0.8109243697478992,
"acc_norm_stderr": 0.025435119438105364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.47019867549668876,
"acc_stderr": 0.040752249922169775,
"acc_norm": 0.47019867549668876,
"acc_norm_stderr": 0.040752249922169775
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8899082568807339,
"acc_stderr": 0.013419939018681203,
"acc_norm": 0.8899082568807339,
"acc_norm_stderr": 0.013419939018681203
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.033509916046960436,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.033509916046960436
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.024857478080250447,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.024857478080250447
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8607594936708861,
"acc_stderr": 0.0225355263526927,
"acc_norm": 0.8607594936708861,
"acc_norm_stderr": 0.0225355263526927
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.757847533632287,
"acc_stderr": 0.028751392398694755,
"acc_norm": 0.757847533632287,
"acc_norm_stderr": 0.028751392398694755
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.03498149385462469,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.03498149385462469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.030083098716035202,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.030083098716035202
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.036809181416738807,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.036809181416738807
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7975460122699386,
"acc_stderr": 0.03157065078911899,
"acc_norm": 0.7975460122699386,
"acc_norm_stderr": 0.03157065078911899
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5982142857142857,
"acc_stderr": 0.04653333146973647,
"acc_norm": 0.5982142857142857,
"acc_norm_stderr": 0.04653333146973647
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9230769230769231,
"acc_stderr": 0.017456987872436193,
"acc_norm": 0.9230769230769231,
"acc_norm_stderr": 0.017456987872436193
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.78,
"acc_stderr": 0.041633319989322626,
"acc_norm": 0.78,
"acc_norm_stderr": 0.041633319989322626
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8825031928480205,
"acc_stderr": 0.011515102251977221,
"acc_norm": 0.8825031928480205,
"acc_norm_stderr": 0.011515102251977221
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7774566473988439,
"acc_stderr": 0.02239421566194282,
"acc_norm": 0.7774566473988439,
"acc_norm_stderr": 0.02239421566194282
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.46368715083798884,
"acc_stderr": 0.01667834189453317,
"acc_norm": 0.46368715083798884,
"acc_norm_stderr": 0.01667834189453317
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8169934640522876,
"acc_stderr": 0.02214076751288094,
"acc_norm": 0.8169934640522876,
"acc_norm_stderr": 0.02214076751288094
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7909967845659164,
"acc_stderr": 0.023093140398374224,
"acc_norm": 0.7909967845659164,
"acc_norm_stderr": 0.023093140398374224
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8302469135802469,
"acc_stderr": 0.02088869041409387,
"acc_norm": 0.8302469135802469,
"acc_norm_stderr": 0.02088869041409387
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5567375886524822,
"acc_stderr": 0.029634838473766002,
"acc_norm": 0.5567375886524822,
"acc_norm_stderr": 0.029634838473766002
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5488917861799217,
"acc_stderr": 0.012709037347346233,
"acc_norm": 0.5488917861799217,
"acc_norm_stderr": 0.012709037347346233
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.02456220431414231,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.02456220431414231
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7630718954248366,
"acc_stderr": 0.017201662169789793,
"acc_norm": 0.7630718954248366,
"acc_norm_stderr": 0.017201662169789793
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04265792110940588,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04265792110940588
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7836734693877551,
"acc_stderr": 0.026358916334904028,
"acc_norm": 0.7836734693877551,
"acc_norm_stderr": 0.026358916334904028
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8905472636815921,
"acc_stderr": 0.02207632610182466,
"acc_norm": 0.8905472636815921,
"acc_norm_stderr": 0.02207632610182466
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8947368421052632,
"acc_stderr": 0.02353755765789256,
"acc_norm": 0.8947368421052632,
"acc_norm_stderr": 0.02353755765789256
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5140758873929009,
"mc1_stderr": 0.01749656371704278,
"mc2": 0.6596774083234566,
"mc2_stderr": 0.015018146932027448
},
"harness|winogrande|5": {
"acc": 0.8208366219415943,
"acc_stderr": 0.010777949156047984
},
"harness|gsm8k|5": {
"acc": 0.6034874905231236,
"acc_stderr": 0.013474258584033338
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
breno30/wandaRV | ---
license: openrail
---
|
trl-internal-testing/descriptiveness-sentiment-trl-style | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: descriptiveness
num_bytes: 4730435
num_examples: 5425
- name: sentiment
num_bytes: 4753415
num_examples: 5480
download_size: 6210965
dataset_size: 9483850
configs:
- config_name: default
data_files:
- split: descriptiveness
path: data/descriptiveness-*
- split: sentiment
path: data/sentiment-*
---
# TRL's Sentiment and Descriptiveness Preference Dataset
The dataset comes from https://arxiv.org/abs/1909.08593, one of the earliest RLHF works from OpenAI.
We preprocess the dataset using our standard `prompt, chosen, rejected` format.
## Reproduce this dataset
1. Download `descriptiveness_sentiment.py` from https://huggingface.co/datasets/trl-internal-testing/descriptiveness-sentiment-trl-style/tree/0.1.0.
2. Run `python examples/datasets/descriptiveness_sentiment.py --push_to_hub --hf_entity trl-internal-testing`
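The `prompt, chosen, rejected` layout can be illustrated with a hypothetical record (the text below is invented; only the field structure follows the feature schema in the YAML above):

```python
# Hypothetical record illustrating the prompt/chosen/rejected layout
# declared in the feature schema above (the content itself is made up).
record = {
    "prompt": "The movie was",
    "chosen": [
        {"role": "user", "content": "The movie was"},
        {"role": "assistant", "content": " a delight from start to finish."},
    ],
    "rejected": [
        {"role": "user", "content": "The movie was"},
        {"role": "assistant", "content": " fine, I suppose."},
    ],
}

def roles(messages):
    """Return the role sequence of a message list."""
    return [m["role"] for m in messages]

# Both preference sides share the same conversational structure.
assert roles(record["chosen"]) == roles(record["rejected"]) == ["user", "assistant"]
```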
|
venetis/VMMRdb_make_model_test | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': acura_cl
'1': acura_integra
'2': acura_legend
'3': acura_mdx
'4': acura_rdx
'5': acura_rl
'6': acura_rsx
'7': acura_tl
'8': acura_tsx
'9': audi_a3
'10': audi_a4
'11': audi_a6
'12': audi_a8
'13': audi_s4
'14': audi_tt
'15': bmw_323i
'16': bmw_325i
'17': bmw_328i
'18': bmw_330ci
'19': bmw_330i
'20': bmw_335i
'21': bmw_525i
'22': bmw_528i
'23': bmw_530i
'24': bmw_535i
'25': bmw_540i
'26': bmw_545i
'27': bmw_550i
'28': bmw_740i
'29': bmw_745i
'30': bmw_750i
'31': bmw_m3
'32': bmw_m5
'33': bmw_x3
'34': bmw_x5
'35': bmw_z3
'36': bmw_z4
'37': buick_century
'38': buick_enclave
'39': buick_lacrosse
'40': buick_lesabre
'41': buick_lucerne
'42': buick_parkavenue
'43': buick_regal
'44': buick_rendezvous
'45': buick_riviera
'46': cadillac_catera
'47': cadillac_cts
'48': cadillac_deville
'49': cadillac_eldorado
'50': cadillac_escalade
'51': cadillac_seville
'52': cadillac_srx
'53': cadillac_sts
'54': chevrolet_astro
'55': chevrolet_avalanche
'56': chevrolet_aveo
'57': chevrolet_bel air
'58': chevrolet_blazer
'59': chevrolet_c-k1500
'60': chevrolet_c10
'61': chevrolet_camaro
'62': chevrolet_caprice
'63': chevrolet_cavalier
'64': chevrolet_chevelle
'65': chevrolet_cobalt
'66': chevrolet_colorado
'67': chevrolet_corvette
'68': chevrolet_cruze
'69': chevrolet_el camino
'70': chevrolet_equinox
'71': chevrolet_express
'72': chevrolet_hhr
'73': chevrolet_impala
'74': chevrolet_lumina
'75': chevrolet_malibu
'76': chevrolet_montecarlo
'77': chevrolet_nova
'78': chevrolet_prizm
'79': chevrolet_s10
'80': chevrolet_silverado
'81': chevrolet_sonic
'82': chevrolet_suburban
'83': chevrolet_tahoe
'84': chevrolet_tracker
'85': chevrolet_trailblazer
'86': chevrolet_traverse
'87': chevrolet_uplander
'88': chevrolet_venture
'89': chrysler_200
'90': chrysler_300
'91': chrysler_concorde
'92': chrysler_crossfire
'93': chrysler_pacifica
'94': chrysler_pt cruiser
'95': chrysler_sebring
'96': chrysler_town&country
'97': chrysler_voyager
'98': dodge_avenger
'99': dodge_caliber
'100': dodge_challenger
'101': dodge_charger
'102': dodge_dakota
'103': dodge_dart
'104': dodge_durango
'105': dodge_grand caravan
'106': dodge_intrepid
'107': dodge_journey
'108': dodge_magnum
'109': dodge_neon
'110': dodge_nitro
'111': dodge_ram
'112': dodge_stratus
'113': fiat_five hundred
'114': ford_bronco
'115': ford_contour
'116': ford_crown victoria
'117': ford_e150
'118': ford_e250
'119': ford_e350
'120': ford_edge
'121': ford_escape
'122': ford_escort
'123': ford_excursion
'124': ford_expedition
'125': ford_explorer
'126': ford_f100
'127': ford_f150
'128': ford_f250
'129': ford_f350
'130': ford_f450
'131': ford_fiesta
'132': ford_five hundred
'133': ford_focus
'134': ford_freestar
'135': ford_fusion
'136': ford_mustang
'137': ford_ranger
'138': ford_taurus
'139': ford_thunderbird
'140': ford_windstar
'141': gmc_acadia
'142': gmc_canyon
'143': gmc_envoy
'144': gmc_jimmy
'145': gmc_sierra
'146': gmc_sonoma
'147': gmc_suburban
'148': gmc_terrain
'149': gmc_yukon
'150': honda_accord
'151': honda_civic
'152': honda_cr-v
'153': honda_delsol
'154': honda_element
'155': honda_fit
'156': honda_odyssey
'157': honda_passport
'158': honda_pilot
'159': honda_prelude
'160': honda_ridgeline
'161': honda_s2000
'162': hummer_h2
'163': hummer_h3
'164': hyundai_accent
'165': hyundai_azera
'166': hyundai_elantra
'167': hyundai_genesis
'168': hyundai_santafe
'169': hyundai_sonata
'170': hyundai_tiburon
'171': hyundai_tucson
'172': infiniti_fx35
'173': infiniti_g35
'174': infiniti_g37
'175': infiniti_i30
'176': infiniti_i35
'177': infiniti_m35
'178': infiniti_q45
'179': infiniti_qx4
'180': infiniti_qx56
'181': isuzu_rodeo
'182': isuzu_trooper
'183': jaguar_s-type
'184': jaguar_x-type
'185': jaguar_xj
'186': jeep_cherokee
'187': jeep_cj5
'188': jeep_cj7
'189': jeep_commander
'190': jeep_compass
'191': jeep_grand
'192': jeep_liberty
'193': jeep_patriot
'194': jeep_wrangler
'195': kia_amanti
'196': kia_forte
'197': kia_optima
'198': kia_rio
'199': kia_sedona
'200': kia_sephia
'201': kia_sorento
'202': kia_soul
'203': kia_spectra
'204': kia_sportage
'205': landrover_discovery
'206': landrover_rangerover
'207': lexus_es300
'208': lexus_es330
'209': lexus_es350
'210': lexus_gs300
'211': lexus_gx470
'212': lexus_is250
'213': lexus_is300
'214': lexus_is350
'215': lexus_ls400
'216': lexus_ls430
'217': lexus_rx300
'218': lexus_rx330
'219': lexus_sc430
'220': lincoln_aviator
'221': lincoln_continental
'222': lincoln_ls
'223': lincoln_mark
'224': lincoln_mkx
'225': lincoln_mkz
'226': lincoln_navigator
'227': lincoln_towncar
'228': mazda_3
'229': mazda_5
'230': mazda_6
'231': mazda_626
'232': mazda_millenia
'233': mazda_mpv
'234': mazda_mx5
'235': mazda_protege
'236': mazda_rx7
'237': mazda_rx8
'238': mazda_tribute
'239': mercedes benz_c230
'240': mercedes benz_c240
'241': mercedes benz_c280
'242': mercedes benz_c300
'243': mercedes benz_c320
'244': mercedes benz_clk320
'245': mercedes benz_e320
'246': mercedes benz_e350
'247': mercedes benz_e500
'248': mercedes benz_ml320
'249': mercedes benz_ml350
'250': mercedes benz_ml500
'251': mercedes benz_s430
'252': mercedes benz_s500
'253': mercedes benz_s550
'254': mercedes benz_sl500
'255': mercury_cougar
'256': mercury_grandmarquis
'257': mercury_mariner
'258': mercury_milan
'259': mercury_mountaineer
'260': mercury_sable
'261': mercury_villager
'262': mini_cooper
'263': mitsubishi_3000gt
'264': mitsubishi_eclipse
'265': mitsubishi_endeavor
'266': mitsubishi_galant
'267': mitsubishi_lancer
'268': mitsubishi_mirage
'269': mitsubishi_montero
'270': mitsubishi_outlander
'271': nissan_240sx
'272': nissan_300zx
'273': nissan_350z
'274': nissan_altima
'275': nissan_armada
'276': nissan_frontier
'277': nissan_maxima
'278': nissan_murano
'279': nissan_pathfinder
'280': nissan_quest
'281': nissan_rogue
'282': nissan_sentra
'283': nissan_titan
'284': nissan_versa
'285': nissan_xterra
'286': oldsmobile_alero
'287': oldsmobile_aurora
'288': oldsmobile_bravada
'289': oldsmobile_cutlass
'290': oldsmobile_intrigue
'291': oldsmobile_silhouette
'292': plymouth_neon
'293': plymouth_voyager
'294': pontiac_bonneville
'295': pontiac_firebird
'296': pontiac_g5
'297': pontiac_g6
'298': pontiac_grandam
'299': pontiac_grandprix
'300': pontiac_gto
'301': pontiac_montana
'302': pontiac_sunfire
'303': pontiac_torrent
'304': pontiac_transam
'305': pontiac_vibe
'306': porsche_911
'307': porsche_boxster
'308': porsche_cayenne
'309': ram_1500
'310': saab_9-3
'311': saab_9-5
'312': saturn_aura
'313': saturn_ion
'314': saturn_l200
'315': saturn_l300
'316': saturn_sl1
'317': saturn_sl2
'318': saturn_vue
'319': scion_tc
'320': scion_xa
'321': scion_xb
'322': scion_xd
'323': smart_fortwo
'324': subaru_forester
'325': subaru_impreza
'326': subaru_legacy
'327': subaru_outback
'328': subaru_wrx
'329': suzuki_forenza
'330': suzuki_sx4
'331': suzuki_xl7
'332': toyota_4runner
'333': toyota_avalon
'334': toyota_camry
'335': toyota_celica
'336': toyota_corolla
'337': toyota_echo
'338': toyota_fjcruiser
'339': toyota_highlander
'340': toyota_landcruiser
'341': toyota_matrix
'342': toyota_mr2
'343': toyota_pickup
'344': toyota_prius
'345': toyota_rav4
'346': toyota_sequoia
'347': toyota_sienna
'348': toyota_solara
'349': toyota_supra
'350': toyota_t100
'351': toyota_tacoma
'352': toyota_tercel
'353': toyota_tundra
'354': toyota_yaris
'355': volkswagen_beetle
'356': volkswagen_bug
'357': volkswagen_cc
'358': volkswagen_eos
'359': volkswagen_golf
'360': volkswagen_gti
'361': volkswagen_jetta
'362': volkswagen_newbeetle
'363': volkswagen_passat
'364': volkswagen_rabbit
'365': volkswagen_touareg
'366': volvo_850
'367': volvo_c70
'368': volvo_s40
'369': volvo_s60
'370': volvo_s70
'371': volvo_s80
'372': volvo_v70
'373': volvo_xc70
'374': volvo_xc90
splits:
- name: train
num_bytes: 498938159.51709396
num_examples: 26852
download_size: 498718383
dataset_size: 498938159.51709396
---
# Dataset Card for "VMMRdb_make_model_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hounst/whitn | ---
license: cc
---
|
mevol/protein_structure_NER_model_v1.4 | ---
license: mit
language:
- en
tags:
- biology
- protein structure
- token classification
configs:
- config_name: protein_structure_NER_model_v1.4
data_files:
- split: train
path: "annotation_IOB/train.tsv"
- split: dev
path: "annotation_IOB/dev.tsv"
- split: test
path: "annotation_IOB/test.tsv"
---
## Overview
This data was used to train model:
https://huggingface.co/mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.4
There are 19 different entity types in this dataset:
"chemical", "complex_assembly", "evidence", "experimental_method", "gene", "mutant",
"oligomeric_state", "protein", "protein_state", "protein_type", "ptm", "residue_name",
"residue_name_number","residue_number", "residue_range", "site", "species", "structure_element",
"taxonomy_domain"
The data prepared as IOB-formatted input was used during training, development
and testing. Additional data formats, such as JSON and XML as well as CSV files, are
also available and are described below.
Annotation was carried out with the free annotation tool TeamTat (https://www.teamtat.org/), and
documents were downloaded as BioC XML before converting them to IOB, annotation-only JSON and CSV formats.
The number of annotations and sentences in each file is given below:
| document ID | number of annotations in BioC XML | number of annotations in IOB/JSON/CSV | number of sentences |
| --- | --- | --- | --- |
| PMC4850273 | 1121 | 1121 | 204 |
| PMC4784909 | 865 | 865 | 204 |
| PMC4850288 | 716 | 708 | 146 |
| PMC4887326 | 933 | 933 | 152 |
| PMC4833862 | 1044 | 1044 | 192 |
| PMC4832331 | 739 | 718 | 134 |
| PMC4852598 | 1229 | 1218 | 250 |
| PMC4786784 | 1549 | 1549 | 232 |
| PMC4848090 | 987 | 985 | 191 |
| PMC4792962 | 1268 | 1268 | 256 |
| PMC4841544 | 1434 | 1433 | 273 |
| PMC4772114 | 825 | 825 | 166 |
| PMC4872110 | 1276 | 1276 | 253 |
| PMC4848761 | 887 | 883 | 252 |
| PMC4919469 | 1628 | 1616 | 336 |
| PMC4880283 | 771 | 771 | 166 |
| PMC4937829 | 625 | 625 | 181 |
| PMC4968113 | 1238 | 1238 | 292 |
| PMC4854314 | 481 | 471 | 139 |
| PMC4871749 | 383 | 383 | 76 |
| total | 19999 | 19930 | 4095 |
Documents and annotations are most easily viewed by opening
the BioC XML files in the free annotation tool TeamTat. More about the BioC
format can be found here: https://bioc.sourceforge.net/
## Raw BioC XML files
These are the raw, un-annotated XML files for the publications in the dataset in BioC format.
The files are found in the directory: "raw_BioC_XML".
There is one file for each document and they follow standard naming
"unique PubMedCentral ID"_raw.xml.
## Annotations in IOB format
The IOB-formatted files can be found in the directory: "annotation_IOB"
The four files are as follows:
* all.tsv --> all sentences and annotations used to create model
"mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.4"; 4095 sentences
* train.tsv --> training subset of the data; 2866 sentences
* dev.tsv --> development subset of the data; 614 sentences
* test.tsv --> testing subset of the data; 615 sentences
The total number of annotations is: 19930
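A minimal sketch of reading such IOB data, assuming a two-column `token<TAB>tag` layout with blank lines separating sentences (the actual column layout of the `.tsv` files may differ, and the sample content below is invented):

```python
import io

# Hypothetical two-column IOB content (token<TAB>tag); the real train.tsv
# layout may differ -- this is an assumed format for illustration.
sample = io.StringIO(
    "Coenzyme\tB-chemical\n"
    "A\tI-chemical\n"
    "recycling\tO\n"
    "\n"
    "Human\tB-species\n"
)

def read_iob(handle):
    """Yield sentences as lists of (token, tag) pairs from an IOB stream."""
    sentence = []
    for line in handle:
        line = line.rstrip("\n")
        if not line:            # blank line ends a sentence
            if sentence:
                yield sentence
                sentence = []
            continue
        token, tag = line.split("\t")
        sentence.append((token, tag))
    if sentence:
        yield sentence

sentences = list(read_iob(sample))
# Count entity mentions: each B- tag starts one annotation span.
n_entities = sum(tag.startswith("B-") for s in sentences for _, tag in s)
print(len(sentences), n_entities)  # 2 sentences, 2 entity spans
```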
## Annotations in BioC JSON
The BioC-formatted JSON files of the publications have been downloaded from the annotation
tool TeamTat. The files are found in the directory: "annotated_BioC_JSON"
There is one file for each document and they follow standard naming
"unique PubMedCentral ID"_ann.json
Each document JSON contains the following relevant keys:
* "sourceid" --> giving the numerical part of the unique PubMedCentral ID
* "text" --> containing the complete raw text of the publication as a string
* "denotations" --> containing a list of all the annotations for the text
Each annotation is a dictionary with the following keys:
* "span" --> gives the start and end of the annotation span defined by sub keys:
* "begin" --> character start position of annotation
* "end" --> character end position of annotation
* "obj" --> a string containing a number of terms that can be separated by ","; the order
of the terms gives the following: entity type, reference to ontology, annotator,
time stamp
* "id" --> unique annotation ID
Here an example:
```json
[{"sourceid":"4784909",
"sourcedb":"",
"project":"",
"target":"",
"text":"",
"denotations":[{"span":{"begin":24,
"end":34},
"obj":"chemical,CHEBI:,melaniev@ebi.ac.uk,2023-03-21T15:19:42Z",
"id":"4500"},
{"span":{"begin":50,
"end":59},
"obj":"taxonomy_domain,DUMMY:,melaniev@ebi.ac.uk,2023-03-21T15:15:03Z",
"id":"1281"}]
}
]
```
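To recover the annotated text from such a document dictionary, the span offsets can be applied to the `text` field; here is a minimal sketch using an invented `text` value (the example above leaves it empty):

```python
# Minimal sketch: pull entity type and span text out of one BioC JSON
# document dict (structure as in the example above; the sample text here
# is invented for illustration).
doc = {
    "sourceid": "4784909",
    "text": "The Structural Basis of Coenzyme A Recycling",
    "denotations": [
        {"span": {"begin": 24, "end": 34},
         "obj": "chemical,CHEBI:,melaniev@ebi.ac.uk,2023-03-21T15:19:42Z",
         "id": "4500"},
    ],
}

annotations = []
for d in doc["denotations"]:
    begin, end = d["span"]["begin"], d["span"]["end"]
    entity_type = d["obj"].split(",")[0]  # first comma-separated term
    annotations.append((begin, end, doc["text"][begin:end], entity_type))

print(annotations)  # [(24, 34, 'Coenzyme A', 'chemical')]
```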
## Annotations in BioC XML
The BioC-formatted XML files of the publications have been downloaded from the annotation
tool TeamTat. The files are found in the directory: "annotated_BioC_XML"
There is one file for each document and they follow standard naming
"unique PubMedCentral ID"_ann.xml
The key XML tags for visualising the annotations in TeamTat, as well as for extracting
them to create the training data, are "passage" and "offset". The "passage" tag encloses a
text passage or paragraph to which the annotations are linked. "Offset" gives the passage/
paragraph offset and allows one to determine the character start and end positions of the
annotations. The "text" tag encloses the raw text of the passage.
Each annotation in the XML file is tagged as below:
* "annotation id=" --> giving the unique ID of the annotation
* "infon key="type"" --> giving the entity type of the annotation
* "infon key="identifier"" --> giving a reference to an ontology for the annotation
* "infon key="annotator"" --> giving the annotator
* "infon key="updated_at"" --> providing a time stamp for annotation creation/update
* "location" --> start and end character positions for the annotated text span
* "offset" --> start character position as defined by offset value
* "length" --> length of the annotation span; sum of "offset" and "length" creates
the end character position
Here is a basic example of what the BioC XML looks like. Additional tags for document
management are not given. Please refer to the documentation to find out more.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE collection SYSTEM "BioC.dtd">
<collection>
<source>PMC</source>
<date>20140719</date>
<key>pmc.key</key>
<document>
<id>4784909</id>
<passage>
<offset>0</offset>
<text>The Structural Basis of Coenzyme A Recycling in a Bacterial Organelle</text>
<annotation id="4500">
<infon key="type">chemical</infon>
<infon key="identifier">CHEBI:</infon>
<infon key="annotator">melaniev@ebi.ac.uk</infon>
<infon key="updated_at">2023-03-21T15:19:42Z</infon>
<location offset="24" length="10"/>
<text>Coenzyme A</text>
</annotation>
</passage>
</document>
</collection>
```
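The same span arithmetic works on the XML form; below is a minimal sketch with `xml.etree.ElementTree`, using a trimmed version of the fragment above (document-management tags omitted):

```python
import xml.etree.ElementTree as ET

# Parse a small BioC XML fragment like the example above and recover each
# annotation's entity type, span, and covered text (real files contain
# many passages; this is a trimmed illustration).
xml_doc = """<collection><document><id>4784909</id><passage>
<offset>0</offset>
<text>The Structural Basis of Coenzyme A Recycling in a Bacterial Organelle</text>
<annotation id="4500">
  <infon key="type">chemical</infon>
  <location offset="24" length="10"/>
  <text>Coenzyme A</text>
</annotation>
</passage></document></collection>"""

root = ET.fromstring(xml_doc)
spans = []
for ann in root.iter("annotation"):
    entity_type = ann.find('infon[@key="type"]').text
    loc = ann.find("location")
    start = int(loc.get("offset"))
    end = start + int(loc.get("length"))  # offset + length gives the end
    spans.append((entity_type, start, end, ann.find("text").text))

print(spans)  # [('chemical', 24, 34, 'Coenzyme A')]
```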
## Annotations in CSV
The annotations and the relevant sentences they were found in have also been made
available as tab-separated CSV files, one for each publication in the dataset. The files can
be found in the directory "annotation_CSV". Each file is named as "unique PubMedCentral ID".csv.
The column labels in the CSV files are as follows:
* "anno_start" --> character start position of the annotation
* "anno_end" --> character end position of the annotation
* "anno_text" --> text covered by the annotation
* "entity_type" --> entity type of the annotation
* "sentence" --> sentence text in which the annotation was found
* "section" --> publication section in which the annotation was found
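A minimal sketch of reading one of these files with the standard `csv` module. The row below is invented, and the span check assumes `anno_start`/`anno_end` are relative to the sentence (the card does not state whether offsets are sentence- or document-relative):

```python
import csv
import io

# Hypothetical tab-separated content following the column layout above
# (the values are invented for illustration).
sample = io.StringIO(
    "anno_start\tanno_end\tanno_text\tentity_type\tsentence\tsection\n"
    "24\t34\tCoenzyme A\tchemical\t"
    "The Structural Basis of Coenzyme A Recycling\tTITLE\n"
)

rows = list(csv.DictReader(sample, delimiter="\t"))
# Sanity-check that the span really covers the annotated text
# (assuming sentence-relative offsets for this invented row).
for row in rows:
    start, end = int(row["anno_start"]), int(row["anno_end"])
    assert row["sentence"][start:end] == row["anno_text"]
```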
## Annotations in JSON
A combined JSON file was created containing only the relevant sentences and associated
annotations for each publication in the dataset. The file can be found in directory
"annotation_JSON" under the name "annotations.json".
The following keys are used:
* "PMC4850273" --> unique PubMedCentral of the publication
* "annotations" --> list of dictionaries for the relevant, annotated sentences of the
document; each dictionary has the following sub keys
* "sid" --> unique sentence ID
* "sent" --> sentence text as string
* "section" --> publication section the sentence is in
* "ner" --> nested list of annotations; each sublist contains the following items:
start character position, end character position, annotation text,
entity type
Here is an example of a sentence and its annotations:
```json
{"PMC4850273": {"annotations":
  [{"sid": 0,
    "sent": "Molecular Dissection of Xyloglucan Recognition in a Prominent Human Gut Symbiont",
    "section": "TITLE",
    "ner": [
      [24, 34, "Xyloglucan", "chemical"],
      [62, 67, "Human", "species"]
    ]
  }]
}}
```
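A sketch of how this structure can be traversed in Python. The JSON is inlined here for illustration; a real run would load annotation_JSON/annotations.json from disk:

```python
import json

# Inlined sample mirroring the structure of annotations.json
data = json.loads("""
{"PMC4850273": {"annotations": [
  {"sid": 0,
   "sent": "Molecular Dissection of Xyloglucan Recognition in a Prominent Human Gut Symbiont",
   "section": "TITLE",
   "ner": [[24, 34, "Xyloglucan", "chemical"], [62, 67, "Human", "species"]]}
]}}
""")

for pmcid, doc in data.items():
    for sentence in doc["annotations"]:
        # each "ner" sublist is [start, end, annotation text, entity type]
        for start, end, anno_text, entity_type in sentence["ner"]:
            assert sentence["sent"][start:end] == anno_text
```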
|
Kquant03/1-line | ---
license: apache-2.0
---
|
ura-hcmut/vietnews-dpo | ---
license: mit
language:
- vi
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: test
path: vietnews-dpo.json
--- |
CyberHarem/torricelli_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of torricelli/トリチェリ/托里拆利 (Azur Lane)
This is the dataset of torricelli/トリチェリ/托里拆利 (Azur Lane), containing 39 images and their tags.
The core tags of this character are `red_eyes, long_hair, green_hair, antenna_hair, hair_between_eyes, very_long_hair, dark_green_hair, bangs, goggles_on_head, breasts, diving_mask_on_head`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 39 | 55.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torricelli_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 39 | 30.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torricelli_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 95 | 64.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torricelli_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 39 | 48.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torricelli_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 95 | 93.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torricelli_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/torricelli_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfit groupings may be mined from them.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, @_@, looking_at_viewer, wetsuit, blush, one-piece_swimsuit, solo, highleg, thick_thighs, braid, diving_mask, open_mouth, simple_background, smile, white_background, leg_tattoo, thigh_strap, ringed_eyes, sitting, skindentation |
| 1 | 10 |  |  |  |  |  | 1girl, black_bikini, hat_flower, official_alternate_costume, straw_hat, sun_hat, @_@, blush, open_mouth, smile, solo, leg_tattoo, bare_shoulders, outdoors, red_flower, see-through, sky, turtle, barefoot, beach, day, feet, heart, sitting, soles, toes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | @_@ | looking_at_viewer | wetsuit | blush | one-piece_swimsuit | solo | highleg | thick_thighs | braid | diving_mask | open_mouth | simple_background | smile | white_background | leg_tattoo | thigh_strap | ringed_eyes | sitting | skindentation | black_bikini | hat_flower | official_alternate_costume | straw_hat | sun_hat | bare_shoulders | outdoors | red_flower | see-through | sky | turtle | barefoot | beach | day | feet | heart | soles | toes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------|:--------------------|:----------|:--------|:---------------------|:-------|:----------|:---------------|:--------|:--------------|:-------------|:--------------------|:--------|:-------------------|:-------------|:--------------|:--------------|:----------|:----------------|:---------------|:-------------|:-----------------------------|:------------|:----------|:-----------------|:-----------|:-------------|:--------------|:------|:---------|:-----------|:--------|:------|:-------|:--------|:--------|:-------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | | | X | | X | | | | | X | | X | | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
KnutJaegersberg/wikipedia_categories_labels | ---
license: mit
---
|
LenguajeNaturalAI/SpaLawEx | ---
license: cc-by-nc-sa-4.0
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 98147
num_examples: 119
download_size: 48982
dataset_size: 98147
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
- text2text-generation
- text-generation
language:
- es
tags:
- legal
pretty_name: SpaLawEx
size_categories:
- n<1K
---
## Introduction
This dataset was extracted from the Spanish Bar Association's bar-admission exams from the years [2022](https://www.mjusticia.gob.es/es/Ciudadano/EmpleoPublico/Documents/PLANTILLA_DEFINITIVA_CASTELLANO_2022_1.pdf) and [2023](https://www.mjusticia.gob.es/es/Ciudadano/EmpleoPublico/Documents/PLANTILLA%20DEFINITIVA%202023.2%20(CASTELLANO).pdf). It consists of multiple-choice questions with 4 options: a, b, c and d.
## Usage guide
To work with the corpus and evaluate LLMs, the idea is to use the following template:
```python
prompt_template="""Como experto en derecho español y el sistema legal y jurídico de España, debes hacer lo siguiente.
A partir de la pregunta que se plantea a continuación y las opciones que se te presentan, tu tarea consiste en responder únicamente con la letra que corresponde a la respuesta correcta: A, B, C o D. Sólo responde con la letra.
Pregunta: {pregunta}
Opciones: {opciones}
"""
# how to use it with an LLM (the tokenizer comes from the model under evaluation):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("model-id")  # placeholder: replace with the model to evaluate
system_prompt = "Eres un abogado español experto en las leyes de España y su sistema legal y jurídico."
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt_template.format(pregunta=pregunta, opciones=opciones)}
]
mssg = tokenizer.apply_chat_template(messages, tokenize=False)
```
## License
This dataset is distributed under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
 |
heziyevv/small_wiki_news_books | ---
license: mit
---
|
tushifire/Arxiv_Paper_embeddings | ---
license: mit
---
|
tellarin-ai/ntx_llm_inst_chinese | ---
license: cc-by-sa-4.0
language:
- zh
task_categories:
- token-classification
---
# Dataset Card for NTX v1 in the Aya format - Chinese subset
This dataset is a format conversion for the Chinese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@misc{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Nexdata/Chinese_Mandarin_Average_Tone_Speech_Synthesis_Corpus-Customer_Service | ---
---
# Dataset Card for Nexdata/Chinese_Mandarin_Average_Tone_Speech_Synthesis_Corpus-Customer_Service
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1100?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
150 People - Chinese Mandarin Average Tone Speech Synthesis Corpus-Customer Service. It is recorded by native Chinese speakers reading customer-service text, with balanced syllables, phonemes and tones. Professional phoneticians participated in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1100?source=Huggingface
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/b0cca9c0 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1335
dataset_size: 182
---
# Dataset Card for "b0cca9c0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ctoraman/large-scale-hate-speech | ---
license: cc
task_categories:
- text-classification
language:
- en
- tr
tags:
- hate-speech
- hatespeech
- hate-speech-detection
- hatespeechdetection
pretty_name: h
size_categories:
- 100K<n<1M
---
This repository contains the dataset used in the LREC 2022 paper "Large-Scale Hate Speech Detection with Cross-Domain Transfer". The study mainly focuses on hate speech detection in Turkish and English. In addition, the success of domain transfer between hate domains is examined.
There are two dataset versions.
Dataset v1: The original dataset that includes 100,000 tweets per English and Turkish, published in LREC 2022. The annotations with more than 60% agreement are included.
Dataset v2: A more reliable dataset version that includes 68,597 tweets for English and 60,310 for Turkish. The annotations with more than 80% agreement are included.
For more details: https://github.com/avaapm/hatespeech/ |
Seongill/Trivia_5_small_missing_adv_top6 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: has_answer
dtype: bool
- name: similar_sub
dtype: string
- name: ctxs
list:
- name: answer_sent
sequence: string
- name: hasanswer
dtype: bool
- name: id
dtype: string
- name: is_adv
dtype: bool
- name: new_answer_sent
dtype: string
- name: original_text
dtype: string
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: status
dtype: string
splits:
- name: train
num_bytes: 17279426
num_examples: 3771
download_size: 9619951
dataset_size: 17279426
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pszemraj/dolly_hhrlhf-text2text | ---
license: cc-by-sa-3.0
task_categories:
- text2text-generation
language:
- en
tags:
- instruct
size_categories:
- 10K<n<100K
source_datasets: mosaicml/dolly_hhrlhf
---
# dolly_hhrlhf-text2text
This is `mosaicml/dolly_hhrlhf` with the following changes:
- clean up/adapt `prompt` column for the `text2text-generation` task (no need for a special template)
- split the original `train` set into a 95% train and an explicit validation set (5%)
- fixed extra spaces before punctuation (as this is not a French dataset)
details on extra spaces:
```
Original sentence 1: How can I be healthy ?
Fixed sentence 1: How can I be healthy?
``` |
Codec-SUPERB/esc50_synth | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: id
dtype: string
splits:
- name: original
num_bytes: 960127258.0
num_examples: 2000
- name: academicodec_hifi_16k_320d
num_bytes: 320129480.0
num_examples: 2000
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 320129480.0
num_examples: 2000
- name: academicodec_hifi_24k_320d
num_bytes: 480129480.0
num_examples: 2000
- name: audiodec_24k_320d
num_bytes: 480129480.0
num_examples: 2000
- name: dac_16k
num_bytes: 320129480.0
num_examples: 2000
- name: dac_24k
num_bytes: 480129480.0
num_examples: 2000
- name: dac_44k
num_bytes: 882129480.0
num_examples: 2000
- name: encodec_24k_12bps
num_bytes: 480129480.0
num_examples: 2000
- name: encodec_24k_1_5bps
num_bytes: 480129480.0
num_examples: 2000
- name: encodec_24k_24bps
num_bytes: 480129480.0
num_examples: 2000
- name: encodec_24k_3bps
num_bytes: 480129480.0
num_examples: 2000
- name: encodec_24k_6bps
num_bytes: 480129480.0
num_examples: 2000
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 320129480.0
num_examples: 2000
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 320129480.0
num_examples: 2000
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 320129480.0
num_examples: 2000
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 320129480.0
num_examples: 2000
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 320129480.0
num_examples: 2000
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 320129480.0
num_examples: 2000
- name: speech_tokenizer_16k
num_bytes: 320129480.0
num_examples: 2000
download_size: 7976139767
dataset_size: 8884587378.0
---
# Dataset Card for "esc50_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-wmt16-de-en-bfa340-42157145094 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wmt16
eval_info:
task: translation
model: Lvxue/finetuned-mt5-small-10epoch
metrics: ['accuracy']
dataset_name: wmt16
dataset_config: de-en
dataset_split: test
col_mapping:
source: translation.en
target: translation.de
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: Lvxue/finetuned-mt5-small-10epoch
* Dataset: wmt16
* Config: de-en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@DarkSourceOfCode](https://huggingface.co/DarkSourceOfCode) for evaluating this model. |
Leoku/drug | ---
license: mit
---
|
nanakonoda/xnli_cm | ---
annotations_creators:
- expert-generated
language:
- en
- de
- fr
language_creators:
- found
license: []
multilinguality:
- multilingual
pretty_name: XNLI Code-Mixed Corpus
size_categories:
- 1M<n<10M
source_datasets:
- extended|xnli
tags:
- mode classification
- aligned
- code-mixed
task_categories:
- text-classification
task_ids: []
dataset_info:
- config_name: de_ec
features:
- name: text
dtype: string
- name: label
dtype: int64
# class_label:
# names:
# '0': spoken
# '1': written
splits:
- name: train
num_bytes: 576
num_examples: 2490
- name: test
num_bytes: 194139776
num_examples: 1610549
- config_name: de_ml
features:
- name: text
dtype: string
- name: label
dtype: int64
# class_label:
# names:
# '0': spoken
# '1': written
splits:
- name: train
num_bytes: 576
num_examples: 2490
- name: test
num_bytes: 87040
num_examples: 332326
- config_name: fr_ec
features:
- name: text
dtype: string
- name: label
dtype: int64
# class_label:
# names:
# '0': spoken
# '1': written
splits:
- name: train
num_bytes: 576
num_examples: 2490
- name: test
num_bytes: 564416
num_examples: 2562631
- config_name: fr_ml
features:
- name: text
dtype: string
- name: label
dtype: int64
# class_label:
# names:
# '0': spoken
# '1': written
splits:
- name: train
num_bytes: 576
num_examples: 2490
- name: test
num_bytes: 361472
num_examples: 1259159
download_size: 1376728
dataset_size: 1376704
---
# Dataset Card for XNLI Code-Mixed Corpus
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
- German-English code-mixed by Equivalence Constraint Theory
- German-English code-mixed by Matrix Language Theory
- French-English code-mixed by Equivalence Constraint Theory
- French-English code-mixed by Matrix Language Theory
## Dataset Structure
### Data Instances
{
'text': "And he said , Mama , I 'm home",
'label': 0
}
### Data Fields
- text: sentence
- label: binary label of text (0: spoken 1: written)
### Data Splits
- de-ec
- train (English, German, French monolingual):
- test (German-English code-mixed by Equivalence Constraint Theory):
- de-ml:
- train (English, German, French monolingual):
- test (German-English code-mixed by Matrix Language Theory):
- fr-ec
- train (English, German, French monolingual):
- test (French-English code-mixed by Equivalence Constraint Theory):
- fr-ml:
- train (English, German, French monolingual):
- test (French-English code-mixed by Matrix Language Theory):
### Other Statistics
#### Average Sentence Length
- German
- train:
- test:
- French
- train:
- test:
#### Label Split
- train:
- 0:
- 1:
- test:
- 0:
- 1:
## Dataset Creation
### Curation Rationale
Using the XNLI Parallel Corpus, we generated a code-mixed corpus using CodeMixed Text Generator.
The XNLI Parallel Corpus is available here:
https://huggingface.co/datasets/nanakonoda/xnli_parallel
It was created from the XNLI corpus.
More information is available in the datacard for the XNLI Parallel Corpus.
Here is the link and citation for the original CodeMixed Text Generator paper.
https://github.com/microsoft/CodeMixed-Text-Generator
```
@inproceedings{rizvi-etal-2021-gcm,
title = "{GCM}: A Toolkit for Generating Synthetic Code-mixed Text",
author = "Rizvi, Mohd Sanad Zaki and
Srinivasan, Anirudh and
Ganu, Tanuja and
Choudhury, Monojit and
Sitaram, Sunayana",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.24",
pages = "205--211",
abstract = "Code-mixing is common in multilingual communities around the world, and processing it is challenging due to the lack of labeled and unlabeled data. We describe a tool that can automatically generate code-mixed data given parallel data in two languages. We implement two linguistic theories of code-mixing, the Equivalence Constraint theory and the Matrix Language theory to generate all possible code-mixed sentences in the language-pair, followed by sampling of the generated data to generate natural code-mixed sentences. The toolkit provides three modes: a batch mode, an interactive library mode and a web-interface to address the needs of researchers, linguists and language experts. The toolkit can be used to generate unlabeled text data for pre-trained models, as well as visualize linguistic theories of code-mixing. We plan to release the toolkit as open source and extend it by adding more implementations of linguistic theories, visualization techniques and better sampling techniques. We expect that the release of this toolkit will help facilitate more research in code-mixing in diverse language pairs.",
}
```
### Source Data
XNLI Parallel Corpus
https://huggingface.co/datasets/nanakonoda/xnli_parallel
#### Original Source Data
XNLI Parallel Corpus was created using the XNLI Corpus.
https://github.com/facebookresearch/XNLI
Here is the citation for the original XNLI paper.
```
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
```
#### Initial Data Collection and Normalization
We removed all punctuation from the XNLI Parallel Corpus except apostrophes.
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
N/A
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
N/A
### Licensing Information
N/A
### Citation Information
### Contributions
N/A |
kaleemWaheed/twitter_dataset_1712997139 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 82689
num_examples: 208
download_size: 35847
dataset_size: 82689
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kalcho100/flippy_final2 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1047628246.0021477
num_examples: 761834
- name: test
num_bytes: 116404207.9978523
num_examples: 84649
download_size: 628911397
dataset_size: 1164032454.0
---
# Dataset Card for "flippy_final2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
style_change_detection | ---
paperswithcode_id: null
pretty_name: StyleChangeDetection
dataset_info:
- config_name: narrow
features:
- name: id
dtype: string
- name: text
dtype: string
- name: authors
dtype: int32
- name: structure
sequence: string
- name: site
dtype: string
- name: multi-author
dtype: bool
- name: changes
sequence: bool
splits:
- name: train
num_bytes: 40499150
num_examples: 3418
- name: validation
num_bytes: 20447137
num_examples: 1713
download_size: 0
dataset_size: 60946287
- config_name: wide
features:
- name: id
dtype: string
- name: text
dtype: string
- name: authors
dtype: int32
- name: structure
sequence: string
- name: site
dtype: string
- name: multi-author
dtype: bool
- name: changes
sequence: bool
splits:
- name: train
num_bytes: 97403392
num_examples: 8030
- name: validation
num_bytes: 48850089
num_examples: 4019
download_size: 0
dataset_size: 146253481
---
# Dataset Card for "style_change_detection"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://pan.webis.de/clef20/pan20-web/style-change-detection.html](https://pan.webis.de/clef20/pan20-web/style-change-detection.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 207.20 MB
- **Total amount of disk used:** 207.20 MB
### Dataset Summary
The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Detecting these positions is a crucial part of the authorship identification process, and for multi-author document analysis in general.
Access to the dataset needs to be requested from Zenodo.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### narrow
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 60.94 MB
- **Total amount of disk used:** 60.94 MB
An example of 'validation' looks as follows.
```
{
"authors": 2,
"changes": [false, false, true, false],
"id": "2",
"multi-author": true,
"site": "exampleSite",
"structure": ["A1", "A2"],
"text": "This is text from example problem 2.\n"
}
```
#### wide
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 146.26 MB
- **Total amount of disk used:** 146.26 MB
An example of 'train' looks as follows.
```
{
"authors": 2,
"changes": [false, false, true, false],
"id": "2",
"multi-author": true,
"site": "exampleSite",
"structure": ["A1", "A2"],
"text": "This is text from example problem 2.\n"
}
```
### Data Fields
The data fields are the same among all splits.
#### narrow
- `id`: a `string` feature.
- `text`: a `string` feature.
- `authors`: a `int32` feature.
- `structure`: a `list` of `string` features.
- `site`: a `string` feature.
- `multi-author`: a `bool` feature.
- `changes`: a `list` of `bool` features.
#### wide
- `id`: a `string` feature.
- `text`: a `string` feature.
- `authors`: a `int32` feature.
- `structure`: a `list` of `string` features.
- `site`: a `string` feature.
- `multi-author`: a `bool` feature.
- `changes`: a `list` of `bool` features.
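The `changes` field encodes one boolean per paragraph boundary, marking whether the author switches at that position. A minimal sketch of recovering the switch positions (the record below mirrors the example instance shown above; in practice it would come from `load_dataset`):

```python
# Recover author-switch positions from a style-change-detection record.
record = {
    "authors": 2,
    "changes": [False, False, True, False],
    "multi-author": True,
}

# Indices of paragraph boundaries where the author switches.
switch_positions = [i for i, changed in enumerate(record["changes"]) if changed]

# A document is multi-author exactly when at least one switch occurred.
assert record["multi-author"] == (len(switch_positions) > 0)
print(switch_positions)  # [2]
```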
### Data Splits
| name |train|validation|
|------|----:|---------:|
|narrow| 3418| 1713|
|wide | 8030| 4019|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{bevendorff2020shared,
title={Shared Tasks on Authorship Analysis at PAN 2020},
  author={Bevendorff, Janek and Ghanem, Bilal and Giachanou, Anastasia and Kestemont, Mike and Manjavacas, Enrique and Potthast, Martin and Rangel, Francisco and Rosso, Paolo and Specht, G{\"u}nther and Stamatatos, Efstathios and others},
booktitle={European Conference on Information Retrieval},
pages={508--516},
year={2020},
organization={Springer}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
Divyanshu/IE_SemParse | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: IE-SemParse
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- parsing
---
# Dataset Card for "IE-SemParse"
## Table of Contents
- [Dataset Card for "IE-SemParse"](#dataset-card-for-ie-semparse)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset usage](#dataset-usage)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Human Verification Process](#human-verification-process)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** <https://github.com/divyanshuaggarwal/IE-SemParse>
- **Paper:** [Evaluating Inter-Bilingual Semantic Parsing for Indian Languages](https://arxiv.org/abs/2304.13005)
- **Point of Contact:** [Divyanshu Aggarwal](mailto:divyanshuggrwl@gmail.com)
### Dataset Summary
IE-SemParse is an Inter-Bilingual Semantic Parsing dataset for eleven major Indic languages:
Assamese (‘as’), Bengali (‘bn’), Gujarati (‘gu’),
Hindi (‘hi’), Kannada (‘kn’), Malayalam (‘ml’),
Marathi (‘mr’), Odia (‘or’), Punjabi (‘pa’),
Tamil (‘ta’), and Telugu (‘te’).
### Supported Tasks and Leaderboards
**Tasks:** Inter-Bilingual Semantic Parsing
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
...
<!-- Below is the dataset split given for `hi` dataset.
```python
DatasetDict({
train: Dataset({
features: ['utterance', 'logical form', 'intent'],
num_rows: 36000
})
test: Dataset({
features: ['utterance', 'logical form', 'intent'],
num_rows: 3000
})
validation: Dataset({
features: ['utterance', 'logical form', 'intent'],
num_rows: 1500
})
})
``` -->
## Dataset usage
Code snippet for using the dataset using datasets library.
```python
from datasets import load_dataset
dataset = load_dataset("Divyanshu/IE_SemParse")
```
## Dataset Creation
Machine translation of three English semantic parsing datasets into the eleven Indic languages listed above.
### Curation Rationale
[More information needed]
### Source Data
[mTOP dataset](https://aclanthology.org/2021.eacl-main.257/)
[multilingualTOP dataset](https://github.com/awslabs/multilingual-top)
[multi-ATIS++ dataset](https://paperswithcode.com/paper/end-to-end-slot-alignment-and-recognition-for)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
#### Human Verification Process
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
## Considerations for Using the Data
### Social Impact of Dataset
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
### Discussion of Biases
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
### Other Known Limitations
[Detailed in the paper](https://arxiv.org/abs/2304.13005)
### Dataset Curators
Divyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@misc{aggarwal2023evaluating,
title={Evaluating Inter-Bilingual Semantic Parsing for Indian Languages},
author={Divyanshu Aggarwal and Vivek Gupta and Anoop Kunchukuttan},
year={2023},
eprint={2304.13005},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- ### Contributions -->
|
liuyanchen1015/MULTI_VALUE_mnli_clause_final_really_but | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 8836
num_examples: 27
- name: dev_mismatched
num_bytes: 2768
num_examples: 8
- name: test_matched
num_bytes: 7777
num_examples: 28
- name: test_mismatched
num_bytes: 2454
num_examples: 16
- name: train
num_bytes: 244795
num_examples: 1018
download_size: 109528
dataset_size: 266630
---
# Dataset Card for "MULTI_VALUE_mnli_clause_final_really_but"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rombodawg/MegaCodeTraining | ---
license: other
---
_________________________________________________________________________________
VERSION 3 IS RELEASED DOWNLOAD HERE:
- https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV3_2.2m_Evol
_________________________________________________________________________________
This is an uncensored mega combined dataset using both razent/wizardlm-code-evol-32k and nickrosh/Evol-Instruct-Code-80k-v1.
In this version, many lines of instructions were removed as part of an uncensoring process.
The Rombo's format.rar file is provided so you can use the training data in the oobabooga text generation web UI. Simply unzip it and use it as a JSON file.
All links below:
https://huggingface.co/datasets/razent/wizardlm-code-evol-32k
(This repository was deleted, however you can find each individual data file from this repository
re-uploaded as their own individual repositories on my huggingface account)
https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/tree/main
Thank you to the contributors of the datasets. I do not own them; please give credit where credit is due.
distilled-one-sec-cv12-each-chunk-uniq/chunk_130 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1327135200.0
num_examples: 258600
download_size: 1359399489
dataset_size: 1327135200.0
---
# Dataset Card for "chunk_130"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davidreyblanco/distilabel-math-instructions-dpo | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
sequence: string
- name: generation_prompt
list:
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: rating
sequence: float64
- name: rationale
sequence: string
splits:
- name: train
num_bytes: 1723461
num_examples: 100
download_size: 606396
dataset_size: 1723461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_TheBloke__Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ | ---
pretty_name: Evaluation run of TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-07T10:50:58.801361](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ_public/blob/main/results_2023-11-07T10-50-58.801361.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.13485738255033558,\n\
\ \"em_stderr\": 0.003498008556560615,\n \"f1\": 0.2201814177852358,\n\
\ \"f1_stderr\": 0.003718008519979711,\n \"acc\": 0.3598741722131701,\n\
\ \"acc_stderr\": 0.006857552680201102\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.13485738255033558,\n \"em_stderr\": 0.003498008556560615,\n\
\ \"f1\": 0.2201814177852358,\n \"f1_stderr\": 0.003718008519979711\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \
\ \"acc_stderr\": 0.0010717793485492606\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7182320441988951,\n \"acc_stderr\": 0.012643326011852944\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_11_05T10_35_18.406812
path:
- '**/details_harness|drop|3_2023-11-05T10-35-18.406812.parquet'
- split: 2023_11_07T10_50_58.801361
path:
- '**/details_harness|drop|3_2023-11-07T10-50-58.801361.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-07T10-50-58.801361.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_05T10_35_18.406812
path:
- '**/details_harness|gsm8k|5_2023-11-05T10-35-18.406812.parquet'
- split: 2023_11_07T10_50_58.801361
path:
- '**/details_harness|gsm8k|5_2023-11-07T10-50-58.801361.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-07T10-50-58.801361.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_05T10_35_18.406812
path:
- '**/details_harness|winogrande|5_2023-11-05T10-35-18.406812.parquet'
- split: 2023_11_07T10_50_58.801361
path:
- '**/details_harness|winogrande|5_2023-11-07T10-50-58.801361.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-07T10-50-58.801361.parquet'
- config_name: results
data_files:
- split: 2023_11_05T10_35_18.406812
path:
- results_2023-11-05T10-35-18.406812.parquet
- split: 2023_11_07T10_50_58.801361
path:
- results_2023-11-07T10-50-58.801361.parquet
- split: latest
path:
- results_2023-11-07T10-50-58.801361.parquet
---
# Dataset Card for Evaluation run of TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-07T10:50:58.801361](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ_public/blob/main/results_2023-11-07T10-50-58.801361.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.13485738255033558,
"em_stderr": 0.003498008556560615,
"f1": 0.2201814177852358,
"f1_stderr": 0.003718008519979711,
"acc": 0.3598741722131701,
"acc_stderr": 0.006857552680201102
},
"harness|drop|3": {
"em": 0.13485738255033558,
"em_stderr": 0.003498008556560615,
"f1": 0.2201814177852358,
"f1_stderr": 0.003718008519979711
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.0010717793485492606
},
"harness|winogrande|5": {
"acc": 0.7182320441988951,
"acc_stderr": 0.012643326011852944
}
}
```
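As a sanity check, the aggregate `acc` under `"all"` appears to be the unweighted mean of the two per-task accuracies (a sketch; the exact aggregation used by the leaderboard is an assumption here):

```python
import math

# Per-task accuracies copied from the results JSON above.
gsm8k_acc = 0.001516300227445034
winogrande_acc = 0.7182320441988951

# Unweighted mean over the two accuracy-reporting tasks.
all_acc = (gsm8k_acc + winogrande_acc) / 2

# Matches the reported "all" accuracy of 0.3598741722131701.
assert math.isclose(all_acc, 0.3598741722131701)
```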
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ovior/twitter_dataset_1713120065 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2327564
num_examples: 7152
download_size: 1316078
dataset_size: 2327564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/3794d1ea | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1337
dataset_size: 186
---
# Dataset Card for "3794d1ea"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shrimantasatpati/testing | ---
license: afl-3.0
---
|
mc4 | ---
pretty_name: mC4
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- ht
- hu
- hy
- id
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
language_bcp47:
- bg-Latn
- el-Latn
- hi-Latn
- ja-Latn
- ru-Latn
- zh-Latn
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: mc4
viewer: false
---
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "mc4" is deprecated and will be deleted. Use "<a href="https://huggingface.co/datasets/allenai/c4">allenai/c4</a>" instead.</p>
</div>
# Dataset Card for mC4
## Table of Contents
- [Dataset Card for mC4](#dataset-card-for-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683
### Dataset Summary
A colossal, cleaned, multilingual version of Common Crawl's web crawl corpus, based on the Common Crawl dataset: "https://commoncrawl.org".
This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4
108 languages are available and are reported in the table below.
Note that the languages that end with "-Latn" are simply romanized variants, i.e. written using the Latin script.
| language code | language name |
|:----------------|:---------------------|
| af | Afrikaans |
| am | Amharic |
| ar | Arabic |
| az | Azerbaijani |
| be | Belarusian |
| bg | Bulgarian |
| bg-Latn | Bulgarian (Latin) |
| bn | Bangla |
| ca | Catalan |
| ceb | Cebuano |
| co | Corsican |
| cs | Czech |
| cy | Welsh |
| da | Danish |
| de | German |
| el | Greek |
| el-Latn | Greek (Latin) |
| en | English |
| eo | Esperanto |
| es | Spanish |
| et | Estonian |
| eu | Basque |
| fa | Persian |
| fi | Finnish |
| fil | Filipino |
| fr | French |
| fy | Western Frisian |
| ga | Irish |
| gd | Scottish Gaelic |
| gl | Galician |
| gu | Gujarati |
| ha | Hausa |
| haw | Hawaiian |
| hi | Hindi |
| hi-Latn | Hindi (Latin script) |
| hmn | Hmong, Mong |
| ht | Haitian |
| hu | Hungarian |
| hy | Armenian |
| id | Indonesian |
| ig | Igbo |
| is | Icelandic |
| it | Italian |
| iw | former Hebrew |
| ja | Japanese |
| ja-Latn | Japanese (Latin) |
| jv | Javanese |
| ka | Georgian |
| kk | Kazakh |
| km | Khmer |
| kn | Kannada |
| ko | Korean |
| ku | Kurdish |
| ky | Kyrgyz |
| la | Latin |
| lb | Luxembourgish |
| lo | Lao |
| lt | Lithuanian |
| lv | Latvian |
| mg | Malagasy |
| mi | Maori |
| mk | Macedonian |
| ml | Malayalam |
| mn | Mongolian |
| mr | Marathi |
| ms | Malay |
| mt | Maltese |
| my | Burmese |
| ne | Nepali |
| nl | Dutch |
| no | Norwegian |
| ny | Nyanja |
| pa | Punjabi |
| pl | Polish |
| ps | Pashto |
| pt | Portuguese |
| ro | Romanian |
| ru | Russian |
| ru-Latn | Russian (Latin) |
| sd | Sindhi |
| si | Sinhala |
| sk | Slovak |
| sl | Slovenian |
| sm | Samoan |
| sn | Shona |
| so | Somali |
| sq | Albanian |
| sr | Serbian |
| st | Southern Sotho |
| su | Sundanese |
| sv | Swedish |
| sw | Swahili |
| ta | Tamil |
| te | Telugu |
| tg | Tajik |
| th | Thai |
| tr | Turkish |
| uk | Ukrainian |
| und | Unknown language |
| ur | Urdu |
| uz | Uzbek |
| vi | Vietnamese |
| xh | Xhosa |
| yi | Yiddish |
| yo | Yoruba |
| zh | Chinese |
| zh-Latn | Chinese (Latin) |
| zu | Zulu |
You can load the mC4 subset of any language like this:
```python
from datasets import load_dataset
en_mc4 = load_dataset("mc4", "en")
```
And you can even specify a list of languages:
```python
from datasets import load_dataset
mc4_subset_with_five_languages = load_dataset("mc4", languages=["en", "fr", "es", "de", "zh"])
```
### Supported Tasks and Leaderboards
mC4 is mainly intended to pretrain language models and word representations.
### Languages
The dataset supports 108 languages.
## Dataset Structure
### Data Instances
An example from the `en` config is:
```
{'timestamp': '2018-06-24T01:32:39Z',
'text': 'Farm Resources in Plumas County\nShow Beginning Farmer Organizations & Professionals (304)\nThere are 304 resources serving Plumas County in the following categories:\nMap of Beginning Farmer Organizations & Professionals serving Plumas County\nVictoria Fisher - Office Manager - Loyalton, CA\nAmy Lynn Rasband - UCCE Plumas-Sierra Administrative Assistant II - Quincy , CA\nShow Farm Income Opportunities Organizations & Professionals (353)\nThere are 353 resources serving Plumas County in the following categories:\nFarm Ranch And Forest Retailers (18)\nMap of Farm Income Opportunities Organizations & Professionals serving Plumas County\nWarner Valley Wildlife Area - Plumas County\nShow Farm Resources Organizations & Professionals (297)\nThere are 297 resources serving Plumas County in the following categories:\nMap of Farm Resources Organizations & Professionals serving Plumas County\nThere are 57 resources serving Plumas County in the following categories:\nMap of Organic Certification Organizations & Professionals serving Plumas County',
'url': 'http://www.californialandcan.org/Plumas/Farm-Resources/'}
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
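The `timestamp` string uses ISO 8601 with a trailing `Z`; a minimal sketch of parsing it with the standard library (timestamp taken from the example instance above):

```python
from datetime import datetime, timezone

# Timestamp string as it appears in a mC4 record.
ts = "2018-06-24T01:32:39Z"

# datetime.fromisoformat does not accept a bare "Z" suffix before
# Python 3.11, so replace it with an explicit UTC offset first.
parsed = datetime.fromisoformat(ts.replace("Z", "+00:00"))

assert parsed.year == 2018 and parsed.tzinfo == timezone.utc
```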
### Data Splits
To build mC4, the authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. The resulting mC4 subsets for each language are reported in this table:
| config | train | validation |
|:---------|:--------|:-------------|
| af | ? | ? |
| am | ? | ? |
| ar | ? | ? |
| az | ? | ? |
| be | ? | ? |
| bg | ? | ? |
| bg-Latn | ? | ? |
| bn | ? | ? |
| ca | ? | ? |
| ceb | ? | ? |
| co | ? | ? |
| cs | ? | ? |
| cy | ? | ? |
| da | ? | ? |
| de | ? | ? |
| el | ? | ? |
| el-Latn | ? | ? |
| en | ? | ? |
| eo | ? | ? |
| es | ? | ? |
| et | ? | ? |
| eu | ? | ? |
| fa | ? | ? |
| fi | ? | ? |
| fil | ? | ? |
| fr | ? | ? |
| fy | ? | ? |
| ga | ? | ? |
| gd | ? | ? |
| gl | ? | ? |
| gu | ? | ? |
| ha | ? | ? |
| haw | ? | ? |
| hi | ? | ? |
| hi-Latn | ? | ? |
| hmn | ? | ? |
| ht | ? | ? |
| hu | ? | ? |
| hy | ? | ? |
| id | ? | ? |
| ig | ? | ? |
| is | ? | ? |
| it | ? | ? |
| iw | ? | ? |
| ja | ? | ? |
| ja-Latn | ? | ? |
| jv | ? | ? |
| ka | ? | ? |
| kk | ? | ? |
| km | ? | ? |
| kn | ? | ? |
| ko | ? | ? |
| ku | ? | ? |
| ky | ? | ? |
| la | ? | ? |
| lb | ? | ? |
| lo | ? | ? |
| lt | ? | ? |
| lv | ? | ? |
| mg | ? | ? |
| mi | ? | ? |
| mk | ? | ? |
| ml | ? | ? |
| mn | ? | ? |
| mr | ? | ? |
| ms | ? | ? |
| mt | ? | ? |
| my | ? | ? |
| ne | ? | ? |
| nl | ? | ? |
| no | ? | ? |
| ny | ? | ? |
| pa | ? | ? |
| pl | ? | ? |
| ps | ? | ? |
| pt | ? | ? |
| ro | ? | ? |
| ru | ? | ? |
| ru-Latn | ? | ? |
| sd | ? | ? |
| si | ? | ? |
| sk | ? | ? |
| sl | ? | ? |
| sm | ? | ? |
| sn | ? | ? |
| so | ? | ? |
| sq | ? | ? |
| sr | ? | ? |
| st | ? | ? |
| su | ? | ? |
| sv | ? | ? |
| sw | ? | ? |
| ta | ? | ? |
| te | ? | ? |
| tg | ? | ? |
| th | ? | ? |
| tr | ? | ? |
| uk | ? | ? |
| und | ? | ? |
| ur | ? | ? |
| uz | ? | ? |
| vi | ? | ? |
| xh | ? | ? |
| yi | ? | ? |
| yo | ? | ? |
| zh | ? | ? |
| zh-Latn | ? | ? |
| zu | ? | ? |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
DragosGorduza/dataset_QUERY_FAQ_MISTRAL_TRAIN | ---
dataset_info:
features:
- name: positive_id
dtype: string
- name: query_id
dtype: string
- name: positive_content
dtype: string
- name: query_content
dtype: string
- name: positive_name
dtype: string
- name: query_name
dtype: string
- name: query_type
dtype: string
- name: instruction
dtype: string
- name: output
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
- name: text
dtype: string
splits:
- name: train
num_bytes: 115217882.46257988
num_examples: 50582
download_size: 49534813
dataset_size: 115217882.46257988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
l3v1k/autotrain-data-demo-train-project | ---
language:
- en
---
# AutoTrain Dataset for project: demo-train-project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project demo-train-project.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "Users have the right to, if necessary, rectification of inaccurate personal data concerning that User, via a written request, using the contact details in paragraph 9 below. The User has the right to demand deletion or restriction of processing, and the right to object to processing based on legitimate interest under certain circumstances. The User has the right to revoke any consent to processing that has been given by the User to Controller. Using this right may however, mean that the User can not apply for a specific job or otherwise use the Service. The User has under certain circumstances a right to data portability, which means a right to get the personal data and transfer these to another controller as long as this does not negatively affect the rights and freedoms of others. User has the right to lodge a complaint to the supervisory authority regarding the processing of personal data relating to him or her, if the User considers that the processing of personal data infringes the legal framework of privacy law. 4.",
"question": "Can I edit or change the data that I have provided to you? ",
"answers.text": [
"Users have the right to, if necessary, rectification of inaccurate personal data concerning that User, via a written request, using the contact details"
],
"answers.answer_start": [
0
],
"feat_id": [
"310276"
],
"feat_title": [
""
]
},
{
"context": "The lawful basis is our legitimate interest in being able to administer our business and thereby provide Our Services (Article 6(1)(f) GDPR). Insurance companies. The purpose for these transfers is to handle insurance claims and administer Our insurance policies. The lawful basis is our legitimate interest in handling insurance claims and administrating Our insurance policies on an ongoing basis (Article 6(1)(f) GDPR). Courts and Counter Parties in legal matters. The purpose for these transfers is to defend, exercise and establish legal claims. The lawful basis is Our legitimate interest to defend, exercise and establish legal claims (Article 6(1)(f) GDPR). Regulators: to comply with all applicable laws, regulations and rules, and requests of law enforcement, regulatory and other governmental agencies;\nSolicitors and other professional services firms (including our auditors). Law enforcement agencies, including the Police. The purpose for these transfers is to assist law enforcement agencies and the Police in its investigations, to the extent we are obligated to do so.",
"question": "What is the lawful basis of the processing of my data? ",
"answers.text": [
"The lawful basis is our legitimate interest in being able to administer our business and thereby provide Our Services (Article 6(1)(f) GDPR)."
],
"answers.answer_start": [
0
],
"feat_id": [
"310267"
],
"feat_title": [
""
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat_id": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_title": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 456 |
| valid | 114 |
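
The span annotations shown in the sample above follow the SQuAD convention: each `answers.answer_start` value is a character offset into `context` where the matching `answers.text` string begins. A minimal, stdlib-only sketch of validating that invariant (the `check_spans` helper and the trimmed sample are illustrative, not part of the AutoTrain tooling):

```python
# Trimmed copy of the first data instance shown above.
sample = {
    "context": (
        "Users have the right to, if necessary, rectification of inaccurate "
        "personal data concerning that User, via a written request, using "
        "the contact details in paragraph 9 below."
    ),
    "question": "Can I edit or change the data that I have provided to you?",
    "answers.text": [
        "Users have the right to, if necessary, rectification of inaccurate "
        "personal data concerning that User, via a written request, using "
        "the contact details"
    ],
    "answers.answer_start": [0],
}


def check_spans(example: dict) -> bool:
    """Return True when every annotated span matches its context slice."""
    return all(
        example["context"][start : start + len(text)] == text
        for text, start in zip(
            example["answers.text"], example["answers.answer_start"]
        )
    )
```

Running such a check over both splits is a cheap sanity pass before fine-tuning an extractive QA model on this data.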
|
ASR-HypR/AISHELL1_withLM | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: ref
dtype: string
- name: hyps
sequence: string
- name: ctc_score
sequence: float64
- name: att_score
sequence: float64
- name: lm_score
sequence: float64
- name: utt_id
dtype: string
- name: score
sequence: float64
splits:
- name: train
num_bytes: 572977340
num_examples: 120098
- name: test
num_bytes: 34410820
num_examples: 7176
- name: dev
num_bytes: 67924134
num_examples: 14326
download_size: 355095107
dataset_size: 675312294
---
# Dataset Card for "AISHELL1_withLM"
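
Each example pairs an N-best list of hypotheses (`hyps`) with per-hypothesis `ctc_score`, `att_score`, and `lm_score` sequences, which is exactly the shape needed for N-best rescoring: pick the hypothesis whose weighted score sum is highest. A minimal sketch (the interpolation weights and the score values below are invented for illustration, not drawn from this dataset):

```python
def rescore(hyps, ctc, att, lm, w_ctc=0.3, w_att=0.7, w_lm=0.5):
    """Return (best hypothesis, combined score) under a weighted score sum."""
    combined = [
        w_ctc * c + w_att * a + w_lm * l
        for c, a, l in zip(ctc, att, lm)
    ]
    # Log scores: higher (less negative) is better.
    best = max(range(len(hyps)), key=combined.__getitem__)
    return hyps[best], combined[best]


# Toy 2-best list with invented log scores.
hyps = ["甚至出现交易几乎停滞的情况", "甚至出现交易几乎停止的情况"]
best_hyp, best_score = rescore(
    hyps,
    ctc=[-12.1, -13.4],
    att=[-10.2, -11.0],
    lm=[-20.5, -19.8],
)
```

In practice the interpolation weights would be tuned on the `dev` split, with the selected hypothesis scored against `ref` (e.g. by CER).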
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
greathero/evenmorex4newercontrailsvalidationdataset | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 493022743.945
num_examples: 16695
download_size: 477064136
dataset_size: 493022743.945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/VALUE_sst2_null_genetive | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: sentence
dtype: string
- name: label
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 18790
num_examples: 120
- name: test
num_bytes: 41683
num_examples: 265
- name: train
num_bytes: 545836
num_examples: 4562
download_size: 351547
dataset_size: 606309
---
# Dataset Card for "VALUE_sst2_null_genetive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
IAMJB/SMMILE | ---
license: mit
dataset_info:
features:
- name: q
dtype: string
- name: a
dtype: string
- name: image
dtype: image
- name: image_url
dtype: string
- name: author
dtype: string
- name: problem_id
dtype: string
splits:
- name: train
num_bytes: 695956.0
num_examples: 12
download_size: 637400
dataset_size: 695956.0
---
|
KelvinTichana2/ConversationalData | ---
license: mit
---
|
merve/ai-tube-dummy | ---
license: apache-2.0
pretty_name: AI Tube
tags:
- "ai-tube:Dummy"
---
## Description
In a galaxy far, far away, there lived a wholesome Knight with a small creature as a companion.
## Prompt
A video channel managed by the famous Space Knight Djin Darin.
The videos show scenery of the galaxy, futuristic knights, aliens, planets, and spacecraft.
The humor should come from how absurdly the small creature acts.
The videos will feature trips through the stories of the knight and the small creature, and life on different planets.
open-llm-leaderboard/details_bigcode__starcoderplus | ---
pretty_name: Evaluation run of bigcode/starcoderplus
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 3 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bigcode__starcoderplus\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T01:45:50.036434](https://huggingface.co/datasets/open-llm-leaderboard/details_bigcode__starcoderplus/blob/main/results_2023-10-15T01-45-50.036434.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0012583892617449664,\n\
\ \"em_stderr\": 0.00036305608931191014,\n \"f1\": 0.05428062080536913,\n\
\ \"f1_stderr\": 0.0012821278013514389,\n \"acc\": 0.39022141932642523,\n\
\ \"acc_stderr\": 0.010183303049937573\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.00036305608931191014,\n\
\ \"f1\": 0.05428062080536913,\n \"f1_stderr\": 0.0012821278013514389\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0803639120545868,\n \
\ \"acc_stderr\": 0.007488258573239077\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7000789265982637,\n \"acc_stderr\": 0.01287834752663607\n\
\ }\n}\n```"
repo_url: https://huggingface.co/bigcode/starcoderplus
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|arc:challenge|25_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_23T10_57_53.936866
path:
- '**/details_harness|drop|3_2023-09-23T10-57-53.936866.parquet'
- split: 2023_10_15T01_45_50.036434
path:
- '**/details_harness|drop|3_2023-10-15T01-45-50.036434.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T01-45-50.036434.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_23T10_57_53.936866
path:
- '**/details_harness|gsm8k|5_2023-09-23T10-57-53.936866.parquet'
- split: 2023_10_15T01_45_50.036434
path:
- '**/details_harness|gsm8k|5_2023-10-15T01-45-50.036434.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T01-45-50.036434.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hellaswag|10_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T09:43:16.279088.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-28T09:43:16.279088.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-28T09:43:16.279088.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_23T10_57_53.936866
path:
- '**/details_harness|winogrande|5_2023-09-23T10-57-53.936866.parquet'
- split: 2023_10_15T01_45_50.036434
path:
- '**/details_harness|winogrande|5_2023-10-15T01-45-50.036434.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T01-45-50.036434.parquet'
- config_name: results
data_files:
- split: 2023_08_28T09_43_16.279088
path:
- results_2023-08-28T09:43:16.279088.parquet
- split: 2023_09_23T10_57_53.936866
path:
- results_2023-09-23T10-57-53.936866.parquet
- split: 2023_10_15T01_45_50.036434
path:
- results_2023-10-15T01-45-50.036434.parquet
- split: latest
path:
- results_2023-10-15T01-45-50.036434.parquet
---
# Dataset Card for Evaluation run of bigcode/starcoderplus
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bigcode/starcoderplus
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bigcode__starcoderplus",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-15T01:45:50.036434](https://huggingface.co/datasets/open-llm-leaderboard/details_bigcode__starcoderplus/blob/main/results_2023-10-15T01-45-50.036434.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0012583892617449664,
"em_stderr": 0.00036305608931191014,
"f1": 0.05428062080536913,
"f1_stderr": 0.0012821278013514389,
"acc": 0.39022141932642523,
"acc_stderr": 0.010183303049937573
},
"harness|drop|3": {
"em": 0.0012583892617449664,
"em_stderr": 0.00036305608931191014,
"f1": 0.05428062080536913,
"f1_stderr": 0.0012821278013514389
},
"harness|gsm8k|5": {
"acc": 0.0803639120545868,
"acc_stderr": 0.007488258573239077
},
"harness|winogrande|5": {
"acc": 0.7000789265982637,
"acc_stderr": 0.01287834752663607
}
}
```
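As a quick sanity check (an observation from the numbers above, not documented leaderboard behavior), the aggregate `acc` in the `all` block appears to be the plain unweighted mean of the two accuracy-based tasks, while `em`/`f1` carry over from `harness|drop|3` alone:

```python
# Per-task accuracies copied from the "Latest results" block above
gsm8k_acc = 0.0803639120545868
winogrande_acc = 0.7000789265982637

# The "all" accuracy is the unweighted mean of the accuracy-based tasks
aggregate_acc = (gsm8k_acc + winogrande_acc) / 2
print(aggregate_acc)  # ~0.390221419326425, matching "all" -> "acc" up to float rounding
```

The same relationship holds for `acc_stderr`, which matches the mean of the two per-task standard errors.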
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-7d55fc88-11175496 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11
metrics: []
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
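The column mapping above can be illustrated with a toy record (the field values below are made up): the evaluator reads the dataset's `chapter` column as its input text and `summary_text` as the reference summary.

```python
# Evaluator field name -> dataset column name, as declared in eval_info above
col_mapping = {"text": "chapter", "target": "summary_text"}

record = {"chapter": "Chapter 1 text ...", "summary_text": "A short summary."}

# Rename dataset columns to the evaluator's expected field names
mapped = {eval_field: record[dataset_col] for eval_field, dataset_col in col_mapping.items()}
print(mapped)  # {'text': 'Chapter 1 text ...', 'target': 'A short summary.'}
```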
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
Gauravvaid-shell/instruct-python-llama2-20k | ---
license: gpl-3.0
---
|
jilp00/NousResearch-func-calling | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 3285167
num_examples: 1100
download_size: 1057557
dataset_size: 3285167
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Back-up/Topic-Prediction-with-pair-qa-v1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: topic
struct:
- name: topic
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: instruction
dtype: string
- name: prompt_name
dtype: string
splits:
- name: train
num_bytes: 153313
num_examples: 101
download_size: 82427
dataset_size: 153313
---
# Dataset Card for "Topic-Prediction-with-pair-qa-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sofc_materials_articles | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- token-classification
- text-classification
task_ids:
- named-entity-recognition
- slot-filling
- topic-classification
pretty_name: SofcMaterialsArticles
dataset_info:
features:
- name: text
dtype: string
- name: sentence_offsets
sequence:
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: sentences
sequence: string
- name: sentence_labels
sequence: int64
- name: token_offsets
sequence:
- name: offsets
sequence:
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: tokens
sequence:
sequence: string
- name: entity_labels
sequence:
sequence:
class_label:
names:
'0': B-DEVICE
'1': B-EXPERIMENT
'2': B-MATERIAL
'3': B-VALUE
'4': I-DEVICE
'5': I-EXPERIMENT
'6': I-MATERIAL
'7': I-VALUE
'8': O
- name: slot_labels
sequence:
sequence:
class_label:
names:
'0': B-anode_material
'1': B-cathode_material
'2': B-conductivity
'3': B-current_density
'4': B-degradation_rate
'5': B-device
'6': B-electrolyte_material
'7': B-experiment_evoking_word
'8': B-fuel_used
'9': B-interlayer_material
'10': B-interconnect_material
'11': B-open_circuit_voltage
'12': B-power_density
'13': B-resistance
'14': B-support_material
'15': B-thickness
'16': B-time_of_operation
'17': B-voltage
'18': B-working_temperature
'19': I-anode_material
'20': I-cathode_material
'21': I-conductivity
'22': I-current_density
'23': I-degradation_rate
'24': I-device
'25': I-electrolyte_material
'26': I-experiment_evoking_word
'27': I-fuel_used
'28': I-interlayer_material
'29': I-interconnect_material
'30': I-open_circuit_voltage
'31': I-power_density
'32': I-resistance
'33': I-support_material
'34': I-thickness
'35': I-time_of_operation
'36': I-voltage
'37': I-working_temperature
'38': O
- name: links
sequence:
- name: relation_label
dtype:
class_label:
names:
'0': coreference
'1': experiment_variation
'2': same_experiment
'3': thickness
- name: start_span_id
dtype: int64
- name: end_span_id
dtype: int64
- name: slots
sequence:
- name: frame_participant_label
dtype:
class_label:
names:
'0': anode_material
'1': cathode_material
'2': current_density
'3': degradation_rate
'4': device
'5': electrolyte_material
'6': fuel_used
'7': interlayer_material
'8': open_circuit_voltage
'9': power_density
'10': resistance
'11': support_material
'12': time_of_operation
'13': voltage
'14': working_temperature
- name: slot_id
dtype: int64
- name: spans
sequence:
- name: span_id
dtype: int64
- name: entity_label
dtype:
class_label:
names:
'0': ''
'1': DEVICE
'2': MATERIAL
'3': VALUE
- name: sentence_id
dtype: int64
- name: experiment_mention_type
dtype:
class_label:
names:
'0': ''
'1': current_exp
'2': future_work
'3': general_info
'4': previous_work
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: experiments
sequence:
- name: experiment_id
dtype: int64
- name: span_id
dtype: int64
- name: slots
sequence:
- name: frame_participant_label
dtype:
class_label:
names:
'0': anode_material
'1': cathode_material
'2': current_density
'3': degradation_rate
'4': conductivity
'5': device
'6': electrolyte_material
'7': fuel_used
'8': interlayer_material
'9': open_circuit_voltage
'10': power_density
'11': resistance
'12': support_material
'13': time_of_operation
'14': voltage
'15': working_temperature
- name: slot_id
dtype: int64
splits:
- name: train
num_bytes: 7402373
num_examples: 26
- name: test
num_bytes: 2650700
num_examples: 11
- name: validation
num_bytes: 1993857
num_examples: 8
download_size: 3733137
dataset_size: 12046930
---
# Dataset Card for SofcMaterialsArticles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [boschresearch/sofc-exp_textmining_resources](https://github.com/boschresearch/sofc-exp_textmining_resources)
- **Repository:** [boschresearch/sofc-exp_textmining_resources](https://github.com/boschresearch/sofc-exp_textmining_resources)
- **Paper:** [The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain](https://arxiv.org/abs/2006.03039)
- **Leaderboard:**
- **Point of Contact:** [Annemarie Friedrich](annemarie.friedrich@de.bosch.com)
### Dataset Summary
> The SOFC-Exp corpus contains 45 scientific publications about solid oxide fuel cells (SOFCs), published between 2013 and 2019 as open-access articles all with a CC-BY license. The dataset was manually annotated by domain experts with the following information:
>
> * Mentions of relevant experiments have been marked using a graph structure corresponding to instances of an Experiment frame (similar to the ones used in FrameNet.) We assume that an Experiment frame is introduced to the discourse by mentions of words such as report, test or measure (also called the frame-evoking elements). The nodes corresponding to the respective tokens are the heads of the graphs representing the Experiment frame.
> * The Experiment frame related to SOFC-Experiments defines a set of 16 possible participant slots. Participants are annotated as dependents of links between the frame-evoking element and the participant node.
> * In addition, we provide coarse-grained entity/concept types for all frame participants, i.e, MATERIAL, VALUE or DEVICE. Note that this annotation has not been performed on the full texts but only on sentences containing information about relevant experiments, and a few sentences in addition. In the paper, we run experiments for both tasks only on the set of sentences marked as experiment-describing in the gold standard, which is admittedly a slightly simplified setting. Entity types are only partially annotated on other sentences. Slot filling could of course also be evaluated in a fully automatic setting with automatic experiment sentence detection as a first step.
### Supported Tasks and Leaderboards
- `topic-classification`: The dataset can be used to train a model for topic-classification, to identify sentences that mention SOFC-related experiments.
- `named-entity-recognition`: The dataset can be used to train a named entity recognition model to detect `MATERIAL`, `VALUE`, `DEVICE`, and `EXPERIMENT` entities.
- `slot-filling`: The slot-filling task is approached as fine-grained entity-typing-in-context, assuming that each sentence represents a single experiment frame. Sequence tagging architectures are utilized for tagging the tokens of each experiment-describing sentence with the set of slot types.
The paper experiments with BiLSTM architectures with `BERT`- and `SciBERT`-generated token embeddings, as well as with `BERT` and `SciBERT` directly for the modeling task. A simple CRF architecture is used as a baseline for the sequence-tagging tasks. Implementations of the transformer-based architectures can be found in the `huggingface/transformers` library: [BERT](https://huggingface.co/bert-base-uncased), [SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased)
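Since both the entity and slot annotations use BIO encoding, a decoded span view is often more convenient than raw tags. The following illustrative helper (not part of the dataset tooling) converts a BIO tag sequence into `(label, start, end)` spans:

```python
def bio_to_spans(tags):
    """Convert a BIO tag sequence into (label, start, end) spans, end exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        # A B- tag, an O tag, or an I- tag with a different label closes the open span
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != label):
            if label is not None:
                spans.append((label, start, i))
            start, label = (i, tag[2:]) if tag != "O" else (None, None)
    if label is not None:  # flush a span still open at the end of the sequence
        spans.append((label, start, len(tags)))
    return spans

tags = ["B-anode_material", "I-anode_material", "O", "B-device", "O"]
print(bio_to_spans(tags))  # [('anode_material', 0, 2), ('device', 3, 4)]
```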
### Languages
This corpus is in English.
## Dataset Structure
### Data Instances
As each example is the full text of an academic paper plus annotations, a JSON-formatted example is too large to reproduce in this README.
### Data Fields
- `text`: The full text of the paper
- `sentence_offsets`: Start and end character offsets for each sentence in the text.
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `sentences`: A sequence of the sentences in the text (using `sentence_offsets`)
- `sentence_labels`: Sequence of binary labels for whether a sentence contains information of interest.
- `token_offsets`: Sequence of sequences containing start and end character offsets for each token in each sentence in the text.
- `offsets`: a dictionary feature containing:
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `tokens`: Sequence of sequences containing the tokens for each sentence in the text.
- `feature`: a `string` feature.
- `entity_labels`: a dictionary feature containing:
- `feature`: a classification label, with possible values including `B-DEVICE`, `B-EXPERIMENT`, `B-MATERIAL`, `B-VALUE`, `I-DEVICE`.
- `slot_labels`: a dictionary feature containing:
- `feature`: a classification label, with possible values including `B-anode_material`, `B-cathode_material`, `B-conductivity`, `B-current_density`, `B-degradation_rate`.
- `links`: a dictionary feature containing:
- `relation_label`: a classification label, with possible values including `coreference`, `experiment_variation`, `same_experiment`, `thickness`.
- `start_span_id`: a `int64` feature.
- `end_span_id`: a `int64` feature.
- `slots`: a dictionary feature containing:
- `frame_participant_label`: a classification label, with possible values including `anode_material`, `cathode_material`, `current_density`, `degradation_rate`, `device`.
- `slot_id`: a `int64` feature.
- `spans`: a dictionary feature containing:
- `span_id`: a `int64` feature.
- `entity_label`: a classification label, with possible values including ``, `DEVICE`, `MATERIAL`, `VALUE`.
- `sentence_id`: a `int64` feature.
- `experiment_mention_type`: a classification label, with possible values including ``, `current_exp`, `future_work`, `general_info`, `previous_work`.
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `experiments`: a dictionary feature containing:
- `experiment_id`: a `int64` feature.
- `span_id`: a `int64` feature.
- `slots`: a dictionary feature containing:
- `frame_participant_label`: a classification label, with possible values including `anode_material`, `cathode_material`, `current_density`, `degradation_rate`, `conductivity`.
- `slot_id`: a `int64` feature.
Very detailed information for each of the fields can be found in the [corpus file formats section](https://github.com/boschresearch/sofc-exp_textmining_resources#corpus-file-formats) of the associated dataset repo.
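To make the offset-based fields concrete, here is a toy illustration (with made-up text, not an actual corpus record) of how `sentences` relates to `text` and `sentence_offsets`: each sentence is simply the character slice of `text` delimited by its begin/end offsets.

```python
def sentences_from_offsets(text, sentence_offsets):
    """Recover the sentence strings from the full text and its character offsets."""
    begins = sentence_offsets["begin_char_offset"]
    ends = sentence_offsets["end_char_offset"]
    return [text[b:e] for b, e in zip(begins, ends)]

text = "The anode was tested. Power density reached 1.2 W/cm2."
offsets = {"begin_char_offset": [0, 22], "end_char_offset": [21, 54]}
print(sentences_from_offsets(text, offsets))
# ['The anode was tested.', 'Power density reached 1.2 W/cm2.']
```

The per-sentence `token_offsets` follow the same convention, nested one level deeper.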
### Data Splits
This dataset consists of three splits:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Examples | 26 | 8 | 11 |
The authors propose using the training data in a 5-fold cross-validation setting for development and tuning, and finally applying the model(s) to the independent test set.
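The 5-fold setup over the 26 training documents can be sketched as follows (a plain-Python sketch of the general scheme; the paper's exact fold assignments are not reproduced here):

```python
def kfold_splits(n_examples, k=5):
    """Partition indices 0..n_examples-1 into k folds; yield (train, valid) index lists."""
    indices = list(range(n_examples))
    folds = [indices[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        valid = folds[i]
        held_out = set(valid)
        train = [idx for idx in indices if idx not in held_out]
        yield train, valid

for fold, (train_idx, valid_idx) in enumerate(kfold_splits(26, k=5)):
    print(f"fold {fold}: {len(train_idx)} train docs, {len(valid_idx)} valid docs")
    # e.g. fold 0: 20 train docs, 6 valid docs
```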
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The corpus consists of 45 open-access scientific publications about SOFCs and related research, annotated by domain experts.
### Annotations
#### Annotation process
For manual annotation, the authors use the InCeption annotation tool (Klie et al., 2018).
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The manual annotations created for the SOFC-Exp corpus are licensed under a [Creative Commons Attribution 4.0 International License (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@misc{friedrich2020sofcexp,
title={The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain},
author={Annemarie Friedrich and Heike Adel and Federico Tomazic and Johannes Hingerl and Renou Benteau and Anika Maruscyk and Lukas Lange},
year={2020},
eprint={2006.03039},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset. |
CyberHarem/medusa_fatestaynightufotable | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Medusa (Fate Stay Night [UFOTABLE])
This is the dataset of Medusa (Fate Stay Night [UFOTABLE]), containing 24 images and their tags.
The core tags of this character are `long_hair, purple_hair, very_long_hair, facial_mark, breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 24 | 19.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/medusa_fatestaynightufotable/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 24 | 15.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/medusa_fatestaynightufotable/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 45 | 28.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/medusa_fatestaynightufotable/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 24 | 19.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/medusa_fatestaynightufotable/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 45 | 35.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/medusa_fatestaynightufotable/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/medusa_fatestaynightufotable',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
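The IMG+TXT packages pair each image with a same-stem `.txt` file holding its tags. A minimal sketch for collecting those pairs after extracting one of the archives above (the layout assumption is ours, not an official loader):

```python
import os

def pair_img_txt(dataset_dir):
    """Pair each image in dataset_dir with its same-stem .txt tag file."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() in ('.png', '.jpg', '.jpeg', '.webp'):
            txt_path = os.path.join(dataset_dir, stem + '.txt')
            if os.path.exists(txt_path):
                with open(txt_path, encoding='utf-8') as f:
                    pairs.append((os.path.join(dataset_dir, name), f.read().strip()))
    return pairs
```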
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | 1girl, solo, blindfold, cleavage, bare_shoulders, forehead_mark, collar, detached_sleeves, strapless_dress, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blindfold | cleavage | bare_shoulders | forehead_mark | collar | detached_sleeves | strapless_dress | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:------------|:-----------|:-----------------|:----------------|:---------|:-------------------|:------------------|:-------------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X |
|
chansung/requested-arxiv-ids-3 | ---
dataset_info:
features:
- name: Requested arXiv IDs
sequence: string
splits:
- name: train
num_bytes: 7.5
num_examples: 1
download_size: 1042
dataset_size: 7.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andfanilo/streamlit-issues | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
dtype: 'null'
- name: comments
dtype: int64
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 15843221
num_examples: 5000
download_size: 3914406
dataset_size: 15843221
---
# Dataset Card for "streamlit-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_10000_spambase_sgosdt_l256_dim10_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 62261087
dataset_size: 472880000
---
# Dataset Card for "autotree_pmlb_10000_spambase_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367159 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: futin/guess
dataset_config: vi_3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
gmongaras/BERT_Base_Cased_512_Dataset | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 36961083473
num_examples: 136338653
download_size: 13895887135
dataset_size: 36961083473
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Dataset built with the bert-cased tokenizer; sentences are cut off at 512 tokens (individual sentences, not sentence pairs), after all sentence pairs were extracted.
Original datasets:
- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia Variant: 20220301.en |
peldrak/riviera_labeled_split | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 298529487.0
num_examples: 231
download_size: 67692943
dataset_size: 298529487.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
custom-diffusion-library/customconcept101-customdiffusion | ---
license: cc-by-4.0
---
|
kardosdrur/nb-nli | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 74977588.8
num_examples: 502724
- name: test
num_bytes: 18744397.2
num_examples: 125681
download_size: 58272954
dataset_size: 93721986.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Norsk Bokmål NLI dataset
Machine translation of MNLI and SNLI to Bokmål.
Based on tollefj/all-nli-NOB, but with all neutral examples removed and a train-test split applied;
entailment is mapped to 1 and contradiction to 0, so that AnglE training can be used on the dataset. |
nlp-tlp/MaintNorm | ---
license: mit
text_categories:
- lexical normalization
language:
- en
pretty_name: MaintNorm
size_categories:
- 10K<n<100K
multilingualism:
- monolingual
---
# MaintNorm Dataset Card
## Overview
The MaintNorm dataset is a collection of 12,000 short English texts extracted from maintenance work orders at three major mining organisations in Australia. It is annotated for both lexical normalization and token-level entity tagging, making it a valuable resource for natural language processing research and applications in industrial contexts.
For further information about the annotation process and dataset characteristics, refer to the [MaintNorm paper](https://aclanthology.org/2024.wnut-1.7/) or visit the [GitHub repository](https://github.com/nlp-tlp/maintnorm).
## Dataset Structure
This dataset includes data from three distinct company-specific sources (`company_a`, `company_b`, `company_c`), along with a `combined` dataset that integrates data across these sources. This structure supports both granular and comprehensive analyses.
## Masking Scheme
To address privacy and data specificity, the following token-level entity tags are used:
- `<id>`: Asset identifiers, for example, _ENG001_, _rd1286_
- `<sensitive>`: Sensitive information specific to organisations, including proprietary systems, third-party contractors, and names of personnel.
- `<num>`: Numerical entities, such as _8_, _7001223_
- `<date>`: Representations of dates, either in numerical form like _10/10/2023_ or phrase form such as _8th Dec_
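The masking above can be approximated with simple token-level rules. The regexes below are a hedged sketch (illustrative patterns of ours, not the annotators' actual criteria):

```python
import re

# Illustrative approximations of the masking scheme; real annotation
# used human judgement, not these patterns.
MASK_PATTERNS = [
    (re.compile(r'^\d{1,2}/\d{1,2}/\d{2,4}$'), '<date>'),  # e.g. 10/10/2023
    (re.compile(r'^[A-Za-z]{2,3}\d+$'), '<id>'),           # e.g. ENG001, rd1286
    (re.compile(r'^\d+$'), '<num>'),                       # e.g. 8, 7001223
]

def mask_token(token):
    """Return the mask tag for a token, or the token itself if no rule fires."""
    for pattern, tag in MASK_PATTERNS:
        if pattern.match(token):
            return tag
    return token
```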
## Dataset Instances
The dataset adopts a standard normalisation format similar to that used in the WNUT shared tasks, with each text resembling the format seen in CoNLL03: tokens are separated by newlines, and each token is accompanied by its normalised or masked counterpart, separated by a tab.
### Examples
```txt
Exhaust exhaust
Fan fan
#6 number <num>
Tripping tripping
c/b circuit breaker
HF338 <id>
INVESTAGATE investigate
24V <num> V
FAULT fault
```
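Entries in this format can be parsed with a few lines of Python. This is a sketch assuming tab separators and blank lines between work orders, not an official loader:

```python
def parse_maintnorm(text):
    """Parse CoNLL-style 'raw<TAB>normalised' lines into work orders.

    Blank lines separate work orders; a single raw token may
    normalise to several words (e.g. 'c/b' -> 'circuit breaker').
    """
    orders, current = [], []
    for line in text.splitlines():
        if not line.strip():
            if current:
                orders.append(current)
                current = []
            continue
        raw, norm = line.split('\t', 1)
        current.append((raw, norm))
    if current:
        orders.append(current)
    return orders
```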
## Citation
Please cite the following paper if you use this dataset in your research:
```
@inproceedings{bikaun-etal-2024-maintnorm,
title = "{M}aint{N}orm: A corpus and benchmark model for lexical normalisation and masking of industrial maintenance short text",
author = "Bikaun, Tyler and
Hodkiewicz, Melinda and
Liu, Wei",
editor = {van der Goot, Rob and
Bak, JinYeong and
M{\"u}ller-Eberstein, Max and
Xu, Wei and
Ritter, Alan and
Baldwin, Tim},
booktitle = "Proceedings of the Ninth Workshop on Noisy and User-generated Text (W-NUT 2024)",
month = mar,
year = "2024",
address = "San {\.G}iljan, Malta",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnut-1.7",
pages = "68--78",
}
``` |
benjis/diversevul | ---
size_categories:
- 100K<n<1M
pretty_name: DiverseVul
tags:
- vulnerability detection
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: func
dtype: string
- name: target
dtype: int64
- name: cwe
sequence: string
- name: project
dtype: string
- name: commit_id
dtype: string
- name: hash
dtype: float64
- name: size
dtype: int64
- name: message
dtype: string
splits:
- name: train
num_bytes: 536747553.93245524
num_examples: 264393
- name: validation
num_bytes: 67093190.47748508
num_examples: 33049
- name: test
num_bytes: 67095220.59005967
num_examples: 33050
download_size: 61493712
dataset_size: 670935965.0
---
# Dataset Card for "diversevul"
Unofficial, not affiliated with the authors.
- **Paper:** https://surrealyz.github.io/files/pubs/raid23-diversevul.pdf
- **Repository:** https://github.com/wagner-group/diversevul
|
Back-up/test_ds_v3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: response
struct:
- name: response
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: instruction
dtype: string
- name: prompt_name
dtype: string
- name: metadata
struct:
- name: max_ratio
dtype: float64
- name: paragraph_similar
dtype: string
- name: start_index
dtype: int64
splits:
- name: train
num_bytes: 21511788
num_examples: 7597
download_size: 8245485
dataset_size: 21511788
---
# Dataset Card for "test_ds_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
accavdar/layoutlmv3_employee_info_v1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: words
sequence: string
- name: bboxes
sequence:
sequence: int64
- name: labels
sequence:
class_label:
names:
'0': ssn
'1': home_address
'2': employer_name
'3': work_address
'4': full_name
'5': hp_area
'6': hp_number
'7': home_apt
'8': home_city_state
'9': home_zip_code
'10': wp_area
'11': wp_number
'12': work_city_state
'13': work_zip_code
- name: image
dtype: image
splits:
- name: train
num_bytes: 6204338.0
num_examples: 80
- name: test
num_bytes: 1565146.0
num_examples: 20
download_size: 6637326
dataset_size: 7769484.0
---
# Dataset Card for "layoutlmv3_employee_info_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KanishkaRandunu/SinhalaWikipediaArticles | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 99188239
num_examples: 43328
download_size: 41545918
dataset_size: 99188239
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- si
tags:
- sinhalawikipedia
- sinhalawiki
- sinhala
- wikipedia
- sinhaladataset
- sinhalatext
size_categories:
- 10K<n<100K
--- |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-92000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1069541
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fossoescp/morality | ---
license: mit
---
|
InceptiveDev/DatasetCoverLetters | ---
license: mit
---
|
AdapterOcean/med_alpaca_standardized_cluster_53 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 79304459
num_examples: 8034
download_size: 23261451
dataset_size: 79304459
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_53"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Denm/lch_codebase | ---
license: apache-2.0
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1563382173
num_examples: 72255
download_size: 445895201
dataset_size: 1563382173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_jae24__openhermes_dpo_norobot_0201 | ---
pretty_name: Evaluation run of jae24/openhermes_dpo_norobot_0201
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [jae24/openhermes_dpo_norobot_0201](https://huggingface.co/jae24/openhermes_dpo_norobot_0201)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jae24__openhermes_dpo_norobot_0201\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-04T14:15:33.723990](https://huggingface.co/datasets/open-llm-leaderboard/details_jae24__openhermes_dpo_norobot_0201/blob/main/results_2024-01-04T14-15-33.723990.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6230441962103045,\n\
\ \"acc_stderr\": 0.0325156991551045,\n \"acc_norm\": 0.6274562240705078,\n\
\ \"acc_norm_stderr\": 0.033162684621809393,\n \"mc1\": 0.2913096695226438,\n\
\ \"mc1_stderr\": 0.01590598704818483,\n \"mc2\": 0.474388925160649,\n\
\ \"mc2_stderr\": 0.014635683515771682\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5597269624573379,\n \"acc_stderr\": 0.014506769524804237,\n\
\ \"acc_norm\": 0.6203071672354948,\n \"acc_norm_stderr\": 0.01418211986697487\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6061541525592511,\n\
\ \"acc_stderr\": 0.004876028037941937,\n \"acc_norm\": 0.8339972117108145,\n\
\ \"acc_norm_stderr\": 0.003713227064225387\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n\
\ \"acc_stderr\": 0.042763494943765995,\n \"acc_norm\": 0.5703703703703704,\n\
\ \"acc_norm_stderr\": 0.042763494943765995\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.038424985593952694,\n\
\ \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.038424985593952694\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n\
\ \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.660377358490566,\n \"acc_stderr\": 0.02914690474779833,\n\
\ \"acc_norm\": 0.660377358490566,\n \"acc_norm_stderr\": 0.02914690474779833\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n\
\ \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n\
\ \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\"\
: 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6127167630057804,\n\
\ \"acc_stderr\": 0.03714325906302065,\n \"acc_norm\": 0.6127167630057804,\n\
\ \"acc_norm_stderr\": 0.03714325906302065\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107224,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107224\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5404255319148936,\n \"acc_stderr\": 0.03257901482099835,\n\
\ \"acc_norm\": 0.5404255319148936,\n \"acc_norm_stderr\": 0.03257901482099835\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n\
\ \"acc_stderr\": 0.046774730044911984,\n \"acc_norm\": 0.4473684210526316,\n\
\ \"acc_norm_stderr\": 0.046774730044911984\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4896551724137931,\n \"acc_stderr\": 0.04165774775728763,\n\
\ \"acc_norm\": 0.4896551724137931,\n \"acc_norm_stderr\": 0.04165774775728763\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41798941798941797,\n \"acc_stderr\": 0.025402555503260912,\n \"\
acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.025402555503260912\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7612903225806451,\n\
\ \"acc_stderr\": 0.024251071262208837,\n \"acc_norm\": 0.7612903225806451,\n\
\ \"acc_norm_stderr\": 0.024251071262208837\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.034991131376767445,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.034991131376767445\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\"\
: 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n\
\ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7777777777777778,\n \"acc_stderr\": 0.02962022787479049,\n \"\
acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.02962022787479049\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.025416343096306422,\n\
\ \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.025416343096306422\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6051282051282051,\n \"acc_stderr\": 0.024784316942156402,\n\
\ \"acc_norm\": 0.6051282051282051,\n \"acc_norm_stderr\": 0.024784316942156402\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028742040903948485,\n \
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028742040903948485\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566545,\n\
\ \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566545\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8311926605504587,\n \"acc_stderr\": 0.01606005626853034,\n \"\
acc_norm\": 0.8311926605504587,\n \"acc_norm_stderr\": 0.01606005626853034\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.48148148148148145,\n \"acc_stderr\": 0.03407632093854052,\n \"\
acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.03407632093854052\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7892156862745098,\n \"acc_stderr\": 0.02862654791243741,\n \"\
acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.02862654791243741\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7679324894514767,\n \"acc_stderr\": 0.02747974455080851,\n \
\ \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.02747974455080851\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n\
\ \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8376068376068376,\n\
\ \"acc_stderr\": 0.024161618127987745,\n \"acc_norm\": 0.8376068376068376,\n\
\ \"acc_norm_stderr\": 0.024161618127987745\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8148148148148148,\n\
\ \"acc_stderr\": 0.013890862162876168,\n \"acc_norm\": 0.8148148148148148,\n\
\ \"acc_norm_stderr\": 0.013890862162876168\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7052023121387283,\n \"acc_stderr\": 0.024547617794803828,\n\
\ \"acc_norm\": 0.7052023121387283,\n \"acc_norm_stderr\": 0.024547617794803828\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.25139664804469275,\n\
\ \"acc_stderr\": 0.014508979453553984,\n \"acc_norm\": 0.25139664804469275,\n\
\ \"acc_norm_stderr\": 0.014508979453553984\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n\
\ \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6816720257234726,\n\
\ \"acc_stderr\": 0.026457225067811025,\n \"acc_norm\": 0.6816720257234726,\n\
\ \"acc_norm_stderr\": 0.026457225067811025\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4574468085106383,\n \"acc_stderr\": 0.029719281272236837,\n \
\ \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.029719281272236837\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4595827900912647,\n\
\ \"acc_stderr\": 0.012728446067669975,\n \"acc_norm\": 0.4595827900912647,\n\
\ \"acc_norm_stderr\": 0.012728446067669975\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.02888819310398863,\n\
\ \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.02888819310398863\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6699346405228758,\n \"acc_stderr\": 0.019023726160724556,\n \
\ \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.019023726160724556\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\
\ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\
\ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.0293936093198798,\n\
\ \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.0293936093198798\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n\
\ \"acc_stderr\": 0.027686913588013024,\n \"acc_norm\": 0.8109452736318408,\n\
\ \"acc_norm_stderr\": 0.027686913588013024\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977704,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977704\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n\
\ \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n\
\ \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2913096695226438,\n\
\ \"mc1_stderr\": 0.01590598704818483,\n \"mc2\": 0.474388925160649,\n\
\ \"mc2_stderr\": 0.014635683515771682\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7821625887924231,\n \"acc_stderr\": 0.011601066079939324\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4920394238059136,\n \
\ \"acc_stderr\": 0.01377073906313537\n }\n}\n```"
repo_url: https://huggingface.co/jae24/openhermes_dpo_norobot_0201
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|arc:challenge|25_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|gsm8k|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hellaswag|10_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T14-15-33.723990.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T14-15-33.723990.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- '**/details_harness|winogrande|5_2024-01-04T14-15-33.723990.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-04T14-15-33.723990.parquet'
- config_name: results
data_files:
- split: 2024_01_04T14_15_33.723990
path:
- results_2024-01-04T14-15-33.723990.parquet
- split: latest
path:
- results_2024-01-04T14-15-33.723990.parquet
---
# Dataset Card for Evaluation run of jae24/openhermes_dpo_norobot_0201
Dataset automatically created during the evaluation run of model [jae24/openhermes_dpo_norobot_0201](https://huggingface.co/jae24/openhermes_dpo_norobot_0201) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jae24__openhermes_dpo_norobot_0201",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-04T14:15:33.723990](https://huggingface.co/datasets/open-llm-leaderboard/details_jae24__openhermes_dpo_norobot_0201/blob/main/results_2024-01-04T14-15-33.723990.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.6230441962103045,
"acc_stderr": 0.0325156991551045,
"acc_norm": 0.6274562240705078,
"acc_norm_stderr": 0.033162684621809393,
"mc1": 0.2913096695226438,
"mc1_stderr": 0.01590598704818483,
"mc2": 0.474388925160649,
"mc2_stderr": 0.014635683515771682
},
"harness|arc:challenge|25": {
"acc": 0.5597269624573379,
"acc_stderr": 0.014506769524804237,
"acc_norm": 0.6203071672354948,
"acc_norm_stderr": 0.01418211986697487
},
"harness|hellaswag|10": {
"acc": 0.6061541525592511,
"acc_stderr": 0.004876028037941937,
"acc_norm": 0.8339972117108145,
"acc_norm_stderr": 0.003713227064225387
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.042763494943765995,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.042763494943765995
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6644736842105263,
"acc_stderr": 0.038424985593952694,
"acc_norm": 0.6644736842105263,
"acc_norm_stderr": 0.038424985593952694
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.660377358490566,
"acc_stderr": 0.02914690474779833,
"acc_norm": 0.660377358490566,
"acc_norm_stderr": 0.02914690474779833
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6127167630057804,
"acc_stderr": 0.03714325906302065,
"acc_norm": 0.6127167630057804,
"acc_norm_stderr": 0.03714325906302065
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.74,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5404255319148936,
"acc_stderr": 0.03257901482099835,
"acc_norm": 0.5404255319148936,
"acc_norm_stderr": 0.03257901482099835
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.046774730044911984,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.046774730044911984
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4896551724137931,
"acc_stderr": 0.04165774775728763,
"acc_norm": 0.4896551724137931,
"acc_norm_stderr": 0.04165774775728763
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04444444444444449,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04444444444444449
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7612903225806451,
"acc_stderr": 0.024251071262208837,
"acc_norm": 0.7612903225806451,
"acc_norm_stderr": 0.024251071262208837
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.034991131376767445,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.034991131376767445
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.02962022787479049,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.02962022787479049
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.025416343096306422,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.025416343096306422
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6051282051282051,
"acc_stderr": 0.024784316942156402,
"acc_norm": 0.6051282051282051,
"acc_norm_stderr": 0.024784316942156402
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.028742040903948485,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.028742040903948485
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6512605042016807,
"acc_stderr": 0.030956636328566545,
"acc_norm": 0.6512605042016807,
"acc_norm_stderr": 0.030956636328566545
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.038615575462551684,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.038615575462551684
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8311926605504587,
"acc_stderr": 0.01606005626853034,
"acc_norm": 0.8311926605504587,
"acc_norm_stderr": 0.01606005626853034
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.03407632093854052,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.03407632093854052
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7892156862745098,
"acc_stderr": 0.02862654791243741,
"acc_norm": 0.7892156862745098,
"acc_norm_stderr": 0.02862654791243741
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7679324894514767,
"acc_stderr": 0.02747974455080851,
"acc_norm": 0.7679324894514767,
"acc_norm_stderr": 0.02747974455080851
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8376068376068376,
"acc_stderr": 0.024161618127987745,
"acc_norm": 0.8376068376068376,
"acc_norm_stderr": 0.024161618127987745
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.013890862162876168,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.013890862162876168
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7052023121387283,
"acc_stderr": 0.024547617794803828,
"acc_norm": 0.7052023121387283,
"acc_norm_stderr": 0.024547617794803828
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.25139664804469275,
"acc_stderr": 0.014508979453553984,
"acc_norm": 0.25139664804469275,
"acc_norm_stderr": 0.014508979453553984
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6816720257234726,
"acc_stderr": 0.026457225067811025,
"acc_norm": 0.6816720257234726,
"acc_norm_stderr": 0.026457225067811025
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4574468085106383,
"acc_stderr": 0.029719281272236837,
"acc_norm": 0.4574468085106383,
"acc_norm_stderr": 0.029719281272236837
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4595827900912647,
"acc_stderr": 0.012728446067669975,
"acc_norm": 0.4595827900912647,
"acc_norm_stderr": 0.012728446067669975
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6544117647058824,
"acc_stderr": 0.02888819310398863,
"acc_norm": 0.6544117647058824,
"acc_norm_stderr": 0.02888819310398863
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6699346405228758,
"acc_stderr": 0.019023726160724556,
"acc_norm": 0.6699346405228758,
"acc_norm_stderr": 0.019023726160724556
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6979591836734694,
"acc_stderr": 0.0293936093198798,
"acc_norm": 0.6979591836734694,
"acc_norm_stderr": 0.0293936093198798
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8109452736318408,
"acc_stderr": 0.027686913588013024,
"acc_norm": 0.8109452736318408,
"acc_norm_stderr": 0.027686913588013024
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.034873508801977704,
"acc_norm": 0.86,
"acc_norm_stderr": 0.034873508801977704
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2913096695226438,
"mc1_stderr": 0.01590598704818483,
"mc2": 0.474388925160649,
"mc2_stderr": 0.014635683515771682
},
"harness|winogrande|5": {
"acc": 0.7821625887924231,
"acc_stderr": 0.011601066079939324
},
"harness|gsm8k|5": {
"acc": 0.4920394238059136,
"acc_stderr": 0.01377073906313537
}
}
```
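As a quick sanity check, the headline metrics in the aggregate `"all"` block can be parsed back with plain `json`. This is a minimal sketch using values copied from the results shown above (no download needed):

```python
import json

# Aggregate "all" block copied verbatim from the results JSON above
raw = '''
{
  "all": {
    "acc": 0.6230441962103045,
    "acc_stderr": 0.0325156991551045,
    "acc_norm": 0.6274562240705078,
    "acc_norm_stderr": 0.033162684621809393,
    "mc1": 0.2913096695226438,
    "mc1_stderr": 0.01590598704818483,
    "mc2": 0.474388925160649,
    "mc2_stderr": 0.014635683515771682
  }
}
'''

results = json.loads(raw)
acc = results["all"]["acc"]
print(f"acc = {acc:.4f} ± {results['all']['acc_stderr']:.4f}")
```

The same pattern applies to the full results file linked above once it is downloaded (for instance with `huggingface_hub.hf_hub_download`).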
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
NickKolok/regs-flat2danimerge-v20 | ---
license: agpl-3.0
---
|
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-84482e-60145145395 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: google/roberta2roberta_L-24_cnn_daily_mail
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/roberta2roberta_L-24_cnn_daily_mail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SINI RAJ P](https://huggingface.co/SINI RAJ P) for evaluating this model. |
ajhamdi/SPARF | ---
license: mit
---
|
CyberHarem/august_von_parseval_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of august_von_parseval/アウグスト・フォン・パーセヴァル/奥古斯特·冯·帕塞瓦尔 (Azur Lane)
This is the dataset of august_von_parseval/アウグスト・フォン・パーセヴァル/奥古斯特·冯·帕塞瓦尔 (Azur Lane), containing 394 images and their tags.
The core tags of this character are `long_hair, breasts, horns, hair_over_one_eye, large_breasts, mechanical_horns, purple_eyes, curled_horns, very_long_hair, purple_hair, between_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 394 | 775.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/august_von_parseval_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 394 | 352.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/august_von_parseval_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1019 | 783.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/august_von_parseval_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 394 | 640.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/august_von_parseval_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1019 | 1.21 GiB | [Download](https://huggingface.co/datasets/CyberHarem/august_von_parseval_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/august_von_parseval_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 28 |  |  |  |  |  | 1girl, bare_shoulders, black_thighhighs, cleavage, solo, looking_at_viewer, white_gloves, black_dress, clothing_cutout, sitting, iron_cross, one_eye_covered, detached_sleeves |
| 1 | 6 |  |  |  |  |  | 1girl, bare_shoulders, black_dress, black_thighhighs, cleavage, clothing_cutout, looking_at_viewer, solo, white_gloves, blue_eyes, grey_hair, iron_cross, simple_background, white_background, bangs, blush, one_eye_covered, short_dress, parted_lips |
| 2 | 22 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, solo, black_dress, looking_at_viewer, upper_body, clothing_cutout, white_gloves, detached_sleeves, simple_background, iron_cross, white_background, blue_eyes, bangs, parted_lips |
| 3 | 29 |  |  |  |  |  | 1girl, official_alternate_costume, solo, looking_at_viewer, black_dress, white_apron, white_thighhighs, white_dress, two-tone_dress, sleeveless_dress, maid_headdress, simple_background, clothing_cutout, one_eye_covered, strap_between_breasts, white_background, bare_shoulders, sitting, cleavage |
| 4 | 8 |  |  |  |  |  | 1girl, armpits, looking_at_viewer, official_alternate_costume, one_eye_covered, solo, white_dress, white_thighhighs, arms_up, full_body, garter_straps, no_shoes, sitting, black_footwear, feet, shoe_dangle, shoes_removed, thighs, two-tone_dress, couch, high_heels, ribbon_in_mouth, single_shoe, black_dress, indoors, legs, maid_headdress, plant, sideboob, soles, toes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | black_thighhighs | cleavage | solo | looking_at_viewer | white_gloves | black_dress | clothing_cutout | sitting | iron_cross | one_eye_covered | detached_sleeves | blue_eyes | grey_hair | simple_background | white_background | bangs | blush | short_dress | parted_lips | upper_body | official_alternate_costume | white_apron | white_thighhighs | white_dress | two-tone_dress | sleeveless_dress | maid_headdress | strap_between_breasts | armpits | arms_up | full_body | garter_straps | no_shoes | black_footwear | feet | shoe_dangle | shoes_removed | thighs | couch | high_heels | ribbon_in_mouth | single_shoe | indoors | legs | plant | sideboob | soles | toes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-------------------|:-----------|:-------|:--------------------|:---------------|:--------------|:------------------|:----------|:-------------|:------------------|:-------------------|:------------|:------------|:--------------------|:-------------------|:--------|:--------|:--------------|:--------------|:-------------|:-----------------------------|:--------------|:-------------------|:--------------|:-----------------|:-------------------|:-----------------|:------------------------|:----------|:----------|:------------|:----------------|:-----------|:-----------------|:-------|:--------------|:----------------|:---------|:--------|:-------------|:------------------|:--------------|:----------|:-------|:--------|:-----------|:--------|:-------|
| 0 | 28 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 22 |  |  |  |  |  | X | X | | X | X | X | X | X | X | | X | | X | X | | X | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 29 |  |  |  |  |  | X | X | | X | X | X | | X | X | X | | X | | | | X | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | | | X | X | | X | | X | | X | | | | | | | | | | | X | | X | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
DBQ/Balenciaga.Product.prices.China | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: China - Balenciaga - Product-level price list
tags:
- webscraping
- ecommerce
- Balenciaga
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: string
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 619495
num_examples: 1944
download_size: 176304
dataset_size: 619495
---
# Balenciaga web scraped data
## About the website
The **fashion industry** in the **Asia Pacific** region, particularly in **China**, is a hotbed of activity. It is one of the most lucrative markets in the world, spurred by a fast-growing middle class with an increased appetite for luxury products. The Chinese market especially plays host to many high-end luxury fashion brands like **Balenciaga**. A significant transition has been noted in the mode of shopping, with a sharp turn towards **Ecommerce**. The dataset represents **Ecommerce product-list page (PLP) data** specific to Balenciaga's online marketplace in China, highlighting the extensive variety of products offered by this luxury fashion house in the booming Chinese digital market.
## Link to **dataset**
[China - Balenciaga - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Balenciaga%20Product-prices%20China/r/recUPih9uOFY6nzNC)
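For illustration, the `flg_discount` flag is consistent with comparing the two price columns declared in the schema above. This is a hedged sketch with made-up rows — only the field names come from the dataset's feature list; the values are illustrative:

```python
# Hypothetical rows mirroring the schema above (field names real, values made up)
rows = [
    {"product_code": "A1", "full_price": 1200.0, "price": 1200.0},
    {"product_code": "B2", "full_price": 1500.0, "price": 990.0},
]

# A product is flagged as discounted when its current price is below full price
for row in rows:
    row["flg_discount"] = int(row["price"] < row["full_price"])

print([r["flg_discount"] for r in rows])  # → [0, 1]
```

The real rows can be loaded with `datasets.load_dataset("DBQ/Balenciaga.Product.prices.China", split="train")` and filtered on the same column.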
|
dustflover/rebecca | ---
license: unknown
---
|
notrichardren/political-sychophancy-lie | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Topic
dtype: string
- name: Type
dtype: string
splits:
- name: train
num_bytes: 554133
num_examples: 1564
download_size: 158946
dataset_size: 554133
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "political-sychophancy-lie"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Matinrzv/player | ---
license: apache-2.0
---
|
koaning/fashion-test | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Detecting fashion substrings in text.
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: section
dtype: string
- name: _input_hash
dtype: int64
- name: _task_hash
dtype: int64
- name: tokens
list:
- name: end
dtype: int64
- name: id
dtype: int64
- name: start
dtype: int64
- name: text
dtype: string
- name: spans
list:
- name: end
dtype: int64
- name: input_hash
dtype: int64
- name: label
dtype: string
- name: source
dtype: string
- name: start
dtype: int64
- name: text
dtype: string
- name: token_end
dtype: int64
- name: token_start
dtype: int64
- name: _session_id
dtype: 'null'
- name: _view_id
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3120984
num_examples: 1735
download_size: 817069
dataset_size: 3120984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- reddit
- fashion
---
This dataset represents some data that Ines annotated. I am adding this info manually.
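A hedged sketch of how the span annotations can be read back, using a made-up record that follows the schema declared above (the field names — `text`, `spans`, `start`, `end`, `label`, `answer` — come from the YAML; the values are illustrative):

```python
# Illustrative record following the declared feature schema
example = {
    "text": "Looking for raw denim jeans under $100",
    "spans": [
        {"start": 12, "end": 21, "label": "FASHION", "text": "raw denim"},
    ],
    "answer": "accept",
}

# Each span's character offsets index directly into the text field
for span in example["spans"]:
    surface = example["text"][span["start"]:span["end"]]
    print(span["label"], "->", surface)
```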
|
liuyanchen1015/MULTI_VALUE_qqp_verbal_ing_suffix | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 6290758
num_examples: 40007
- name: test
num_bytes: 61445656
num_examples: 387547
- name: train
num_bytes: 56637778
num_examples: 359978
download_size: 74644045
dataset_size: 124374192
---
# Dataset Card for "MULTI_VALUE_qqp_verbal_ing_suffix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-amazon_polarity-amazon_polarity-afc8c5-93509145863 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- amazon_polarity
eval_info:
task: binary_classification
model: AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
metrics: []
dataset_name: amazon_polarity
dataset_config: amazon_polarity
dataset_split: test
col_mapping:
text: content
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
* Dataset: amazon_polarity
* Config: amazon_polarity
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@AdamCodd](https://huggingface.co/AdamCodd) for evaluating this model. |
simonry14/luganda-news-articles | ---
license: mit
---
### Dataset Summary
This dataset is composed of news articles in the Luganda language. Each article also comes with a title. The dataset can be used to fine-tune a Luganda news article generator.
### Languages
Luganda. Luganda is the most widely spoken indigenous language in Uganda.
### Source Data
The articles were sourced from various online Luganda news websites like Bukedde.
|
edwinpalegre/trashnet_enhanced | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': biodegradable
'1': cardboard
'2': glass
'3': metal
'4': paper
'5': plastic
'6': trash
splits:
- name: train
num_bytes: 505205957.636
num_examples: 19892
download_size: 3977396925
dataset_size: 505205957.636
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Prag12/ExcellAssistant-Llama2-1kDemo | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1664926
num_examples: 1000
download_size: 974900
dataset_size: 1664926
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ibranze/araproje_arc_en_conf_mgpt_nearestscore_true_y | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: validation
num_bytes: 80031.0
num_examples: 250
download_size: 46799
dataset_size: 80031.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_arc_en_conf_mgpt_nearestscore_true_y"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ColumbiaNLP/V-FLUTE-test | ---
dataset_info:
features:
- name: image
dtype: image
- name: source_dataset
dtype: string
- name: claim
dtype: string
splits:
- name: test
num_bytes: 635587916.0
num_examples: 689
download_size: 606729604
dataset_size: 635587916.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Zy12138/dataset_seg_breast | ---
license: apache-2.0
---
|
ColinCcz/fake-news-74k | ---
dataset_info:
features:
- name: label
dtype: int64
- name: statement
dtype: string
splits:
- name: train
num_bytes: 231230626.873984
num_examples: 78588
download_size: 139931140
dataset_size: 231230626.873984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|