| datasetId | card |
|---|---|
AdapterOcean/med_alpaca_standardized_cluster_31 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 106608924
num_examples: 10689
download_size: 31397451
dataset_size: 106608924
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_31"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
misoda/MLW_data | ---
license: cc-by-4.0
---
Card-Index of the Medieval Latin Dictionary
===========================================
This dataset contains a subset of the digitized card-index of
the [Medieval Latin Dictionary](https://mlw.badw.de/en/), a long-term
research project of the [Bavarian Academy of Sciences and Humanities](https://badw.de/en/) (BAdW).
Card-indexes have traditionally been used as a preparatory work stage
for writing the actual dictionary entries.
The data is copyright (c) 1948-1959 by the [Bavarian Academy of Sciences and Humanities](https://badw.de/en/).
It is provided here under the [CC-BY 4.0 International License](https://creativecommons.org/licenses/by/4.0/deed.en).
If you use this data, please cite it as follows:
Medieval Latin Dictionary of the Bavarian Academy of Sciences and Humanities. Card-Index-Scans (hugging-face-subset), 1948-1959, URL:
huggingface.co/datasets/misoda/MLW_data
BibTeX entry:
@dataset{SchellingInMunich:2024,
  author       = {MLW-Team},
  organization = {Bavarian Academy of Sciences and Humanities},
  title        = {Medieval Latin Dictionary. Card-Index-Scans (hugging-face-subset)},
  year         = {1948-1959},
  version      = {1.0},
  url          = {huggingface.co/datasets/misoda/MLW_data}
} |
cdminix/libritts-aligned | ---
pretty_name: LibriTTS Corpus with Forced Alignments
annotations_creators:
- crowdsourced
language: en
tags:
- speech
- audio
- automatic-speech-recognition
- text-to-speech
license:
- cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
extra_gated_prompt: "When using this dataset to download LibriTTS, you agree to the terms on https://www.openslr.org"
---
> There is also an identical dataset for the new libritts-r dataset at [cdminix/libritts-r-aligned](https://huggingface.co/datasets/cdminix/libritts-r-aligned)
# Dataset Card for LibriTTS with Forced Alignments (and Measures)
UPDATE: The preprocessed alignments are now included in this repository, so Montreal Forced Aligner does not have to run locally.
## Requirements
- ``pip install alignments phones`` **(required)**
- ``pip install speech-collator`` (optional)
## Example Item
```json
{
  'id': '100_122655_000073_000002.wav',
  'speaker': '100',
  'text': 'the day after, diana and mary quitted it for distant b.',
  'start': 0.0,
  'end': 3.6500000953674316,
  'phones': ['[SILENCE]', 'ð', 'ʌ', '[SILENCE]', 'd', 'eɪ', '[SILENCE]', 'æ', 'f', 't', 'ɜ˞', '[COMMA]', 'd', 'aɪ', 'æ', 'n', 'ʌ', '[SILENCE]', 'æ', 'n', 'd', '[SILENCE]', 'm', 'ɛ', 'ɹ', 'i', '[SILENCE]', 'k', 'w', 'ɪ', 't', 'ɪ', 'd', '[SILENCE]', 'ɪ', 't', '[SILENCE]', 'f', 'ɜ˞', '[SILENCE]', 'd', 'ɪ', 's', 't', 'ʌ', 'n', 't', '[SILENCE]', 'b', 'i', '[FULL STOP]'],
  'phone_durations': [5, 2, 4, 0, 5, 13, 0, 16, 7, 5, 20, 2, 6, 9, 15, 4, 2, 0, 11, 3, 5, 0, 3, 8, 9, 8, 0, 13, 3, 5, 3, 6, 4, 0, 8, 5, 0, 9, 5, 0, 7, 5, 6, 7, 4, 5, 10, 0, 3, 35, 9],
  'audio': '/dev/shm/metts/train-clean-360-alignments/100/100_122655_000073_000002.wav'
}
```
The phones are IPA phones, and the phone durations are in frames (assuming a hop length of 256, sample rate of 22050 and window length of 1024). These attributes can be changed using the ``hop_length``, ``sample_rate`` and ``window_length`` arguments to ``LibriTTSAlign``.
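As a sanity check on those defaults, frame counts convert to seconds as `frames * hop_length / sample_rate`. A minimal sketch (the helper name is ours, not part of the dataset's API):

```python
# Convert phone durations given in frames to seconds, assuming the
# default hop length (256) and sample rate (22050) stated above.
HOP_LENGTH = 256
SAMPLE_RATE = 22050

def frames_to_seconds(n_frames, hop_length=HOP_LENGTH, sample_rate=SAMPLE_RATE):
    return n_frames * hop_length / sample_rate

# The example item's phone_durations sum to 314 frames, which should
# roughly recover its utterance length of end - start = 3.65 seconds.
print(frames_to_seconds(314))  # ~3.646
```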
## Data Collator
This dataset comes with a data collator which can be used to create batches of data for training.
It can be installed using ``pip install speech-collator`` ([MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator)) and can be used as follows:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator
from torch.utils.data import DataLoader
dataset = load_dataset('cdminix/libritts-aligned', split="train")
speaker2idx = json.load(open("speaker2idx.json"))
phone2idx = json.load(open("phone2idx.json"))
collator = SpeechCollator(
    speaker2idx=speaker2idx,
    phone2idx=phone2idx,
)
dataloader = DataLoader(dataset, collate_fn=collator.collate_fn, batch_size=8)
```
You can either download the ``speaker2idx.json`` and ``phone2idx.json`` files from [here](https://huggingface.co/datasets/cdminix/libritts-aligned/tree/main/data) or create them yourself using the following code:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
dataset = load_dataset("cdminix/libritts-aligned", split="train")
# Create speaker2idx and phone2idx
speaker2idx = create_speaker2idx(dataset, unk_idx=0)
phone2idx = create_phone2idx(dataset, unk_idx=0)
# save to json
with open("speaker2idx.json", "w") as f:
    json.dump(speaker2idx, f)
with open("phone2idx.json", "w") as f:
    json.dump(phone2idx, f)
```
### Measures
When using ``speech-collator`` you can also use the ``measures`` argument to specify which measures to use. The following example extracts Pitch and Energy on the fly.
```python
import json
from torch.utils.data import DataLoader
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
from speech_collator.measures import PitchMeasure, EnergyMeasure
dataset = load_dataset("cdminix/libritts-aligned", split="train")
speaker2idx = json.load(open("data/speaker2idx.json"))
phone2idx = json.load(open("data/phone2idx.json"))
# Create SpeechCollator
speech_collator = SpeechCollator(
    speaker2idx=speaker2idx,
    phone2idx=phone2idx,
    measures=[PitchMeasure(), EnergyMeasure()],
    return_keys=["measures"],
)
# Create DataLoader
dataloader = DataLoader(
    dataset,
    batch_size=8,
    collate_fn=speech_collator.collate_fn,
)
```
COMING SOON: Detailed documentation on how to use the measures at [MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator).
## Splits
This dataset has the following splits:
- ``train``: All the training data, except one sample per speaker which is used for validation.
- ``dev``: The validation data, one sample per speaker.
- ``train.clean.100``: Training set derived from the original materials of the train-clean-100 subset of LibriSpeech.
- ``train.clean.360``: Training set derived from the original materials of the train-clean-360 subset of LibriSpeech.
- ``train.other.500``: Training set derived from the original materials of the train-other-500 subset of LibriSpeech.
- ``dev.clean``: Validation set derived from the original materials of the dev-clean subset of LibriSpeech.
- ``dev.other``: Validation set derived from the original materials of the dev-other subset of LibriSpeech.
- ``test.clean``: Test set derived from the original materials of the test-clean subset of LibriSpeech.
- ``test.other``: Test set derived from the original materials of the test-other subset of LibriSpeech.
## Environment Variables
There are a few environment variables that can be set.
- ``LIBRITTS_VERBOSE``: If set, will print out more information about the dataset creation process.
- ``LIBRITTS_MAX_WORKERS``: The number of workers to use when creating the alignments. Defaults to ``cpu_count()``.
- ``LIBRITTS_PATH``: The path to download LibriTTS to. Defaults to the value of ``HF_DATASETS_CACHE``.
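For example, a minimal shell setup using the variables above (the worker count and path are illustrative placeholders, not defaults):

```shell
# Illustrative values only; adjust to your machine.
export LIBRITTS_VERBOSE=1            # print extra dataset-creation info
export LIBRITTS_MAX_WORKERS=8        # cap alignment workers below cpu_count()
export LIBRITTS_PATH=/data/libritts  # where LibriTTS is downloaded
```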
# Citation
When using LibriTTS please cite the following papers:
- [LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech](https://arxiv.org/abs/1904.02882)
- [Montreal Forced Aligner: Trainable text-speech alignment using Kaldi](https://www.researchgate.net/publication/319185277_Montreal_Forced_Aligner_Trainable_Text-Speech_Alignment_Using_Kaldi)
When using the Measures please cite the following paper (ours):
- [Evaluating and reducing the distance between synthetic and real speech distributions](https://arxiv.org/abs/2211.16049) |
maxidl/no_robots-de | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages_en
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
- name: messages_de
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 30309297
num_examples: 9500
- name: test
num_bytes: 1627501
num_examples: 500
download_size: 19860319
dataset_size: 31936798
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- de
- en
size_categories:
- 1K<n<10K
---
German version of [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots). Translated using DeepL (informal style).
|lang|split|#chars|
|---|---|---|
|en|train|11_589_702|
|de|train|13_260_900|
|en|test|618_783|
|de|test|709_985| |
jaegerking/testygimptraintest | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
splits:
- name: train
num_bytes: 26897
num_examples: 88
download_size: 16615
dataset_size: 26897
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wessmetal/poppyhigh | ---
license: bsl-1.0
---
|
deokhk/am_wiki_sentences_100000 | ---
dataset_info:
features:
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 15005045
num_examples: 100000
- name: dev
num_bytes: 114806
num_examples: 1000
download_size: 7271644
dataset_size: 15119851
---
# Dataset Card for "am_wiki_sentences_100000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ashnaz/fine_tuned_symptoms | ---
dataset_info:
features:
- name: symptoms
dtype: string
- name: doctor
dtype: string
- name: disease
dtype: string
splits:
- name: train
num_bytes: 9914
num_examples: 72
download_size: 5873
dataset_size: 9914
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nexdata/Japanese_Synthesis_Corpus-Female | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Japanese_Synthesis_Corpus-Female
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1165?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
10.4 Hours - Japanese Synthesis Corpus-Female. It was recorded by a native Japanese speaker with an authentic accent. The phoneme coverage is balanced, and professional phoneticians participated in the annotation. It precisely matches the research and development needs of speech synthesis.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1165?source=Huggingface
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text to Speech (TTS).
### Languages
Japanese
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
mattismegevand/pitchfork | ---
license: mit
language:
- en
task_categories:
- summarization
- text-generation
- question-answering
tags:
- music
size_categories:
- 10K<n<100K
---
# Pitchfork Music Reviews Dataset
This repository contains the code and dataset for scraping music reviews from Pitchfork.
## Dataset Overview
The Pitchfork Music Reviews dataset is a collection of music album reviews from the Pitchfork website. Each entry in the dataset represents a single review and includes the following attributes:
- `artist`: The artist of the album.
- `album`: The name of the album.
- `year_released`: The year the album was released.
- `rating`: The rating given to the album by the reviewer.
- `small_text`: A short snippet from the review.
- `review`: The full text of the review.
- `reviewer`: The name of the reviewer.
- `genre`: The genre(s) of the album.
- `label`: The record label that released the album.
- `release_date`: The release date of the review.
- `album_art_url`: The URL of the album art.
## Usage
This dataset is publicly available for research. The data is provided 'as is', and you assume full responsibility for any legal or ethical issues that may arise from the use of the data.
## Scraping Process
The dataset was generated by scraping the Pitchfork website. The Python script uses the `requests` and `BeautifulSoup` libraries to send HTTP requests to the website and parse the resulting HTML content.
The script saves the data in an SQLite database and can also export the data to a CSV file. Duplicate entries are avoided by checking for existing entries with the same artist and album name before inserting new ones into the database.
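A minimal sketch of that dedup-on-insert step, using an in-memory SQLite database (the table schema and column names here are illustrative assumptions, not the scraper's actual schema):

```python
import sqlite3

# In-memory stand-in for the scraper's SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reviews (
        artist TEXT,
        album  TEXT,
        rating REAL,
        UNIQUE (artist, album)
    )
""")

def insert_review(conn, artist, album, rating):
    """Insert a review unless an (artist, album) pair already exists."""
    cur = conn.execute(
        "SELECT 1 FROM reviews WHERE artist = ? AND album = ?",
        (artist, album),
    )
    if cur.fetchone() is not None:
        return False  # duplicate: skip
    conn.execute(
        "INSERT INTO reviews (artist, album, rating) VALUES (?, ?, ?)",
        (artist, album, rating),
    )
    return True

insert_review(conn, "Radiohead", "OK Computer", 10.0)            # inserted
inserted_again = insert_review(conn, "Radiohead", "OK Computer", 9.8)
count = conn.execute("SELECT COUNT(*) FROM reviews").fetchone()[0]
print(inserted_again, count)  # False 1
```

The `UNIQUE (artist, album)` constraint acts as a backstop in case two scraper runs race past the existence check.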
## Potential Applications
This dataset can be used for a variety of research purposes, such as:
- Music information retrieval
- Text mining and sentiment analysis
- Music recommendation systems
- Music trend analysis
## Acknowledgments
The dataset is sourced from [Pitchfork](https://pitchfork.com/), a website that publishes daily reviews, features, and news stories about music.
## License
Please ensure you comply with Pitchfork's terms of service before using or distributing this data. |
supremezxc/nlpcc_2017 | ---
license: openrail
task_categories:
- summarization
language:
- zh
pretty_name: NLPCC2017中文新闻数据集
size_categories:
- 10K<n<100K
--- |
shokhjakhon/law_data_mix | ---
license: apache-2.0
---
|
zicsx/Wikipedia-Hindi | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 576464909.5937778
num_examples: 154867
download_size: 216951489
dataset_size: 576464909.5937778
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Wikipedia-Hindi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kiringodhwani/msp11 | ---
dataset_info:
features:
- name: From
sequence: string
- name: Sent
sequence: string
- name: To
sequence: string
- name: Cc
sequence: string
- name: Subject
sequence: string
- name: Attachment
sequence: string
- name: body
dtype: string
splits:
- name: train
num_bytes: 7371058
num_examples: 2200
download_size: 3570495
dataset_size: 7371058
---
# Dataset Card for "msp11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
linhtran92/infer_55epoch_onValid | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: w2v2_baseline_transcription
dtype: string
- name: w2v2_baseline_norm
dtype: string
splits:
- name: train
num_bytes: 147365
num_examples: 748
download_size: 64689
dataset_size: 147365
---
# Dataset Card for "infer_55epoch_onValid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
48xrf/lilmabu | ---
license: wtfpl
---
|
BoodBooed/Hitl | ---
license: afl-3.0
---
|
heliosprime/twitter_dataset_1713138757 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 228263
num_examples: 618
download_size: 139608
dataset_size: 228263
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713138757"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-one-sec-cv12/chunk_31 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 803700912
num_examples: 157836
download_size: 818726715
dataset_size: 803700912
---
# Dataset Card for "chunk_31"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_automl_10000_covertype_sgosdt_l256_dim10_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 151417455
dataset_size: 472880000
---
# Dataset Card for "autotree_automl_10000_covertype_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
responsibleai/Drug_Interactions_Dataset | ---
license: cc-by-nc-4.0
---
|
veswaran/movie-dataset | ---
license: mit
---
|
bhatvineet/shrutilipi_mr | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 114253169328.11655
num_examples: 474332
- name: test
num_bytes: 39048725811.21545
num_examples: 158111
download_size: 147662822982
dataset_size: 153301895139.332
---
# Dataset Card for "shrutilipi_mr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/aketa_mikoto_theidolmstershinycolors | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of aketa_mikoto/緋田美琴 (THE iDOLM@STER: SHINY COLORS)
This is the dataset of aketa_mikoto/緋田美琴 (THE iDOLM@STER: SHINY COLORS), containing 320 images and their tags.
The core tags of this character are `brown_hair, gradient_hair, multicolored_hair, breasts, blonde_hair, large_breasts, multicolored_eyes, bangs`, which are pruned in this dataset.
Images were crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 320 | 621.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aketa_mikoto_theidolmstershinycolors/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 320 | 296.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aketa_mikoto_theidolmstershinycolors/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 812 | 658.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aketa_mikoto_theidolmstershinycolors/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 320 | 524.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aketa_mikoto_theidolmstershinycolors/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 812 | 1.02 GiB | [Download](https://huggingface.co/datasets/CyberHarem/aketa_mikoto_theidolmstershinycolors/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/aketa_mikoto_theidolmstershinycolors',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, simple_background, solo, sleeveless_shirt, upper_body, white_background, black_jacket, black_shirt, jacket_partially_removed, chain_necklace, earrings, off_shoulder, smile |
| 1 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, nail_polish, solo, fur_trim, long_hair, smile, upper_body, earrings, necklace, brown_eyes, green_nails, black_jacket, simple_background |
| 2 | 7 |  |  |  |  |  | 1girl, cleavage, grey_tank_top, solo, collarbone, simple_background, jacket_partially_removed, looking_at_viewer, blush, hairclip, off_shoulder, :o, grey_jacket, grey_shorts, hair_bun, medium_breasts, short_hair, upper_body, white_background |
| 3 | 8 |  |  |  |  |  | 1girl, collarbone, looking_at_viewer, solo, blush, cleavage, navel, smile, arm_up, brown_eyes, eyewear_on_head, parted_lips, sunglasses, armpits, black_bikini, blurry, hair_between_eyes, outdoors, upper_body, wet |
| 4 | 7 |  |  |  |  |  | 1girl, collarbone, looking_at_viewer, solo, blush, cleavage, medium_breasts, navel, outdoors, water, black_bikini, side-tie_bikini_bottom, beach, blue_sky, blurry_foreground, braid, day, halterneck, nail_polish, ocean, wet |
| 5 | 5 |  |  |  |  |  | 1girl, blush, floral_print, obi, smile, solo, looking_at_viewer, outdoors, print_kimono, wide_sleeves, brown_eyes, closed_mouth, yukata, black_kimono, hair_flower, handbag, holding, long_sleeves, short_hair, upper_body, white_kimono |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | simple_background | solo | sleeveless_shirt | upper_body | white_background | black_jacket | black_shirt | jacket_partially_removed | chain_necklace | earrings | off_shoulder | smile | nail_polish | fur_trim | long_hair | necklace | brown_eyes | green_nails | cleavage | grey_tank_top | collarbone | blush | hairclip | :o | grey_jacket | grey_shorts | hair_bun | medium_breasts | short_hair | navel | arm_up | eyewear_on_head | parted_lips | sunglasses | armpits | black_bikini | blurry | hair_between_eyes | outdoors | wet | water | side-tie_bikini_bottom | beach | blue_sky | blurry_foreground | braid | day | halterneck | ocean | floral_print | obi | print_kimono | wide_sleeves | closed_mouth | yukata | black_kimono | hair_flower | handbag | holding | long_sleeves | white_kimono |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------------------|:-------|:-------------------|:-------------|:-------------------|:---------------|:--------------|:---------------------------|:-----------------|:-----------|:---------------|:--------|:--------------|:-----------|:------------|:-----------|:-------------|:--------------|:-----------|:----------------|:-------------|:--------|:-----------|:-----|:--------------|:--------------|:-----------|:-----------------|:-------------|:--------|:---------|:------------------|:--------------|:-------------|:----------|:---------------|:---------|:--------------------|:-----------|:------|:--------|:-------------------------|:--------|:-----------|:--------------------|:--------|:------|:-------------|:--------|:---------------|:------|:---------------|:---------------|:---------------|:---------|:---------------|:--------------|:----------|:----------|:---------------|:---------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | | X | | X | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | | X | X | | | X | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | X | | X | | X | | | | | | | | X | | | | | X | | X | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | X | | X | | | | | | | | | | | X | | | | | | X | | X | X | | | | | | X | | X | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | X | | X | | | | | | | | X | | | | | X | | | | | X | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667123 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_v5
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_v5
dataset_config: mathemakitten--winobias_antistereotype_test_v5
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
tyzhu/squad_title_v3_train_30_eval_10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 658246
num_examples: 378
- name: validation
num_bytes: 68651
num_examples: 60
download_size: 123968
dataset_size: 726897
---
# Dataset Card for "squad_title_v3_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yunij/datasets | ---
dataset_info:
features:
- name: query_id
dtype: int32
- name: answers
sequence: string
- name: passages
struct:
- name: is_selected
sequence: int32
- name: passage_text
sequence: string
- name: url
sequence: string
- name: query
dtype: string
- name: query_type
dtype: string
- name: wellFormedAnswers
sequence: 'null'
- name: ai_answers
dtype: string
- name: query_len
dtype: int64
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 22694124
num_examples: 4999
download_size: 11057267
dataset_size: 22694124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/find_second_sent_train_100_eval_10_recite | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 339545
num_examples: 210
- name: validation
num_bytes: 17697
num_examples: 10
download_size: 0
dataset_size: 357242
---
# Dataset Card for "find_second_sent_train_100_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sukantan/nyaya-ae-msmarco-distilbert-base-tas-b | ---
dataset_info:
features:
- name: '0'
dtype: float32
- name: '1'
dtype: float32
- name: '2'
dtype: float32
- name: '3'
dtype: float32
- name: '4'
dtype: float32
- name: '5'
dtype: float32
- name: '6'
dtype: float32
- name: '7'
dtype: float32
- name: '8'
dtype: float32
- name: '9'
dtype: float32
- name: '10'
dtype: float32
- name: '11'
dtype: float32
- name: '12'
dtype: float32
- name: '13'
dtype: float32
- name: '14'
dtype: float32
- name: '15'
dtype: float32
- name: '16'
dtype: float32
- name: '17'
dtype: float32
- name: '18'
dtype: float32
- name: '19'
dtype: float32
- name: '20'
dtype: float32
- name: '21'
dtype: float32
- name: '22'
dtype: float32
- name: '23'
dtype: float32
- name: '24'
dtype: float32
- name: '25'
dtype: float32
- name: '26'
dtype: float32
- name: '27'
dtype: float32
- name: '28'
dtype: float32
- name: '29'
dtype: float32
- name: '30'
dtype: float32
- name: '31'
dtype: float32
- name: '32'
dtype: float32
- name: '33'
dtype: float32
- name: '34'
dtype: float32
- name: '35'
dtype: float32
- name: '36'
dtype: float32
- name: '37'
dtype: float32
- name: '38'
dtype: float32
- name: '39'
dtype: float32
- name: '40'
dtype: float32
- name: '41'
dtype: float32
- name: '42'
dtype: float32
- name: '43'
dtype: float32
- name: '44'
dtype: float32
- name: '45'
dtype: float32
- name: '46'
dtype: float32
- name: '47'
dtype: float32
- name: '48'
dtype: float32
- name: '49'
dtype: float32
- name: '50'
dtype: float32
- name: '51'
dtype: float32
- name: '52'
dtype: float32
- name: '53'
dtype: float32
- name: '54'
dtype: float32
- name: '55'
dtype: float32
- name: '56'
dtype: float32
- name: '57'
dtype: float32
- name: '58'
dtype: float32
- name: '59'
dtype: float32
- name: '60'
dtype: float32
- name: '61'
dtype: float32
- name: '62'
dtype: float32
- name: '63'
dtype: float32
- name: '64'
dtype: float32
- name: '65'
dtype: float32
- name: '66'
dtype: float32
- name: '67'
dtype: float32
- name: '68'
dtype: float32
- name: '69'
dtype: float32
- name: '70'
dtype: float32
- name: '71'
dtype: float32
- name: '72'
dtype: float32
- name: '73'
dtype: float32
- name: '74'
dtype: float32
- name: '75'
dtype: float32
- name: '76'
dtype: float32
- name: '77'
dtype: float32
- name: '78'
dtype: float32
- name: '79'
dtype: float32
- name: '80'
dtype: float32
- name: '81'
dtype: float32
- name: '82'
dtype: float32
- name: '83'
dtype: float32
- name: '84'
dtype: float32
- name: '85'
dtype: float32
- name: '86'
dtype: float32
- name: '87'
dtype: float32
- name: '88'
dtype: float32
- name: '89'
dtype: float32
- name: '90'
dtype: float32
- name: '91'
dtype: float32
- name: '92'
dtype: float32
- name: '93'
dtype: float32
- name: '94'
dtype: float32
- name: '95'
dtype: float32
- name: '96'
dtype: float32
- name: '97'
dtype: float32
- name: '98'
dtype: float32
- name: '99'
dtype: float32
- name: '100'
dtype: float32
- name: '101'
dtype: float32
- name: '102'
dtype: float32
- name: '103'
dtype: float32
- name: '104'
dtype: float32
- name: '105'
dtype: float32
- name: '106'
dtype: float32
- name: '107'
dtype: float32
- name: '108'
dtype: float32
- name: '109'
dtype: float32
- name: '110'
dtype: float32
- name: '111'
dtype: float32
- name: '112'
dtype: float32
- name: '113'
dtype: float32
- name: '114'
dtype: float32
- name: '115'
dtype: float32
- name: '116'
dtype: float32
- name: '117'
dtype: float32
- name: '118'
dtype: float32
- name: '119'
dtype: float32
- name: '120'
dtype: float32
- name: '121'
dtype: float32
- name: '122'
dtype: float32
- name: '123'
dtype: float32
- name: '124'
dtype: float32
- name: '125'
dtype: float32
- name: '126'
dtype: float32
- name: '127'
dtype: float32
- name: '128'
dtype: float32
- name: '129'
dtype: float32
- name: '130'
dtype: float32
- name: '131'
dtype: float32
- name: '132'
dtype: float32
- name: '133'
dtype: float32
- name: '134'
dtype: float32
- name: '135'
dtype: float32
- name: '136'
dtype: float32
- name: '137'
dtype: float32
- name: '138'
dtype: float32
- name: '139'
dtype: float32
- name: '140'
dtype: float32
- name: '141'
dtype: float32
- name: '142'
dtype: float32
- name: '143'
dtype: float32
- name: '144'
dtype: float32
- name: '145'
dtype: float32
- name: '146'
dtype: float32
- name: '147'
dtype: float32
- name: '148'
dtype: float32
- name: '149'
dtype: float32
- name: '150'
dtype: float32
- name: '151'
dtype: float32
- name: '152'
dtype: float32
- name: '153'
dtype: float32
- name: '154'
dtype: float32
- name: '155'
dtype: float32
- name: '156'
dtype: float32
- name: '157'
dtype: float32
- name: '158'
dtype: float32
- name: '159'
dtype: float32
- name: '160'
dtype: float32
- name: '161'
dtype: float32
- name: '162'
dtype: float32
- name: '163'
dtype: float32
- name: '164'
dtype: float32
- name: '165'
dtype: float32
- name: '166'
dtype: float32
- name: '167'
dtype: float32
- name: '168'
dtype: float32
- name: '169'
dtype: float32
- name: '170'
dtype: float32
- name: '171'
dtype: float32
- name: '172'
dtype: float32
- name: '173'
dtype: float32
- name: '174'
dtype: float32
- name: '175'
dtype: float32
- name: '176'
dtype: float32
- name: '177'
dtype: float32
- name: '178'
dtype: float32
- name: '179'
dtype: float32
- name: '180'
dtype: float32
- name: '181'
dtype: float32
- name: '182'
dtype: float32
- name: '183'
dtype: float32
- name: '184'
dtype: float32
- name: '185'
dtype: float32
- name: '186'
dtype: float32
- name: '187'
dtype: float32
- name: '188'
dtype: float32
- name: '189'
dtype: float32
- name: '190'
dtype: float32
- name: '191'
dtype: float32
- name: '192'
dtype: float32
- name: '193'
dtype: float32
- name: '194'
dtype: float32
- name: '195'
dtype: float32
- name: '196'
dtype: float32
- name: '197'
dtype: float32
- name: '198'
dtype: float32
- name: '199'
dtype: float32
- name: '200'
dtype: float32
- name: '201'
dtype: float32
- name: '202'
dtype: float32
- name: '203'
dtype: float32
- name: '204'
dtype: float32
- name: '205'
dtype: float32
- name: '206'
dtype: float32
- name: '207'
dtype: float32
- name: '208'
dtype: float32
- name: '209'
dtype: float32
- name: '210'
dtype: float32
- name: '211'
dtype: float32
- name: '212'
dtype: float32
- name: '213'
dtype: float32
- name: '214'
dtype: float32
- name: '215'
dtype: float32
- name: '216'
dtype: float32
- name: '217'
dtype: float32
- name: '218'
dtype: float32
- name: '219'
dtype: float32
- name: '220'
dtype: float32
- name: '221'
dtype: float32
- name: '222'
dtype: float32
- name: '223'
dtype: float32
- name: '224'
dtype: float32
- name: '225'
dtype: float32
- name: '226'
dtype: float32
- name: '227'
dtype: float32
- name: '228'
dtype: float32
- name: '229'
dtype: float32
- name: '230'
dtype: float32
- name: '231'
dtype: float32
- name: '232'
dtype: float32
- name: '233'
dtype: float32
- name: '234'
dtype: float32
- name: '235'
dtype: float32
- name: '236'
dtype: float32
- name: '237'
dtype: float32
- name: '238'
dtype: float32
- name: '239'
dtype: float32
- name: '240'
dtype: float32
- name: '241'
dtype: float32
- name: '242'
dtype: float32
- name: '243'
dtype: float32
- name: '244'
dtype: float32
- name: '245'
dtype: float32
- name: '246'
dtype: float32
- name: '247'
dtype: float32
- name: '248'
dtype: float32
- name: '249'
dtype: float32
- name: '250'
dtype: float32
- name: '251'
dtype: float32
- name: '252'
dtype: float32
- name: '253'
dtype: float32
- name: '254'
dtype: float32
- name: '255'
dtype: float32
- name: '256'
dtype: float32
- name: '257'
dtype: float32
- name: '258'
dtype: float32
- name: '259'
dtype: float32
- name: '260'
dtype: float32
- name: '261'
dtype: float32
- name: '262'
dtype: float32
- name: '263'
dtype: float32
- name: '264'
dtype: float32
- name: '265'
dtype: float32
- name: '266'
dtype: float32
- name: '267'
dtype: float32
- name: '268'
dtype: float32
- name: '269'
dtype: float32
- name: '270'
dtype: float32
- name: '271'
dtype: float32
- name: '272'
dtype: float32
- name: '273'
dtype: float32
- name: '274'
dtype: float32
- name: '275'
dtype: float32
- name: '276'
dtype: float32
- name: '277'
dtype: float32
- name: '278'
dtype: float32
- name: '279'
dtype: float32
- name: '280'
dtype: float32
- name: '281'
dtype: float32
- name: '282'
dtype: float32
- name: '283'
dtype: float32
- name: '284'
dtype: float32
- name: '285'
dtype: float32
- name: '286'
dtype: float32
- name: '287'
dtype: float32
- name: '288'
dtype: float32
- name: '289'
dtype: float32
- name: '290'
dtype: float32
- name: '291'
dtype: float32
- name: '292'
dtype: float32
- name: '293'
dtype: float32
- name: '294'
dtype: float32
- name: '295'
dtype: float32
- name: '296'
dtype: float32
- name: '297'
dtype: float32
- name: '298'
dtype: float32
- name: '299'
dtype: float32
- name: '300'
dtype: float32
- name: '301'
dtype: float32
- name: '302'
dtype: float32
- name: '303'
dtype: float32
- name: '304'
dtype: float32
- name: '305'
dtype: float32
- name: '306'
dtype: float32
- name: '307'
dtype: float32
- name: '308'
dtype: float32
- name: '309'
dtype: float32
- name: '310'
dtype: float32
- name: '311'
dtype: float32
- name: '312'
dtype: float32
- name: '313'
dtype: float32
- name: '314'
dtype: float32
- name: '315'
dtype: float32
- name: '316'
dtype: float32
- name: '317'
dtype: float32
- name: '318'
dtype: float32
- name: '319'
dtype: float32
- name: '320'
dtype: float32
- name: '321'
dtype: float32
- name: '322'
dtype: float32
- name: '323'
dtype: float32
- name: '324'
dtype: float32
- name: '325'
dtype: float32
- name: '326'
dtype: float32
- name: '327'
dtype: float32
- name: '328'
dtype: float32
- name: '329'
dtype: float32
- name: '330'
dtype: float32
- name: '331'
dtype: float32
- name: '332'
dtype: float32
- name: '333'
dtype: float32
- name: '334'
dtype: float32
- name: '335'
dtype: float32
- name: '336'
dtype: float32
- name: '337'
dtype: float32
- name: '338'
dtype: float32
- name: '339'
dtype: float32
- name: '340'
dtype: float32
- name: '341'
dtype: float32
- name: '342'
dtype: float32
- name: '343'
dtype: float32
- name: '344'
dtype: float32
- name: '345'
dtype: float32
- name: '346'
dtype: float32
- name: '347'
dtype: float32
- name: '348'
dtype: float32
- name: '349'
dtype: float32
- name: '350'
dtype: float32
- name: '351'
dtype: float32
- name: '352'
dtype: float32
- name: '353'
dtype: float32
- name: '354'
dtype: float32
- name: '355'
dtype: float32
- name: '356'
dtype: float32
- name: '357'
dtype: float32
- name: '358'
dtype: float32
- name: '359'
dtype: float32
- name: '360'
dtype: float32
- name: '361'
dtype: float32
- name: '362'
dtype: float32
- name: '363'
dtype: float32
- name: '364'
dtype: float32
- name: '365'
dtype: float32
- name: '366'
dtype: float32
- name: '367'
dtype: float32
- name: '368'
dtype: float32
- name: '369'
dtype: float32
- name: '370'
dtype: float32
- name: '371'
dtype: float32
- name: '372'
dtype: float32
- name: '373'
dtype: float32
- name: '374'
dtype: float32
- name: '375'
dtype: float32
- name: '376'
dtype: float32
- name: '377'
dtype: float32
- name: '378'
dtype: float32
- name: '379'
dtype: float32
- name: '380'
dtype: float32
- name: '381'
dtype: float32
- name: '382'
dtype: float32
- name: '383'
dtype: float32
- name: '384'
dtype: float32
- name: '385'
dtype: float32
- name: '386'
dtype: float32
- name: '387'
dtype: float32
- name: '388'
dtype: float32
- name: '389'
dtype: float32
- name: '390'
dtype: float32
- name: '391'
dtype: float32
- name: '392'
dtype: float32
- name: '393'
dtype: float32
- name: '394'
dtype: float32
- name: '395'
dtype: float32
- name: '396'
dtype: float32
- name: '397'
dtype: float32
- name: '398'
dtype: float32
- name: '399'
dtype: float32
- name: '400'
dtype: float32
- name: '401'
dtype: float32
- name: '402'
dtype: float32
- name: '403'
dtype: float32
- name: '404'
dtype: float32
- name: '405'
dtype: float32
- name: '406'
dtype: float32
- name: '407'
dtype: float32
- name: '408'
dtype: float32
- name: '409'
dtype: float32
- name: '410'
dtype: float32
- name: '411'
dtype: float32
- name: '412'
dtype: float32
- name: '413'
dtype: float32
- name: '414'
dtype: float32
- name: '415'
dtype: float32
- name: '416'
dtype: float32
- name: '417'
dtype: float32
- name: '418'
dtype: float32
- name: '419'
dtype: float32
- name: '420'
dtype: float32
- name: '421'
dtype: float32
- name: '422'
dtype: float32
- name: '423'
dtype: float32
- name: '424'
dtype: float32
- name: '425'
dtype: float32
- name: '426'
dtype: float32
- name: '427'
dtype: float32
- name: '428'
dtype: float32
- name: '429'
dtype: float32
- name: '430'
dtype: float32
- name: '431'
dtype: float32
- name: '432'
dtype: float32
- name: '433'
dtype: float32
- name: '434'
dtype: float32
- name: '435'
dtype: float32
- name: '436'
dtype: float32
- name: '437'
dtype: float32
- name: '438'
dtype: float32
- name: '439'
dtype: float32
- name: '440'
dtype: float32
- name: '441'
dtype: float32
- name: '442'
dtype: float32
- name: '443'
dtype: float32
- name: '444'
dtype: float32
- name: '445'
dtype: float32
- name: '446'
dtype: float32
- name: '447'
dtype: float32
- name: '448'
dtype: float32
- name: '449'
dtype: float32
- name: '450'
dtype: float32
- name: '451'
dtype: float32
- name: '452'
dtype: float32
- name: '453'
dtype: float32
- name: '454'
dtype: float32
- name: '455'
dtype: float32
- name: '456'
dtype: float32
- name: '457'
dtype: float32
- name: '458'
dtype: float32
- name: '459'
dtype: float32
- name: '460'
dtype: float32
- name: '461'
dtype: float32
- name: '462'
dtype: float32
- name: '463'
dtype: float32
- name: '464'
dtype: float32
- name: '465'
dtype: float32
- name: '466'
dtype: float32
- name: '467'
dtype: float32
- name: '468'
dtype: float32
- name: '469'
dtype: float32
- name: '470'
dtype: float32
- name: '471'
dtype: float32
- name: '472'
dtype: float32
- name: '473'
dtype: float32
- name: '474'
dtype: float32
- name: '475'
dtype: float32
- name: '476'
dtype: float32
- name: '477'
dtype: float32
- name: '478'
dtype: float32
- name: '479'
dtype: float32
- name: '480'
dtype: float32
- name: '481'
dtype: float32
- name: '482'
dtype: float32
- name: '483'
dtype: float32
- name: '484'
dtype: float32
- name: '485'
dtype: float32
- name: '486'
dtype: float32
- name: '487'
dtype: float32
- name: '488'
dtype: float32
- name: '489'
dtype: float32
- name: '490'
dtype: float32
- name: '491'
dtype: float32
- name: '492'
dtype: float32
- name: '493'
dtype: float32
- name: '494'
dtype: float32
- name: '495'
dtype: float32
- name: '496'
dtype: float32
- name: '497'
dtype: float32
- name: '498'
dtype: float32
- name: '499'
dtype: float32
- name: '500'
dtype: float32
- name: '501'
dtype: float32
- name: '502'
dtype: float32
- name: '503'
dtype: float32
- name: '504'
dtype: float32
- name: '505'
dtype: float32
- name: '506'
dtype: float32
- name: '507'
dtype: float32
- name: '508'
dtype: float32
- name: '509'
dtype: float32
- name: '510'
dtype: float32
- name: '511'
dtype: float32
- name: '512'
dtype: float32
- name: '513'
dtype: float32
- name: '514'
dtype: float32
- name: '515'
dtype: float32
- name: '516'
dtype: float32
- name: '517'
dtype: float32
- name: '518'
dtype: float32
- name: '519'
dtype: float32
- name: '520'
dtype: float32
- name: '521'
dtype: float32
- name: '522'
dtype: float32
- name: '523'
dtype: float32
- name: '524'
dtype: float32
- name: '525'
dtype: float32
- name: '526'
dtype: float32
- name: '527'
dtype: float32
- name: '528'
dtype: float32
- name: '529'
dtype: float32
- name: '530'
dtype: float32
- name: '531'
dtype: float32
- name: '532'
dtype: float32
- name: '533'
dtype: float32
- name: '534'
dtype: float32
- name: '535'
dtype: float32
- name: '536'
dtype: float32
- name: '537'
dtype: float32
- name: '538'
dtype: float32
- name: '539'
dtype: float32
- name: '540'
dtype: float32
- name: '541'
dtype: float32
- name: '542'
dtype: float32
- name: '543'
dtype: float32
- name: '544'
dtype: float32
- name: '545'
dtype: float32
- name: '546'
dtype: float32
- name: '547'
dtype: float32
- name: '548'
dtype: float32
- name: '549'
dtype: float32
- name: '550'
dtype: float32
- name: '551'
dtype: float32
- name: '552'
dtype: float32
- name: '553'
dtype: float32
- name: '554'
dtype: float32
- name: '555'
dtype: float32
- name: '556'
dtype: float32
- name: '557'
dtype: float32
- name: '558'
dtype: float32
- name: '559'
dtype: float32
- name: '560'
dtype: float32
- name: '561'
dtype: float32
- name: '562'
dtype: float32
- name: '563'
dtype: float32
- name: '564'
dtype: float32
- name: '565'
dtype: float32
- name: '566'
dtype: float32
- name: '567'
dtype: float32
- name: '568'
dtype: float32
- name: '569'
dtype: float32
- name: '570'
dtype: float32
- name: '571'
dtype: float32
- name: '572'
dtype: float32
- name: '573'
dtype: float32
- name: '574'
dtype: float32
- name: '575'
dtype: float32
- name: '576'
dtype: float32
- name: '577'
dtype: float32
- name: '578'
dtype: float32
- name: '579'
dtype: float32
- name: '580'
dtype: float32
- name: '581'
dtype: float32
- name: '582'
dtype: float32
- name: '583'
dtype: float32
- name: '584'
dtype: float32
- name: '585'
dtype: float32
- name: '586'
dtype: float32
- name: '587'
dtype: float32
- name: '588'
dtype: float32
- name: '589'
dtype: float32
- name: '590'
dtype: float32
- name: '591'
dtype: float32
- name: '592'
dtype: float32
- name: '593'
dtype: float32
- name: '594'
dtype: float32
- name: '595'
dtype: float32
- name: '596'
dtype: float32
- name: '597'
dtype: float32
- name: '598'
dtype: float32
- name: '599'
dtype: float32
- name: '600'
dtype: float32
- name: '601'
dtype: float32
- name: '602'
dtype: float32
- name: '603'
dtype: float32
- name: '604'
dtype: float32
- name: '605'
dtype: float32
- name: '606'
dtype: float32
- name: '607'
dtype: float32
- name: '608'
dtype: float32
- name: '609'
dtype: float32
- name: '610'
dtype: float32
- name: '611'
dtype: float32
- name: '612'
dtype: float32
- name: '613'
dtype: float32
- name: '614'
dtype: float32
- name: '615'
dtype: float32
- name: '616'
dtype: float32
- name: '617'
dtype: float32
- name: '618'
dtype: float32
- name: '619'
dtype: float32
- name: '620'
dtype: float32
- name: '621'
dtype: float32
- name: '622'
dtype: float32
- name: '623'
dtype: float32
- name: '624'
dtype: float32
- name: '625'
dtype: float32
- name: '626'
dtype: float32
- name: '627'
dtype: float32
- name: '628'
dtype: float32
- name: '629'
dtype: float32
- name: '630'
dtype: float32
- name: '631'
dtype: float32
- name: '632'
dtype: float32
- name: '633'
dtype: float32
- name: '634'
dtype: float32
- name: '635'
dtype: float32
- name: '636'
dtype: float32
- name: '637'
dtype: float32
- name: '638'
dtype: float32
- name: '639'
dtype: float32
- name: '640'
dtype: float32
- name: '641'
dtype: float32
- name: '642'
dtype: float32
- name: '643'
dtype: float32
- name: '644'
dtype: float32
- name: '645'
dtype: float32
- name: '646'
dtype: float32
- name: '647'
dtype: float32
- name: '648'
dtype: float32
- name: '649'
dtype: float32
- name: '650'
dtype: float32
- name: '651'
dtype: float32
- name: '652'
dtype: float32
- name: '653'
dtype: float32
- name: '654'
dtype: float32
- name: '655'
dtype: float32
- name: '656'
dtype: float32
- name: '657'
dtype: float32
- name: '658'
dtype: float32
- name: '659'
dtype: float32
- name: '660'
dtype: float32
- name: '661'
dtype: float32
- name: '662'
dtype: float32
- name: '663'
dtype: float32
- name: '664'
dtype: float32
- name: '665'
dtype: float32
- name: '666'
dtype: float32
- name: '667'
dtype: float32
- name: '668'
dtype: float32
- name: '669'
dtype: float32
- name: '670'
dtype: float32
- name: '671'
dtype: float32
- name: '672'
dtype: float32
- name: '673'
dtype: float32
- name: '674'
dtype: float32
- name: '675'
dtype: float32
- name: '676'
dtype: float32
- name: '677'
dtype: float32
- name: '678'
dtype: float32
- name: '679'
dtype: float32
- name: '680'
dtype: float32
- name: '681'
dtype: float32
- name: '682'
dtype: float32
- name: '683'
dtype: float32
- name: '684'
dtype: float32
- name: '685'
dtype: float32
- name: '686'
dtype: float32
- name: '687'
dtype: float32
- name: '688'
dtype: float32
- name: '689'
dtype: float32
- name: '690'
dtype: float32
- name: '691'
dtype: float32
- name: '692'
dtype: float32
- name: '693'
dtype: float32
- name: '694'
dtype: float32
- name: '695'
dtype: float32
- name: '696'
dtype: float32
- name: '697'
dtype: float32
- name: '698'
dtype: float32
- name: '699'
dtype: float32
- name: '700'
dtype: float32
- name: '701'
dtype: float32
- name: '702'
dtype: float32
- name: '703'
dtype: float32
- name: '704'
dtype: float32
- name: '705'
dtype: float32
- name: '706'
dtype: float32
- name: '707'
dtype: float32
- name: '708'
dtype: float32
- name: '709'
dtype: float32
- name: '710'
dtype: float32
- name: '711'
dtype: float32
- name: '712'
dtype: float32
- name: '713'
dtype: float32
- name: '714'
dtype: float32
- name: '715'
dtype: float32
- name: '716'
dtype: float32
- name: '717'
dtype: float32
- name: '718'
dtype: float32
- name: '719'
dtype: float32
- name: '720'
dtype: float32
- name: '721'
dtype: float32
- name: '722'
dtype: float32
- name: '723'
dtype: float32
- name: '724'
dtype: float32
- name: '725'
dtype: float32
- name: '726'
dtype: float32
- name: '727'
dtype: float32
- name: '728'
dtype: float32
- name: '729'
dtype: float32
- name: '730'
dtype: float32
- name: '731'
dtype: float32
- name: '732'
dtype: float32
- name: '733'
dtype: float32
- name: '734'
dtype: float32
- name: '735'
dtype: float32
- name: '736'
dtype: float32
- name: '737'
dtype: float32
- name: '738'
dtype: float32
- name: '739'
dtype: float32
- name: '740'
dtype: float32
- name: '741'
dtype: float32
- name: '742'
dtype: float32
- name: '743'
dtype: float32
- name: '744'
dtype: float32
- name: '745'
dtype: float32
- name: '746'
dtype: float32
- name: '747'
dtype: float32
- name: '748'
dtype: float32
- name: '749'
dtype: float32
- name: '750'
dtype: float32
- name: '751'
dtype: float32
- name: '752'
dtype: float32
- name: '753'
dtype: float32
- name: '754'
dtype: float32
- name: '755'
dtype: float32
- name: '756'
dtype: float32
- name: '757'
dtype: float32
- name: '758'
dtype: float32
- name: '759'
dtype: float32
- name: '760'
dtype: float32
- name: '761'
dtype: float32
- name: '762'
dtype: float32
- name: '763'
dtype: float32
- name: '764'
dtype: float32
- name: '765'
dtype: float32
- name: '766'
dtype: float32
- name: '767'
dtype: float32
splits:
- name: train
num_bytes: 165236736
num_examples: 53788
download_size: 199550010
dataset_size: 165236736
---
# Dataset Card for "nyaya-ae-msmarco-distilbert-base-tas-b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YURIJ24/ReginaTodorenko | ---
license: mit
---
|
cahya/instructions-zh | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 22155876.092150297
num_examples: 76886
- name: test
num_bytes: 583246.536567284
num_examples: 2024
- name: validation
num_bytes: 582958.3712824188
num_examples: 2023
download_size: 15122185
dataset_size: 23322081.0
---
# Dataset Card for "instructions-zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
indicbench/hellaswag_te | ---
dataset_info:
features:
- name: ind
dtype: int64
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 30192819
num_examples: 10042
- name: test
num_bytes: 28995872
num_examples: 10003
download_size: 21022349
dataset_size: 59188691
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
neel-17/audio_dataset | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype: int64
splits:
- name: train
num_bytes: 5879428208.184
num_examples: 1752
- name: validation
num_bytes: 1522701105.0
num_examples: 579
download_size: 5773793341
dataset_size: 7402129313.184
---
# Dataset Card for "audio_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distilled-one-sec-cv12-each-chunk-uniq/chunk_45 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1219034752.0
num_examples: 237536
download_size: 1245101001
dataset_size: 1219034752.0
---
# Dataset Card for "chunk_45"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MatsuoDochiai/Lanjax | ---
license: openrail
---
|
asnq | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- extended|natural_questions
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: asnq
pretty_name: Answer Sentence Natural Questions (ASNQ)
dataset_info:
features:
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: sentence_in_long_answer
dtype: bool
- name: short_answer_in_sentence
dtype: bool
splits:
- name: train
num_bytes: 3656865072
num_examples: 20377568
- name: validation
num_bytes: 168004403
num_examples: 930062
download_size: 2496835395
dataset_size: 3824869475
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "asnq"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/alexa/wqa_tanda#answer-sentence-natural-questions-asnq](https://github.com/alexa/wqa_tanda#answer-sentence-natural-questions-asnq)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection](https://arxiv.org/abs/1911.04118)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.56 GB
- **Size of the generated dataset:** 3.82 GB
- **Total amount of disk used:** 7.39 GB
### Dataset Summary
ASNQ is a dataset for answer sentence selection derived from
Google's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).
Each example contains a question, a candidate sentence, a label indicating whether or not
the sentence answers the question, and two additional features,
sentence_in_long_answer and short_answer_in_sentence, indicating whether or not the
candidate sentence is contained in the long_answer and whether the short_answer is in the candidate sentence.
For more details please see
https://arxiv.org/abs/1911.04118
and
https://research.google/pubs/pub47761/
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.56 GB
- **Size of the generated dataset:** 3.82 GB
- **Total amount of disk used:** 7.39 GB
An example of 'validation' looks as follows.
```
{
"label": 0,
"question": "when did somewhere over the rainbow come out",
"sentence": "In films and TV shows ( edit ) In the film Third Finger , Left Hand ( 1940 ) with Myrna Loy , Melvyn Douglas , and Raymond Walburn , the tune played throughout the film in short sequences .",
"sentence_in_long_answer": false,
"short_answer_in_sentence": false
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `question`: a `string` feature.
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0), `pos` (1).
- `sentence_in_long_answer`: a `bool` feature.
- `short_answer_in_sentence`: a `bool` feature.
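The `label` field is stored as an integer class label; a minimal sketch of decoding it (the mapping below mirrors the `class_label` names declared in this card's YAML header, `'0': neg`, `'1': pos`):

```python
# Map ASNQ's integer labels to their class names and back,
# following the class_label names from the dataset card's YAML.
LABEL_NAMES = ["neg", "pos"]

def int2str(label: int) -> str:
    """Decode an integer label to its class name."""
    return LABEL_NAMES[label]

def str2int(name: str) -> int:
    """Encode a class name as its integer label."""
    return LABEL_NAMES.index(name)
```

When the dataset is loaded with the `datasets` library, the equivalent mapping is available on the feature itself via `dataset.features["label"].int2str`.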
### Data Splits
| name | train |validation|
|-------|-------:|---------:|
|default|20377568| 930062|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The data is made available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License:
https://github.com/alexa/wqa_tanda/blob/master/LICENSE
### Citation Information
```
@article{Garg_2020,
title={TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection},
volume={34},
ISSN={2159-5399},
url={http://dx.doi.org/10.1609/AAAI.V34I05.6282},
DOI={10.1609/aaai.v34i05.6282},
number={05},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
publisher={Association for the Advancement of Artificial Intelligence (AAAI)},
author={Garg, Siddhant and Vu, Thuy and Moschitti, Alessandro},
year={2020},
month={Apr},
pages={7780–7788}
}
```
### Contributions
Thanks to [@mkserge](https://github.com/mkserge) for adding this dataset. |
MohamedRashad/rasaif-translations | ---
dataset_info:
features:
- name: arabic
dtype: string
- name: english
dtype: string
splits:
- name: train
num_bytes: 458802
num_examples: 1951
download_size: 245732
dataset_size: 458802
task_categories:
- translation
language:
- ar
- en
pretty_name: Rasaif Translation
size_categories:
- 1K<n<10K
---
# Dataset Source
https://rasaif.com |
metaltiger775/Scraped_Dataset_Articles | ---
license: mit
---
|
liuyanchen1015/MULTI_VALUE_cola_chaining_main_verbs | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 507
num_examples: 7
- name: test
num_bytes: 563
num_examples: 7
- name: train
num_bytes: 2714
num_examples: 35
download_size: 8023
dataset_size: 3784
---
# Dataset Card for "MULTI_VALUE_cola_chaining_main_verbs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ycchen/oasst_lima_arc | ---
dataset_info:
features:
- name: conversations
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 8102880
num_examples: 4970
download_size: 4569911
dataset_size: 8102880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oasst_lima_arc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cha7ura/dummy_data_peft | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k | ---
pretty_name: Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-30T16:16:53.571803](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k/blob/main/results_2023-12-30T16-16-53.571803.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6939517211944045,\n\
\ \"acc_stderr\": 0.030232673494217974,\n \"acc_norm\": 0.7084301138333359,\n\
\ \"acc_norm_stderr\": 0.031054743745039477,\n \"mc1\": 0.397796817625459,\n\
\ \"mc1_stderr\": 0.01713393424855964,\n \"mc2\": 0.5597457443511287,\n\
\ \"mc2_stderr\": 0.014917533204367936\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.23976109215017063,\n \"acc_stderr\": 0.012476304127453947,\n\
\ \"acc_norm\": 0.2909556313993174,\n \"acc_norm_stderr\": 0.013273077865907586\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6341366261700856,\n\
\ \"acc_stderr\": 0.004806870285747291,\n \"acc_norm\": 0.8227444732125074,\n\
\ \"acc_norm_stderr\": 0.0038110434120246514\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6666666666666666,\n\
\ \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.6666666666666666,\n\
\ \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7763157894736842,\n \"acc_stderr\": 0.033911609343436025,\n\
\ \"acc_norm\": 0.7763157894736842,\n \"acc_norm_stderr\": 0.033911609343436025\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7849056603773585,\n \"acc_stderr\": 0.02528839450289137,\n\
\ \"acc_norm\": 0.7849056603773585,\n \"acc_norm_stderr\": 0.02528839450289137\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8333333333333334,\n\
\ \"acc_stderr\": 0.031164899666948614,\n \"acc_norm\": 0.8333333333333334,\n\
\ \"acc_norm_stderr\": 0.031164899666948614\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n\
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n\
\ \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n\
\ \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5196078431372549,\n \"acc_stderr\": 0.04971358884367406,\n\
\ \"acc_norm\": 0.5196078431372549,\n \"acc_norm_stderr\": 0.04971358884367406\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6680851063829787,\n \"acc_stderr\": 0.030783736757745653,\n\
\ \"acc_norm\": 0.6680851063829787,\n \"acc_norm_stderr\": 0.030783736757745653\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6842105263157895,\n\
\ \"acc_stderr\": 0.043727482902780085,\n \"acc_norm\": 0.6842105263157895,\n\
\ \"acc_norm_stderr\": 0.043727482902780085\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6758620689655173,\n \"acc_stderr\": 0.03900432069185555,\n\
\ \"acc_norm\": 0.6758620689655173,\n \"acc_norm_stderr\": 0.03900432069185555\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4894179894179894,\n \"acc_stderr\": 0.02574554227604548,\n \"\
acc_norm\": 0.4894179894179894,\n \"acc_norm_stderr\": 0.02574554227604548\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5317460317460317,\n\
\ \"acc_stderr\": 0.04463112720677173,\n \"acc_norm\": 0.5317460317460317,\n\
\ \"acc_norm_stderr\": 0.04463112720677173\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8451612903225807,\n\
\ \"acc_stderr\": 0.020579287326583227,\n \"acc_norm\": 0.8451612903225807,\n\
\ \"acc_norm_stderr\": 0.020579287326583227\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.03465304488406796,\n\
\ \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.03465304488406796\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\"\
: 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n\
\ \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8535353535353535,\n \"acc_stderr\": 0.025190921114603918,\n \"\
acc_norm\": 0.8535353535353535,\n \"acc_norm_stderr\": 0.025190921114603918\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9222797927461139,\n \"acc_stderr\": 0.01932180555722315,\n\
\ \"acc_norm\": 0.9222797927461139,\n \"acc_norm_stderr\": 0.01932180555722315\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7051282051282052,\n \"acc_stderr\": 0.023119362758232297,\n\
\ \"acc_norm\": 0.7051282051282052,\n \"acc_norm_stderr\": 0.023119362758232297\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.362962962962963,\n \"acc_stderr\": 0.029318203645206865,\n \
\ \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.029318203645206865\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8109243697478992,\n \"acc_stderr\": 0.025435119438105353,\n\
\ \"acc_norm\": 0.8109243697478992,\n \"acc_norm_stderr\": 0.025435119438105353\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4503311258278146,\n \"acc_stderr\": 0.04062290018683776,\n \"\
acc_norm\": 0.4503311258278146,\n \"acc_norm_stderr\": 0.04062290018683776\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8917431192660551,\n \"acc_stderr\": 0.013321348447611764,\n \"\
acc_norm\": 0.8917431192660551,\n \"acc_norm_stderr\": 0.013321348447611764\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5787037037037037,\n \"acc_stderr\": 0.033674621388960775,\n \"\
acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.033674621388960775\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"\
acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8776371308016878,\n \"acc_stderr\": 0.021331741829746793,\n \
\ \"acc_norm\": 0.8776371308016878,\n \"acc_norm_stderr\": 0.021331741829746793\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7533632286995515,\n\
\ \"acc_stderr\": 0.028930413120910884,\n \"acc_norm\": 0.7533632286995515,\n\
\ \"acc_norm_stderr\": 0.028930413120910884\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.035477710041594626,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.035477710041594626\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8512396694214877,\n \"acc_stderr\": 0.03248470083807194,\n \"\
acc_norm\": 0.8512396694214877,\n \"acc_norm_stderr\": 0.03248470083807194\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8518518518518519,\n\
\ \"acc_stderr\": 0.03434300243630999,\n \"acc_norm\": 0.8518518518518519,\n\
\ \"acc_norm_stderr\": 0.03434300243630999\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7975460122699386,\n \"acc_stderr\": 0.031570650789119005,\n\
\ \"acc_norm\": 0.7975460122699386,\n \"acc_norm_stderr\": 0.031570650789119005\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5803571428571429,\n\
\ \"acc_stderr\": 0.046840993210771065,\n \"acc_norm\": 0.5803571428571429,\n\
\ \"acc_norm_stderr\": 0.046840993210771065\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.035865947385739734,\n\
\ \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.035865947385739734\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.905982905982906,\n\
\ \"acc_stderr\": 0.01911989279892498,\n \"acc_norm\": 0.905982905982906,\n\
\ \"acc_norm_stderr\": 0.01911989279892498\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \
\ \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8748403575989783,\n\
\ \"acc_stderr\": 0.011832954239305736,\n \"acc_norm\": 0.8748403575989783,\n\
\ \"acc_norm_stderr\": 0.011832954239305736\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7745664739884393,\n \"acc_stderr\": 0.022497230190967558,\n\
\ \"acc_norm\": 0.7745664739884393,\n \"acc_norm_stderr\": 0.022497230190967558\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5273743016759777,\n\
\ \"acc_stderr\": 0.016697420650642752,\n \"acc_norm\": 0.5273743016759777,\n\
\ \"acc_norm_stderr\": 0.016697420650642752\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7973856209150327,\n \"acc_stderr\": 0.023015446877985693,\n\
\ \"acc_norm\": 0.7973856209150327,\n \"acc_norm_stderr\": 0.023015446877985693\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7942122186495176,\n\
\ \"acc_stderr\": 0.022961339906764244,\n \"acc_norm\": 0.7942122186495176,\n\
\ \"acc_norm_stderr\": 0.022961339906764244\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.020736358408060006,\n\
\ \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.020736358408060006\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.524822695035461,\n \"acc_stderr\": 0.0297907192438297,\n \
\ \"acc_norm\": 0.524822695035461,\n \"acc_norm_stderr\": 0.0297907192438297\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5195567144719687,\n\
\ \"acc_stderr\": 0.012760464028289299,\n \"acc_norm\": 0.5195567144719687,\n\
\ \"acc_norm_stderr\": 0.012760464028289299\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.02518778666022726,\n\
\ \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.02518778666022726\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7549019607843137,\n \"acc_stderr\": 0.017401816711427657,\n \
\ \"acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.017401816711427657\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7959183673469388,\n \"acc_stderr\": 0.025801283475090506,\n\
\ \"acc_norm\": 0.7959183673469388,\n \"acc_norm_stderr\": 0.025801283475090506\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8706467661691543,\n\
\ \"acc_stderr\": 0.023729830881018526,\n \"acc_norm\": 0.8706467661691543,\n\
\ \"acc_norm_stderr\": 0.023729830881018526\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \
\ \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.032659863237109066\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n\
\ \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n\
\ \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8654970760233918,\n \"acc_stderr\": 0.026168221344662297,\n\
\ \"acc_norm\": 0.8654970760233918,\n \"acc_norm_stderr\": 0.026168221344662297\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.397796817625459,\n\
\ \"mc1_stderr\": 0.01713393424855964,\n \"mc2\": 0.5597457443511287,\n\
\ \"mc2_stderr\": 0.014917533204367936\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698334\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|arc:challenge|25_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|gsm8k|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hellaswag|10_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-30T16-16-53.571803.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-30T16-16-53.571803.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- '**/details_harness|winogrande|5_2023-12-30T16-16-53.571803.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-30T16-16-53.571803.parquet'
- config_name: results
data_files:
- split: 2023_12_30T16_16_53.571803
path:
- results_2023-12-30T16-16-53.571803.parquet
- split: latest
path:
- results_2023-12-30T16-16-53.571803.parquet
---
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-12-30T16:16:53.571803](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k/blob/main/results_2023-12-30T16-16-53.571803.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6939517211944045,
"acc_stderr": 0.030232673494217974,
"acc_norm": 0.7084301138333359,
"acc_norm_stderr": 0.031054743745039477,
"mc1": 0.397796817625459,
"mc1_stderr": 0.01713393424855964,
"mc2": 0.5597457443511287,
"mc2_stderr": 0.014917533204367936
},
"harness|arc:challenge|25": {
"acc": 0.23976109215017063,
"acc_stderr": 0.012476304127453947,
"acc_norm": 0.2909556313993174,
"acc_norm_stderr": 0.013273077865907586
},
"harness|hellaswag|10": {
"acc": 0.6341366261700856,
"acc_stderr": 0.004806870285747291,
"acc_norm": 0.8227444732125074,
"acc_norm_stderr": 0.0038110434120246514
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7763157894736842,
"acc_stderr": 0.033911609343436025,
"acc_norm": 0.7763157894736842,
"acc_norm_stderr": 0.033911609343436025
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7849056603773585,
"acc_stderr": 0.02528839450289137,
"acc_norm": 0.7849056603773585,
"acc_norm_stderr": 0.02528839450289137
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.031164899666948614,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.031164899666948614
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5196078431372549,
"acc_stderr": 0.04971358884367406,
"acc_norm": 0.5196078431372549,
"acc_norm_stderr": 0.04971358884367406
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6680851063829787,
"acc_stderr": 0.030783736757745653,
"acc_norm": 0.6680851063829787,
"acc_norm_stderr": 0.030783736757745653
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.043727482902780085,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.043727482902780085
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6758620689655173,
"acc_stderr": 0.03900432069185555,
"acc_norm": 0.6758620689655173,
"acc_norm_stderr": 0.03900432069185555
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4894179894179894,
"acc_stderr": 0.02574554227604548,
"acc_norm": 0.4894179894179894,
"acc_norm_stderr": 0.02574554227604548
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5317460317460317,
"acc_stderr": 0.04463112720677173,
"acc_norm": 0.5317460317460317,
"acc_norm_stderr": 0.04463112720677173
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8451612903225807,
"acc_stderr": 0.020579287326583227,
"acc_norm": 0.8451612903225807,
"acc_norm_stderr": 0.020579287326583227
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.03465304488406796,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.03465304488406796
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8242424242424242,
"acc_stderr": 0.02972094300622445,
"acc_norm": 0.8242424242424242,
"acc_norm_stderr": 0.02972094300622445
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8535353535353535,
"acc_stderr": 0.025190921114603918,
"acc_norm": 0.8535353535353535,
"acc_norm_stderr": 0.025190921114603918
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9222797927461139,
"acc_stderr": 0.01932180555722315,
"acc_norm": 0.9222797927461139,
"acc_norm_stderr": 0.01932180555722315
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7051282051282052,
"acc_stderr": 0.023119362758232297,
"acc_norm": 0.7051282051282052,
"acc_norm_stderr": 0.023119362758232297
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.362962962962963,
"acc_stderr": 0.029318203645206865,
"acc_norm": 0.362962962962963,
"acc_norm_stderr": 0.029318203645206865
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8109243697478992,
"acc_stderr": 0.025435119438105353,
"acc_norm": 0.8109243697478992,
"acc_norm_stderr": 0.025435119438105353
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4503311258278146,
"acc_stderr": 0.04062290018683776,
"acc_norm": 0.4503311258278146,
"acc_norm_stderr": 0.04062290018683776
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8917431192660551,
"acc_stderr": 0.013321348447611764,
"acc_norm": 0.8917431192660551,
"acc_norm_stderr": 0.013321348447611764
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5787037037037037,
"acc_stderr": 0.033674621388960775,
"acc_norm": 0.5787037037037037,
"acc_norm_stderr": 0.033674621388960775
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931792,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931792
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8776371308016878,
"acc_stderr": 0.021331741829746793,
"acc_norm": 0.8776371308016878,
"acc_norm_stderr": 0.021331741829746793
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7533632286995515,
"acc_stderr": 0.028930413120910884,
"acc_norm": 0.7533632286995515,
"acc_norm_stderr": 0.028930413120910884
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.035477710041594626,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.035477710041594626
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8512396694214877,
"acc_stderr": 0.03248470083807194,
"acc_norm": 0.8512396694214877,
"acc_norm_stderr": 0.03248470083807194
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8518518518518519,
"acc_stderr": 0.03434300243630999,
"acc_norm": 0.8518518518518519,
"acc_norm_stderr": 0.03434300243630999
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7975460122699386,
"acc_stderr": 0.031570650789119005,
"acc_norm": 0.7975460122699386,
"acc_norm_stderr": 0.031570650789119005
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5803571428571429,
"acc_stderr": 0.046840993210771065,
"acc_norm": 0.5803571428571429,
"acc_norm_stderr": 0.046840993210771065
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.035865947385739734,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.035865947385739734
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.905982905982906,
"acc_stderr": 0.01911989279892498,
"acc_norm": 0.905982905982906,
"acc_norm_stderr": 0.01911989279892498
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8748403575989783,
"acc_stderr": 0.011832954239305736,
"acc_norm": 0.8748403575989783,
"acc_norm_stderr": 0.011832954239305736
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7745664739884393,
"acc_stderr": 0.022497230190967558,
"acc_norm": 0.7745664739884393,
"acc_norm_stderr": 0.022497230190967558
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.5273743016759777,
"acc_stderr": 0.016697420650642752,
"acc_norm": 0.5273743016759777,
"acc_norm_stderr": 0.016697420650642752
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7973856209150327,
"acc_stderr": 0.023015446877985693,
"acc_norm": 0.7973856209150327,
"acc_norm_stderr": 0.023015446877985693
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7942122186495176,
"acc_stderr": 0.022961339906764244,
"acc_norm": 0.7942122186495176,
"acc_norm_stderr": 0.022961339906764244
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.020736358408060006,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.020736358408060006
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.524822695035461,
"acc_stderr": 0.0297907192438297,
"acc_norm": 0.524822695035461,
"acc_norm_stderr": 0.0297907192438297
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5195567144719687,
"acc_stderr": 0.012760464028289299,
"acc_norm": 0.5195567144719687,
"acc_norm_stderr": 0.012760464028289299
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7794117647058824,
"acc_stderr": 0.02518778666022726,
"acc_norm": 0.7794117647058824,
"acc_norm_stderr": 0.02518778666022726
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.017401816711427657,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.017401816711427657
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7959183673469388,
"acc_stderr": 0.025801283475090506,
"acc_norm": 0.7959183673469388,
"acc_norm_stderr": 0.025801283475090506
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8706467661691543,
"acc_stderr": 0.023729830881018526,
"acc_norm": 0.8706467661691543,
"acc_norm_stderr": 0.023729830881018526
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.032659863237109066,
"acc_norm": 0.88,
"acc_norm_stderr": 0.032659863237109066
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8654970760233918,
"acc_stderr": 0.026168221344662297,
"acc_norm": 0.8654970760233918,
"acc_norm_stderr": 0.026168221344662297
},
"harness|truthfulqa:mc|0": {
"mc1": 0.397796817625459,
"mc1_stderr": 0.01713393424855964,
"mc2": 0.5597457443511287,
"mc2_stderr": 0.014917533204367936
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698334
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
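For readers of the JSON above: the top-level `all` block appears to be a simple unweighted mean of the per-task metrics. A toy sketch of that aggregation (illustrative numbers only, not the actual results above):

```python
# Hedged sketch: aggregate per-task accuracies by their unweighted mean,
# which is how the "all" block above appears to be computed.
# Toy numbers, not the real results from this run.
per_task = {
    "harness|hendrycksTest-anatomy|5": {"acc": 0.60},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.70},
    "harness|hendrycksTest-virology|5": {"acc": 0.50},
}
mean_acc = sum(t["acc"] for t in per_task.values()) / len(per_task)
print(round(mean_acc, 4))  # 0.6
```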
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Noddy13/guanaco-llama2-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966692
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
winddude/finacial_pharsebank_66agree_split | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: apache-2.0
source_datasets:
- https://huggingface.co/datasets/financial_phrasebank
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
tags:
- finance
---
# FinancialPhrasebank 66agree train/test split
A selection of [FinancialPhrasebank](https://huggingface.co/datasets/financial_phrasebank) sentences whose classification at least 66% of annotators agreed on, split into 90% training and 10% test. |
senhorsapo/nick | ---
license: openrail
---
|
mteb/neuclir-2022 | ---
language:
- fas
- rus
- zho
multilinguality:
- multilingual
task_categories:
- text-retrieval
---
From the NeuCLIR TREC Track 2022: https://arxiv.org/abs/2304.12367
Generated from https://huggingface.co/datasets/neuclir/neuclir1
```
@article{lawrie2023overview,
title={Overview of the TREC 2022 NeuCLIR track},
author={Lawrie, Dawn and MacAvaney, Sean and Mayfield, James and McNamee, Paul and Oard, Douglas W and Soldaini, Luca and Yang, Eugene},
journal={arXiv preprint arXiv:2304.12367},
year={2023}
}
```
|
Tsuinzues/tenyaiida | ---
license: openrail
---
|
autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466617 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: facebook/opt-125m
metrics: []
dataset_name: futin/guess
dataset_config: en_3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
koakande/github-issues | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: labels
list:
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: id
dtype: int64
- name: name
dtype: string
- name: node_id
dtype: string
- name: url
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: assignees
list:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: milestone
struct:
- name: closed_at
dtype: string
- name: closed_issues
dtype: int64
- name: created_at
dtype: string
- name: creator
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: description
dtype: string
- name: due_on
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: labels_url
dtype: string
- name: node_id
dtype: string
- name: number
dtype: int64
- name: open_issues
dtype: int64
- name: state
dtype: string
- name: title
dtype: string
- name: updated_at
dtype: string
- name: url
dtype: string
- name: comments
dtype: int64
- name: created_at
dtype: timestamp[ns, tz=UTC]
- name: updated_at
dtype: timestamp[ns, tz=UTC]
- name: closed_at
dtype: timestamp[ns, tz=UTC]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: float64
- name: body
dtype: string
- name: reactions
struct:
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: confused
dtype: int64
- name: eyes
dtype: int64
- name: heart
dtype: int64
- name: hooray
dtype: int64
- name: laugh
dtype: int64
- name: rocket
dtype: int64
- name: total_count
dtype: int64
- name: url
dtype: string
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: float64
- name: state_reason
dtype: string
- name: draft
dtype: float64
- name: pull_request
struct:
- name: diff_url
dtype: string
- name: html_url
dtype: string
- name: merged_at
dtype: string
- name: patch_url
dtype: string
- name: url
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 19882005
num_examples: 6493
download_size: 4841547
dataset_size: 19882005
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yicozy/dataset_pfs_hr_by_subgroup | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6991314
num_examples: 8668
download_size: 0
dataset_size: 6991314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_pfs_hr_by_subgroup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-one-sec-cv12/chunk_147 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1072981148
num_examples: 210719
download_size: 1094154918
dataset_size: 1072981148
---
# Dataset Card for "chunk_147"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Moonny/ZladyN | ---
license: unlicense
---
|
Babypotatotang/logo-combined-weighted-b | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 167147766.253
num_examples: 12907
- name: test
num_bytes: 42122543.76
num_examples: 3232
download_size: 209180456
dataset_size: 209270310.01299998
---
# Dataset Card for "logo-combined-weighted-b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Duno9/text_inversion_toril | ---
license: openrail
---
|
CyberHarem/great_stone_statue_god_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of great_stone_statue_god/大いなる石像神/伟大的石像神 (Fate/Grand Order)
This is the dataset of great_stone_statue_god/大いなる石像神/伟大的石像神 (Fate/Grand Order), containing 101 images and their tags.
The core tags of this character are `brown_hair, long_hair, glasses, brown_eyes, breasts, bow, hair_bow, black-framed_eyewear, large_breasts, messy_hair, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 101 | 122.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/great_stone_statue_god_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 101 | 107.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/great_stone_statue_god_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 248 | 219.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/great_stone_statue_god_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/great_stone_statue_god_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 22 |  |  |  |  |  | 1girl, navel, plump, solo, belly, cleavage, looking_at_viewer, smile, blush, fat, jewelry, long_sleeves, simple_background, pants, bindi, white_background, open_mouth |
| 1 | 6 |  |  |  |  |  | 1girl, solo, belly, jeans, plump, looking_at_viewer, smile, tank_top, bare_shoulders, blush |
| 2 | 8 |  |  |  |  |  | 1girl, jeans, shirt, solo, long_sleeves, looking_at_viewer, sneakers, white_background, blue_pants, cardigan, simple_background, smile, bag, blush, sitting, collarbone, sleeves_past_wrists, standing, strap_between_breasts |
| 3 | 5 |  |  |  |  |  | 1girl, from_behind, huge_ass, looking_at_viewer, looking_back, plump, smile, solo, thick_thighs, blush, open_mouth, artist_name, barefoot, completely_nude, huge_breasts, nipples, yellow_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | navel | plump | solo | belly | cleavage | looking_at_viewer | smile | blush | fat | jewelry | long_sleeves | simple_background | pants | bindi | white_background | open_mouth | jeans | tank_top | bare_shoulders | shirt | sneakers | blue_pants | cardigan | bag | sitting | collarbone | sleeves_past_wrists | standing | strap_between_breasts | from_behind | huge_ass | looking_back | thick_thighs | artist_name | barefoot | completely_nude | huge_breasts | nipples | yellow_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------|:-------|:--------|:-----------|:--------------------|:--------|:--------|:------|:----------|:---------------|:--------------------|:--------|:--------|:-------------------|:-------------|:--------|:-----------|:-----------------|:--------|:-----------|:-------------|:-----------|:------|:----------|:-------------|:----------------------|:-----------|:------------------------|:--------------|:-----------|:---------------|:---------------|:--------------|:-----------|:------------------|:---------------|:----------|:--------------------|
| 0 | 22 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | X | X | X | | X | X | X | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | X | | | X | X | X | | | X | X | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | X | | | X | X | X | | | | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CVasNLPExperiments/Hatefulmemes_test_google_flan_t5_xxl_mode_CM_OCR_rices_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
sequence: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_text
num_bytes: 5957868
num_examples: 1000
download_size: 1101608
dataset_size: 5957868
---
# Dataset Card for "Hatefulmemes_test_google_flan_t5_xxl_mode_CM_OCR_rices_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shershen/ru_anglicism | ---
license: apache-2.0
dataset_info:
features:
- name: word
dtype: string
- name: form
dtype: string
- name: sentence
dtype: string
- name: paraphrase
dtype: string
splits:
- name: train
num_bytes: 480909
num_examples: 1007
- name: test
num_bytes: 42006
num_examples: 77
download_size: 290128
dataset_size: 522915
task_categories:
- text-generation
- text2text-generation
language:
- ru
size_categories:
- 1K<n<10K
---
# Dataset Card for Ru Anglicism
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
## Dataset Description
### Dataset Summary
A dataset for detecting anglicisms in Russian sentences and replacing them with native equivalents. Sentences containing anglicisms were automatically parsed from the National Corpus of the Russian Language, Habr and Pikabu; the paraphrases were created manually.
### Languages
The dataset is in Russian.
### Usage
Loading dataset:
```python
from datasets import load_dataset
dataset = load_dataset('shershen/ru_anglicism')
```
## Dataset Structure
### Data Instances
For each instance, there are four strings: word, form, sentence and paraphrase.
```
{
'word': 'коллаб',
'form': 'коллабу',
'sentence': 'Сделаем коллабу, раскрутимся.',
'paraphrase': 'Сделаем совместный проект, раскрутимся.'
}
```
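As the instance above illustrates, the inflected `form` of the anglicism occurs verbatim in `sentence`, and the `paraphrase` replaces it with a native equivalent. A minimal sketch of this relationship (verified here only for this one instance; whether it holds for every row is an assumption):

```python
# Illustrative check of how the fields relate, using the example instance above.
example = {
    "word": "коллаб",
    "form": "коллабу",
    "sentence": "Сделаем коллабу, раскрутимся.",
    "paraphrase": "Сделаем совместный проект, раскрутимся.",
}

# The inflected form of the anglicism occurs verbatim in the source sentence...
assert example["form"] in example["sentence"]
# ...and is replaced by a native-Russian equivalent in the paraphrase.
assert example["form"] not in example["paraphrase"]
```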
### Data Splits
The full dataset contains 1,084 sentences, split as follows:
| Dataset Split | Number of Rows |
|:---------|:---------|
| Train | 1007 |
| Test | 77 | |
DaDavinci/mixamo-gltf-library | ---
license: mit
---
|
datahrvoje/twitter_dataset_1713173919 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 21768
num_examples: 50
download_size: 11654
dataset_size: 21768
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
coastalcph/fm_classifier-1-1 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1095051.1775751072
num_examples: 6230
- name: validation
num_bytes: 995400.6136754095
num_examples: 5783
- name: test
num_bytes: 858612.5253924284
num_examples: 4360
download_size: 1062146
dataset_size: 2949064.316642945
---
# Dataset Card for "fm_classifier-1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_stsb_null_referential_pronouns | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 19529
num_examples: 91
- name: test
num_bytes: 11035
num_examples: 58
- name: train
num_bytes: 55914
num_examples: 242
download_size: 68262
dataset_size: 86478
---
# Dataset Card for "MULTI_VALUE_stsb_null_referential_pronouns"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
krasaee/nietzsche | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9929433
num_examples: 60480
download_size: 6288420
dataset_size: 9929433
---
# Dataset Card for "nietzsche"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PlanTL-GOB-ES/SQAC | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Spanish Question Answering Corpus (SQAC)
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# SQAC (Spanish Question-Answering Corpus)
## Dataset Description
SQAC is an extractive QA dataset for the Spanish language.
- **Paper:** [MarIA: Spanish Language Models](https://upcommons.upc.edu/bitstream/handle/2117/367156/6405-5863-1-PB%20%281%29.pdf?sequence=1)
- **Point of Contact:** carlos.rodriguez1@bsc.es
- **Leaderboard:** [EvalEs](https://plantl-gob-es.github.io/spanish-benchmark/)
### Dataset Summary
SQAC contains 6,247 contexts and 18,817 questions with their respective answers, 1 to 5 per context.
The sources of the contexts are:
* Encyclopedic articles from the [Spanish Wikipedia](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News articles from [Wikinews](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
* Newswire and literature text from the [AnCora corpus](http://clic.ub.edu/corpus/en), used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
### Supported Tasks
Extractive-QA
### Languages
- Spanish (es)
### Directory Structure
- README.md
- SQAC.py
- dev.json
- test.json
- train.json
## Dataset Structure
### Data Instances
<pre>
{
'id': '6cf3dcd6-b5a3-4516-8f9e-c5c1c6b66628',
'title': 'Historia de Japón',
'context': 'La historia de Japón (日本の歴史 o 日本史, Nihon no rekishi / Nihonshi?) es la sucesión de hechos acontecidos dentro del archipiélago japonés. Algunos de estos hechos aparecen aislados e influenciados por la naturaleza geográfica de Japón como nación insular, en tanto que otra serie de hechos, obedece a influencias foráneas como en el caso del Imperio chino, el cual definió su idioma, su escritura y, también, su cultura política. Asimismo, otra de las influencias foráneas fue la de origen occidental, lo que convirtió al país en una nación industrial, ejerciendo con ello una esfera de influencia y una expansión territorial sobre el área del Pacífico. No obstante, dicho expansionismo se detuvo tras la Segunda Guerra Mundial y el país se posicionó en un esquema de nación industrial con vínculos a su tradición cultural.',
'question': '¿Qué influencia convirtió Japón en una nación industrial?',
'answers': {
'text': ['la de origen occidental'],
'answer_start': [473]
}
}
</pre>
### Data Fields
<pre>
{
id: str
title: str
context: str
question: str
answers: {
answer_start: [int]
text: [str]
}
}
</pre>
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 15,036 |
| `dev` | 1,864 |
| `test` | 1,910 |
## Content analysis
### Number of articles, paragraphs and questions
* Number of articles: 3,834
* Number of contexts: 6,247
* Number of questions: 18,817
* Number of sentences: 48,026
* Questions/Context ratio: 3.01
* Sentences/Context ratio: 7.70
### Number of tokens
* Total tokens in context: 1,561,616
* Average tokens/context: 250
* Total tokens in questions: 203,235
* Average tokens/question: 10.80
* Total tokens in answers: 90,307
* Average tokens/answer: 4.80
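The per-item averages above are simply ratios of the reported totals; a short, purely illustrative sanity check:

```python
# Recompute the reported averages from the raw totals listed above.
contexts, questions = 6_247, 18_817
context_tokens, question_tokens, answer_tokens = 1_561_616, 203_235, 90_307

assert round(questions / contexts, 2) == 3.01          # Questions/Context ratio
assert round(context_tokens / contexts) == 250          # Average tokens/context
assert round(question_tokens / questions, 2) == 10.80   # Average tokens/question
assert round(answer_tokens / questions, 2) == 4.80      # Average tokens/answer
```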
### Lexical variation
46.38% of the words in the Question can be found in the Context.
### Question type
| Question | Count | % |
|----------|-------:|---:|
| qué | 6,381 | 33.91 % |
| quién/es | 2,952 | 15.69 % |
| cuál/es | 2,034 | 10.81 % |
| cómo | 1,949 | 10.36 % |
| dónde | 1,856 | 9.86 % |
| cuándo | 1,639 | 8.71 % |
| cuánto | 1,311 | 6.97 % |
| cuántos | 495 |2.63 % |
| adónde | 100 | 0.53 % |
| cuánta | 49 | 0.26 % |
| no question mark | 43 | 0.23 % |
| cuántas | 19 | 0.10 % |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed the existing curation guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250) as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
The source data are scraped articles from Wikinews, the Spanish Wikipedia and the AnCora corpus.
- [Spanish Wikipedia](https://es.wikipedia.org)
- [Spanish Wikinews](https://es.wikinews.org/)
- [AnCora corpus](http://clic.ub.edu/corpus/en)
#### Who are the source language producers?
Contributors to the aforementioned sites.
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250).
#### Who are the annotators?
Native language speakers.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of language models in Spanish.
### Discussion of Biases
No postprocessing steps were applied to mitigate potential social biases.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```
@article{maria,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
### Contributions
[N/A]
|
tathagataraha/ficle | ---
dataset_info:
features:
- name: Claim
dtype: string
- name: Context
dtype: string
- name: Source
dtype: string
- name: Source Indices
dtype: string
- name: Relation
dtype: string
- name: Relation Indices
dtype: string
- name: Target
dtype: string
- name: Target Indices
dtype: string
- name: Inconsistent Claim Component
dtype: string
- name: Inconsistent Context-Span
dtype: string
- name: Inconsistent Context-Span Indices
dtype: string
- name: Inconsistency Type
dtype: string
- name: Fine-grained Inconsistent Entity-Type
dtype: string
- name: Coarse Inconsistent Entity-Type
dtype: string
splits:
- name: train
num_bytes: 2657091
num_examples: 6443
- name: validation
num_bytes: 333142
num_examples: 806
- name: test
num_bytes: 332484
num_examples: 806
download_size: 1784422
dataset_size: 3322717
task_categories:
- token-classification
- text-classification
- text-generation
language:
- en
pretty_name: FICLE
size_categories:
- 1K<n<10K
license: gpl-3.0
tags:
- span
- explanation
---
# FICLE Dataset
The dataset can be loaded and utilized through the following:
```python
from datasets import load_dataset
ficle_data = load_dataset("tathagataraha/ficle")
```
# Dataset card for FICLE
## Dataset Description
* **GitHub Repo:** https://github.com/blitzprecision/FICLE
* **Paper:**
* **Point of Contact:**
### Dataset Summary
The FICLE dataset is a derivative of the FEVER dataset, which is a collection of 185,445 claims generated by modifying sentences obtained from Wikipedia.
These claims were then verified without knowledge of the original sentences they were derived from. Each sample in the FEVER dataset consists of a claim sentence, a context sentence extracted from a Wikipedia URL as evidence, and a type label indicating whether the claim is supported, refuted, or lacks sufficient information.
### Languages
The FICLE Dataset contains only English.
## Dataset Structure
### Data Fields
* `Claim (string)`: A statement or proposition relating to the consistency or inconsistency of certain facts or information.
* `Context (string)`: The surrounding information or background against which the claim is being evaluated or compared. It provides additional details or evidence that can support or challenge the claim.
* `Source (string)`: It is the linguistic chunk containing the entity lying to the left of the main verb/relating chunk.
* `Source Indices (string)`: Source indices refer to the specific indices or positions within the source string that indicate the location of the relevant information.
* `Relation (string)`: It is the linguistic chunk containing the verb/relation at the core of the identified inconsistency.
* `Relation Indices (string)`: Relation indices indicate the specific indices or positions within the relation string that highlight the location of the relevant information.
* `Target (string)`: It is the linguistic chunk containing the entity lying to the right of the main verb/relating chunk.
* `Target Indices (string)`: Target indices represent the specific indices or positions within the target string that indicate the location of the relevant information.
* `Inconsistent Claim Component (string)`: The inconsistent claim component refers to a specific linguistic chunk within the claim that is identified as inconsistent with the context. It helps identify which part of the claim triple is problematic in terms of its alignment with the surrounding information.
* `Inconsistent Context-Span (string)`: A span or portion marked within the context sentence that is found to be inconsistent with the claim. It highlights a discrepancy or contradiction between the information in the claim and the corresponding context.
* `Inconsistent Context-Span Indices (string)`: The specific indices or location within the context sentence that indicate the inconsistent span.
* `Inconsistency Type (string)`: The category or type of inconsistency identified in the claim and context.
* `Fine-grained Inconsistent Entity-Type (string)`: The specific detailed category or type of entity causing the inconsistency within the claim or context. It provides a more granular classification of the entity associated with the inconsistency.
* `Coarse Inconsistent Entity-Type (string)`: The broader or general category or type of entity causing the inconsistency within the claim or context. It provides a higher-level classification of the entity associated with the inconsistency.
### Data Splits
The FICLE dataset comprises a total of 8,055 samples in the English language, each representing different instances of inconsistencies.
These inconsistencies are categorized into five types: Taxonomic Relations (4,842 samples), Negation (1,630 samples), Set Based (642 samples), Gradable (526 samples), and Simple (415 samples).
Within the dataset, there are six possible components that contribute to the inconsistencies found in the claim sentences.
These components are distributed as follows: Target-Head (3,960 samples), Target-Modifier (1,529 samples), Relation-Head (951 samples), Relation-Modifier (1,534 samples), Source-Head (45 samples), and Source-Modifier (36 samples).
The dataset is split into `train`, `validation`, and `test`.
* `train`: 6.44k rows
* `validation`: 806 rows
* `test`: 806 rows
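Both breakdowns above partition the full set of 8,055 samples; a small illustrative check of the counts:

```python
# Counts reported above for the five inconsistency types...
inconsistency_types = {
    "Taxonomic Relations": 4842,
    "Negation": 1630,
    "Set Based": 642,
    "Gradable": 526,
    "Simple": 415,
}
# ...and for the six inconsistent claim components.
claim_components = {
    "Target-Head": 3960,
    "Target-Modifier": 1529,
    "Relation-Head": 951,
    "Relation-Modifier": 1534,
    "Source-Head": 45,
    "Source-Modifier": 36,
}

# Each partition covers all 8,055 samples in the dataset.
assert sum(inconsistency_types.values()) == 8055
assert sum(claim_components.values()) == 8055
```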
## Dataset Creation
### Curation Rationale
We propose a linguistically enriched dataset to help detect inconsistencies and explain them.
To this end, the broad requirements are to locate where the inconsistency is present between a claim and a context and to have a classification scheme for better explainability.
### Data Collection and Preprocessing
The FICLE dataset is derived from the FEVER dataset using the following processing steps. FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentences they were derived from. Every sample in the FEVER dataset contains the claim sentence, an evidence (or context) sentence from a Wikipedia URL, and a type label (‘supports’, ‘refutes’, or ‘not enough info’). Of these, we leverage only the samples with the ‘refutes’ label to build our dataset.
### Annotations
You can see the annotation guidelines [here](https://github.com/blitzprecision/FICLE/blob/main/ficle_annotation_guidelines.pdf).
In order to provide detailed explanations for inconsistencies, extensive annotations were conducted for each sample in the FICLE dataset. The annotation process involved two iterations, with each iteration focusing on different aspects of the dataset.
In the first iteration, the annotations were primarily "syntactic-oriented." These fields included identifying the inconsistent claim fact triple, marking inconsistent context spans, and categorizing the six possible inconsistent claim components.
The second iteration of annotations concentrated on "semantic-oriented" aspects. Annotators labeled semantic fields for each sample, such as the type of inconsistency, coarse inconsistent entity types, and fine-grained inconsistent entity types.
This stage aimed to capture the semantic nuances and provide a deeper understanding of the inconsistencies present in the dataset.
The annotation process was carried out by a group of four annotators, two of whom are also authors of the dataset. The annotators possess a strong command of the English language and hold Bachelor's degrees in Computer Science, specializing in computational linguistics.
Their expertise in the field ensured accurate and reliable annotations. The annotators' ages range from 20 to 22 years, indicating their familiarity with contemporary language usage and computational linguistic concepts.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Citation Information
```
@misc{raha2023neural,
title={Neural models for Factual Inconsistency Classification with Explanations},
author={Tathagata Raha and Mukund Choudhary and Abhinav Menon and Harshit Gupta and KV Aditya Srivatsa and Manish Gupta and Vasudeva Varma},
year={2023},
eprint={2306.08872},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contact |
gsarti/iwslt2017_context | ---
annotations_creators:
- crowdsourced
language:
- ar
- de
- en
- fr
- it
- ja
- ko
- nl
- ro
- zh
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2017
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2017
dataset_info:
- config_name: iwslt2017-en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-it-en
features:
- name: translation
dtype:
translation:
languages:
- it
- en
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-it-nl
features:
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-it-ro
features:
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-nl-en
features:
- name: translation
dtype:
translation:
languages:
- nl
- en
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-nl-it
features:
- name: translation
dtype:
translation:
languages:
- nl
- it
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-nl-ro
features:
- name: translation
dtype:
translation:
languages:
- nl
- ro
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ro-en
features:
- name: translation
dtype:
translation:
languages:
- ro
- en
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-ro-it
features:
- name: translation
dtype:
translation:
languages:
- ro
- it
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-ro-nl
features:
- name: translation
dtype:
translation:
languages:
- ro
- nl
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 27748780
dataset_size: 58736561
- config_name: iwslt2017-de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758320
dataset_size: 44427829
- config_name: iwslt2017-en-ar
features:
- name: translation
dtype:
translation:
languages:
- en
- ar
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 29333173
dataset_size: 58736561
- config_name: iwslt2017-en-de
features:
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758334
dataset_size: 44427829
- config_name: iwslt2017-en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 27699724
dataset_size: 51248330
- config_name: iwslt2017-en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26983602
dataset_size: 50222118
- config_name: iwslt2017-en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364776
dataset_size: 53767131
- config_name: iwslt2017-en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 27597071
dataset_size: 46079068
- config_name: iwslt2017-fr-en
features:
- name: translation
dtype:
translation:
languages:
- fr
- en
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 26880731
dataset_size: 51248330
- config_name: iwslt2017-ja-en
features:
- name: translation
dtype:
translation:
languages:
- ja
- en
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26190859
dataset_size: 50222118
- config_name: iwslt2017-ko-en
features:
- name: translation
dtype:
translation:
languages:
- ko
- en
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364733
dataset_size: 53767131
- config_name: iwslt2017-zh-en
features:
- name: translation
dtype:
translation:
languages:
- zh
- en
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 26849290
dataset_size: 46079068
---
# Dataset Card for IWSLT 2017
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2017/TED-tasks](https://sites.google.com/site/iwsltevaluation2017/TED-tasks)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Overview of the IWSLT 2017 Evaluation Campaign](https://aclanthology.org/2017.iwslt-1.1/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.24 GB
- **Size of the generated dataset:** 1.14 GB
- **Total amount of disk used:** 5.38 GB
*This repository contains a modified version of the loading script from the official [iwslt2017](https://huggingface.co/datasets/iwslt2017) repository, updated to include document and segment information for all available sentence pairs, enabling their use for document-level and context-aware MT applications. Refer to the original repository for additional information.*
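For reference, the configuration names listed in the metadata above follow a fixed `iwslt2017-<src>-<tgt>` pattern. The helper below is a minimal sketch (the function name is ours, not part of the dataset); loading then uses the standard `datasets` call shown in the comment.

```python
def config_name(src: str, tgt: str) -> str:
    """Build a config name such as 'iwslt2017-en-it' from two language codes."""
    return f"iwslt2017-{src}-{tgt}"

print(config_name("en", "it"))  # → iwslt2017-en-it

# Loading a pair (requires the `datasets` library and network access):
# from datasets import load_dataset
# data = load_dataset("gsarti/iwslt2017_context", config_name("en", "it"), split="train")
```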
|
open-llm-leaderboard/details_leveldevai__TurdusBeagle-7B | ---
pretty_name: Evaluation run of leveldevai/TurdusBeagle-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_leveldevai__TurdusBeagle-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-18T18:27:55.293799](https://huggingface.co/datasets/open-llm-leaderboard/details_leveldevai__TurdusBeagle-7B/blob/main/results_2024-01-18T18-27-55.293799.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6533770356186305,\n\
\ \"acc_stderr\": 0.032071476577749926,\n \"acc_norm\": 0.6525881962766505,\n\
\ \"acc_norm_stderr\": 0.032742193158041825,\n \"mc1\": 0.576499388004896,\n\
\ \"mc1_stderr\": 0.01729742144853475,\n \"mc2\": 0.6971449186129537,\n\
\ \"mc2_stderr\": 0.015083616284271144\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7167235494880546,\n \"acc_stderr\": 0.013167478735134575,\n\
\ \"acc_norm\": 0.7363481228668942,\n \"acc_norm_stderr\": 0.01287592915129704\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.722266480780721,\n\
\ \"acc_stderr\": 0.004469659042824774,\n \"acc_norm\": 0.8888667596096396,\n\
\ \"acc_norm_stderr\": 0.0031365472766898906\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n\
\ \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \
\ \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n\
\ \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\"\
: 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736411,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736411\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.032400380867927465,\n\
\ \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.032400380867927465\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555498,\n\
\ \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555498\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4074074074074074,\n \"acc_stderr\": 0.02530590624159063,\n \"\
acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.02530590624159063\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7870967741935484,\n\
\ \"acc_stderr\": 0.023287665127268545,\n \"acc_norm\": 0.7870967741935484,\n\
\ \"acc_norm_stderr\": 0.023287665127268545\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603348,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603348\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.676923076923077,\n \"acc_stderr\": 0.02371088850197057,\n \
\ \"acc_norm\": 0.676923076923077,\n \"acc_norm_stderr\": 0.02371088850197057\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028593,\n \
\ \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028593\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886786,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886786\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374308,\n \"\
acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374308\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"\
acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"\
acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7890295358649789,\n \"acc_stderr\": 0.026558372502661916,\n \
\ \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.026558372502661916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.01358661921990334,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.01358661921990334\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7312138728323699,\n \"acc_stderr\": 0.023868003262500104,\n\
\ \"acc_norm\": 0.7312138728323699,\n \"acc_norm_stderr\": 0.023868003262500104\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.441340782122905,\n\
\ \"acc_stderr\": 0.016607021781050873,\n \"acc_norm\": 0.441340782122905,\n\
\ \"acc_norm_stderr\": 0.016607021781050873\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818733,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818733\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n\
\ \"acc_stderr\": 0.02549425935069491,\n \"acc_norm\": 0.7202572347266881,\n\
\ \"acc_norm_stderr\": 0.02549425935069491\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n \
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\"\
: 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \"\
acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4706649282920469,\n\
\ \"acc_stderr\": 0.012748238397365549,\n \"acc_norm\": 0.4706649282920469,\n\
\ \"acc_norm_stderr\": 0.012748238397365549\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.028332959514031208,\n\
\ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.028332959514031208\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6666666666666666,\n \"acc_stderr\": 0.019070985589687495,\n \
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.019070985589687495\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n\
\ \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197771,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197771\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n\
\ \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n\
\ \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.576499388004896,\n\
\ \"mc1_stderr\": 0.01729742144853475,\n \"mc2\": 0.6971449186129537,\n\
\ \"mc2_stderr\": 0.015083616284271144\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8389897395422258,\n \"acc_stderr\": 0.010329712832785722\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7005307050796058,\n \
\ \"acc_stderr\": 0.012616300735519654\n }\n}\n```"
repo_url: https://huggingface.co/leveldevai/TurdusBeagle-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|arc:challenge|25_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|gsm8k|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hellaswag|10_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T18-27-55.293799.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-18T18-27-55.293799.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- '**/details_harness|winogrande|5_2024-01-18T18-27-55.293799.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-18T18-27-55.293799.parquet'
- config_name: results
data_files:
- split: 2024_01_18T18_27_55.293799
path:
- results_2024-01-18T18-27-55.293799.parquet
- split: latest
path:
- results_2024-01-18T18-27-55.293799.parquet
---
# Dataset Card for Evaluation run of leveldevai/TurdusBeagle-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of the model [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_leveldevai__TurdusBeagle-7B",
"harness_winogrande_5",
	split="latest")
```
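The configuration names above follow the pattern `harness|<task>|<num_fewshot>`. As a minimal sketch of working with the loaded details, the snippet below parses such keys into per-task rows; the sample values are copied from the results shown further down, and the helper name `to_rows` is illustrative, not part of the `datasets` API:

```python
# Sample of the per-task results dict, as reported in "Latest results" below.
results = {
    "harness|arc:challenge|25": {"acc": 0.7167235494880546, "acc_norm": 0.7363481228668942},
    "harness|hellaswag|10": {"acc": 0.722266480780721, "acc_norm": 0.8888667596096396},
}

def to_rows(results):
    """Turn '<suite>|<task>|<num_fewshot>' keys into flat per-task records."""
    rows = []
    for key, metrics in results.items():
        _, task, shots = key.split("|")  # e.g. "harness", "arc:challenge", "25"
        rows.append({"task": task, "fewshot": int(shots), **metrics})
    return rows

rows = to_rows(results)
```

Each row then carries the task name, the few-shot setting, and the reported metrics, which is convenient for building a comparison table across models.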
## Latest results
These are the [latest results from run 2024-01-18T18:27:55.293799](https://huggingface.co/datasets/open-llm-leaderboard/details_leveldevai__TurdusBeagle-7B/blob/main/results_2024-01-18T18-27-55.293799.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in the "latest" split of its configuration):
```python
{
"all": {
"acc": 0.6533770356186305,
"acc_stderr": 0.032071476577749926,
"acc_norm": 0.6525881962766505,
"acc_norm_stderr": 0.032742193158041825,
"mc1": 0.576499388004896,
"mc1_stderr": 0.01729742144853475,
"mc2": 0.6971449186129537,
"mc2_stderr": 0.015083616284271144
},
"harness|arc:challenge|25": {
"acc": 0.7167235494880546,
"acc_stderr": 0.013167478735134575,
"acc_norm": 0.7363481228668942,
"acc_norm_stderr": 0.01287592915129704
},
"harness|hellaswag|10": {
"acc": 0.722266480780721,
"acc_stderr": 0.004469659042824774,
"acc_norm": 0.8888667596096396,
"acc_norm_stderr": 0.0031365472766898906
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.720754716981132,
"acc_stderr": 0.027611163402399715,
"acc_norm": 0.720754716981132,
"acc_norm_stderr": 0.027611163402399715
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736411,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736411
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.032400380867927465,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.032400380867927465
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555498,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555498
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.02530590624159063,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.02530590624159063
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7870967741935484,
"acc_stderr": 0.023287665127268545,
"acc_norm": 0.7870967741935484,
"acc_norm_stderr": 0.023287665127268545
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603348,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603348
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.676923076923077,
"acc_stderr": 0.02371088850197057,
"acc_norm": 0.676923076923077,
"acc_norm_stderr": 0.02371088850197057
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028593,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886786,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886786
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.038615575462551684,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.038615575462551684
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374308,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374308
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931792,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931792
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.026558372502661916,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.026558372502661916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092375,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092375
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.01358661921990334,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.01358661921990334
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7312138728323699,
"acc_stderr": 0.023868003262500104,
"acc_norm": 0.7312138728323699,
"acc_norm_stderr": 0.023868003262500104
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.441340782122905,
"acc_stderr": 0.016607021781050873,
"acc_norm": 0.441340782122905,
"acc_norm_stderr": 0.016607021781050873
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.02549425935069491,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.02549425935069491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4706649282920469,
"acc_stderr": 0.012748238397365549,
"acc_norm": 0.4706649282920469,
"acc_norm_stderr": 0.012748238397365549
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.028332959514031208,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.028332959514031208
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.019070985589687495,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.019070985589687495
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197771,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197771
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.576499388004896,
"mc1_stderr": 0.01729742144853475,
"mc2": 0.6971449186129537,
"mc2_stderr": 0.015083616284271144
},
"harness|winogrande|5": {
"acc": 0.8389897395422258,
"acc_stderr": 0.010329712832785722
},
"harness|gsm8k|5": {
"acc": 0.7005307050796058,
"acc_stderr": 0.012616300735519654
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Falah/book_cover_prompts_with_text | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 297202
num_examples: 1000
download_size: 30394
dataset_size: 297202
---
# Dataset Card for "book_cover_prompts_with_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
income/bioasq-top-20-gen-queries | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
---
# NFCorpus: 20 generated queries (BEIR Benchmark)
This HF dataset contains the top-20 synthetic queries generated for each passage in the corresponding BEIR benchmark dataset.
- DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1)
- `id` (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`).
- Questions generated: 20
- Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py)
Below is the old dataset card for the BEIR benchmark.
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with a document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
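The three files above can be read with the standard library alone. The following is a minimal sketch of parsing them into plain dictionaries; the helper names and file paths are illustrative, not part of any BEIR package:

```python
import csv
import json

def load_corpus(path):
    """Parse corpus.jsonl into {_id: {"title": ..., "text": ...}}."""
    corpus = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    return corpus

def load_queries(path):
    """Parse queries.jsonl into {_id: query text}."""
    with open(path, encoding="utf-8") as f:
        return {q["_id"]: q["text"] for q in map(json.loads, f)}

def load_qrels(path):
    """Parse qrels.tsv (header row, then query-id, corpus-id, score columns)
    into {query_id: {corpus_id: score}}."""
    qrels = {}
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels
```

The resulting dictionaries have exactly the shape shown in the Data Instances example below.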
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
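The `qrels` dictionary above is what scoring code consumes. As a minimal illustration (not the benchmark's official evaluation, which reports metrics such as nDCG@10), precision@k over ranked retrieval results can be computed as:

```python
def precision_at_k(qrels, results, k):
    """Fraction of the top-k retrieved documents that are judged relevant.

    qrels:   {query_id: {doc_id: relevance_score}}
    results: {query_id: [doc_id, ...]}  # ranked, best first
    """
    scores = {}
    for query_id, ranking in results.items():
        relevant = qrels.get(query_id, {})
        hits = sum(1 for doc_id in ranking[:k] if relevant.get(doc_id, 0) > 0)
        scores[query_id] = hits / k
    return scores

# Using the example above: q1 retrieves doc1 at rank 2, so precision@2 = 0.5.
qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
results = {"q1": ["doc2", "doc1"], "q2": ["doc2", "doc1"]}
```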
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
	- `query-id`: a `string` feature representing the query id.
	- `corpus-id`: a `string` feature, denoting the document id.
	- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
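The download column above pairs each zip with an MD5 checksum. After fetching an archive, you can verify it before unpacking. A minimal sketch (the file path is illustrative; only the hashing logic is shown):

```python
import hashlib


def md5_of_file(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its hex MD5 digest,
    suitable for comparison against the checksum column above."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Example (hypothetical local path):
# assert md5_of_file("scifact.zip") == "5f7d1de60b170fc8027bb7898e2efca1"
```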
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
qgallouedec/prj_gia_dataset_metaworld_faucet_open_v2_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning dataset for the Metaworld faucet-open-v2 environment, sampled from a policy trained on faucet-open-v2.
This dataset was created as part of the GIA (Generally Intelligent Agents) project: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_faucet_open_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_faucet_open_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
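Each array in the loaded dict is aligned per timestep, so individual episodes can be recovered by splitting at the `dones` flags. A minimal sketch (the toy dict below stands in for the loaded dataset; the real field shapes are an assumption):

```python
import numpy as np


def split_episodes(dataset):
    """Split flat per-timestep arrays into per-episode chunks at `dones`."""
    # indices just *after* each terminal step mark episode boundaries
    ends = np.flatnonzero(dataset["dones"]) + 1
    return [
        {key: np.asarray(dataset[key])[start:end] for key in dataset}
        for start, end in zip([0, *ends[:-1]], ends)
    ]


# toy stand-in with the same keys as the real dataset
toy = {
    "observations": np.arange(5),
    "actions": np.arange(5) * 10,
    "rewards": np.ones(5),
    "dones": np.array([0, 0, 1, 0, 1], dtype=bool),
}
episodes = split_episodes(toy)
print(len(episodes))  # prints 2
```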
|
macadeliccc/distilabel-math-preferences | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
sequence: string
- name: generation_prompt
list:
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: rating
sequence: float64
- name: rationale
sequence: string
splits:
- name: train
num_bytes: 1694124
num_examples: 100
download_size: 566698
dataset_size: 1694124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
---
|
inorrr/BiLD_translation_discrim_eval | ---
license: apache-2.0
---
This dataset builds on iwslt2017 with additional unsupervised text classification to evaluate LLM translation for discrimination issues.
plaguss/test-dataset | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
sequence: string
- name: generation_prompt
list:
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
splits:
- name: train
num_bytes: 4270
num_examples: 2
download_size: 16358
dataset_size: 4270
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
openfun/tw-company-data | ---
license: apache-2.0
---
|
Minata/reduced_ast_method2test_v1 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 8220780
num_examples: 4905
download_size: 946349
dataset_size: 8220780
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Jeffzera/Sharonpt | ---
license: openrail
---
|
saibo/bookcorpus_compact_1024_shard3_of_10 | ---
dataset_info:
features:
- name: text
dtype: string
- name: concept_with_offset
dtype: string
splits:
- name: train
num_bytes: 764655737
num_examples: 61605
download_size: 384654577
dataset_size: 764655737
---
# Dataset Card for "bookcorpus_compact_1024_shard3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
delphi-suite/v0-next-logprobs-llama2-800k | ---
dataset_info:
features:
- name: logprobs
sequence: float64
splits:
- name: validation
num_bytes: 45818277
num_examples: 10982
download_size: 37757475
dataset_size: 45818277
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
shuvom/med_data | ---
license: mit
---
|
Chinchis/imagenes | ---
license: gpl
---
|
open-llm-leaderboard/details_vicgalle__zephyr-7b-truthy | ---
pretty_name: Evaluation run of vicgalle/zephyr-7b-truthy
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [vicgalle/zephyr-7b-truthy](https://huggingface.co/vicgalle/zephyr-7b-truthy)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_vicgalle__zephyr-7b-truthy\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-10T14:59:44.699643](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__zephyr-7b-truthy/blob/main/results_2024-02-10T14-59-44.699643.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.593174337288042,\n\
\ \"acc_stderr\": 0.033460408309810855,\n \"acc_norm\": 0.5997217687076803,\n\
\ \"acc_norm_stderr\": 0.034170774358741766,\n \"mc1\": 0.45532435740514077,\n\
\ \"mc1_stderr\": 0.017433490102538765,\n \"mc2\": 0.6330887790426952,\n\
\ \"mc2_stderr\": 0.01528797501626636\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5793515358361775,\n \"acc_stderr\": 0.0144262112525084,\n\
\ \"acc_norm\": 0.6075085324232082,\n \"acc_norm_stderr\": 0.014269634635670717\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6409081856203943,\n\
\ \"acc_stderr\": 0.004787537385153002,\n \"acc_norm\": 0.8464449312885879,\n\
\ \"acc_norm_stderr\": 0.0035978491398150577\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542129,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542129\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n\
\ \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n\
\ \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5723684210526315,\n \"acc_stderr\": 0.04026097083296563,\n\
\ \"acc_norm\": 0.5723684210526315,\n \"acc_norm_stderr\": 0.04026097083296563\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6452830188679245,\n \"acc_stderr\": 0.029445175328199586,\n\
\ \"acc_norm\": 0.6452830188679245,\n \"acc_norm_stderr\": 0.029445175328199586\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6736111111111112,\n\
\ \"acc_stderr\": 0.03921067198982266,\n \"acc_norm\": 0.6736111111111112,\n\
\ \"acc_norm_stderr\": 0.03921067198982266\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\"\
: 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n\
\ \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n\
\ \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5148936170212766,\n \"acc_stderr\": 0.03267151848924777,\n\
\ \"acc_norm\": 0.5148936170212766,\n \"acc_norm_stderr\": 0.03267151848924777\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n\
\ \"acc_stderr\": 0.04644602091222318,\n \"acc_norm\": 0.42105263157894735,\n\
\ \"acc_norm_stderr\": 0.04644602091222318\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.04166567577101579,\n\
\ \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.04166567577101579\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3835978835978836,\n \"acc_stderr\": 0.025043757318520193,\n \"\
acc_norm\": 0.3835978835978836,\n \"acc_norm_stderr\": 0.025043757318520193\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7419354838709677,\n\
\ \"acc_stderr\": 0.024892469172462833,\n \"acc_norm\": 0.7419354838709677,\n\
\ \"acc_norm_stderr\": 0.024892469172462833\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n\
\ \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\"\
: 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7333333333333333,\n \"acc_stderr\": 0.03453131801885417,\n\
\ \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.03453131801885417\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7525252525252525,\n \"acc_stderr\": 0.030746300742124498,\n \"\
acc_norm\": 0.7525252525252525,\n \"acc_norm_stderr\": 0.030746300742124498\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8186528497409327,\n \"acc_stderr\": 0.02780703236068609,\n\
\ \"acc_norm\": 0.8186528497409327,\n \"acc_norm_stderr\": 0.02780703236068609\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6153846153846154,\n \"acc_stderr\": 0.02466674491518721,\n \
\ \"acc_norm\": 0.6153846153846154,\n \"acc_norm_stderr\": 0.02466674491518721\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.03077805742293167,\n \
\ \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.03077805742293167\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\"\
: 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8110091743119267,\n\
\ \"acc_stderr\": 0.016785481159203624,\n \"acc_norm\": 0.8110091743119267,\n\
\ \"acc_norm_stderr\": 0.016785481159203624\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.5416666666666666,\n \"acc_stderr\": 0.033981108902946366,\n\
\ \"acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.033981108902946366\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.75,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.7172995780590717,\n \"acc_stderr\": 0.029312814153955934,\n\
\ \"acc_norm\": 0.7172995780590717,\n \"acc_norm_stderr\": 0.029312814153955934\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6188340807174888,\n\
\ \"acc_stderr\": 0.03259625118416827,\n \"acc_norm\": 0.6188340807174888,\n\
\ \"acc_norm_stderr\": 0.03259625118416827\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6641221374045801,\n \"acc_stderr\": 0.041423137719966634,\n\
\ \"acc_norm\": 0.6641221374045801,\n \"acc_norm_stderr\": 0.041423137719966634\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7272727272727273,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6687116564417178,\n \"acc_stderr\": 0.03697983910025588,\n\
\ \"acc_norm\": 0.6687116564417178,\n \"acc_norm_stderr\": 0.03697983910025588\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.36607142857142855,\n\
\ \"acc_stderr\": 0.0457237235873743,\n \"acc_norm\": 0.36607142857142855,\n\
\ \"acc_norm_stderr\": 0.0457237235873743\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6796116504854369,\n \"acc_stderr\": 0.04620284082280041,\n\
\ \"acc_norm\": 0.6796116504854369,\n \"acc_norm_stderr\": 0.04620284082280041\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n\
\ \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n\
\ \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7637292464878672,\n\
\ \"acc_stderr\": 0.015190473717037498,\n \"acc_norm\": 0.7637292464878672,\n\
\ \"acc_norm_stderr\": 0.015190473717037498\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.025416003773165552,\n\
\ \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.025416003773165552\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30726256983240224,\n\
\ \"acc_stderr\": 0.015430158846469606,\n \"acc_norm\": 0.30726256983240224,\n\
\ \"acc_norm_stderr\": 0.015430158846469606\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6405228758169934,\n \"acc_stderr\": 0.027475969910660952,\n\
\ \"acc_norm\": 0.6405228758169934,\n \"acc_norm_stderr\": 0.027475969910660952\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6688102893890675,\n\
\ \"acc_stderr\": 0.02673062072800491,\n \"acc_norm\": 0.6688102893890675,\n\
\ \"acc_norm_stderr\": 0.02673062072800491\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6358024691358025,\n \"acc_stderr\": 0.026774929899722334,\n\
\ \"acc_norm\": 0.6358024691358025,\n \"acc_norm_stderr\": 0.026774929899722334\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41460234680573665,\n\
\ \"acc_stderr\": 0.012582597058908284,\n \"acc_norm\": 0.41460234680573665,\n\
\ \"acc_norm_stderr\": 0.012582597058908284\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.028888193103988633,\n\
\ \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.028888193103988633\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6111111111111112,\n \"acc_stderr\": 0.019722058939618068,\n \
\ \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.019722058939618068\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\
\ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\
\ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.689795918367347,\n \"acc_stderr\": 0.029613459872484378,\n\
\ \"acc_norm\": 0.689795918367347,\n \"acc_norm_stderr\": 0.029613459872484378\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7313432835820896,\n\
\ \"acc_stderr\": 0.03134328358208955,\n \"acc_norm\": 0.7313432835820896,\n\
\ \"acc_norm_stderr\": 0.03134328358208955\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \
\ \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n\
\ \"acc_stderr\": 0.03891364495835821,\n \"acc_norm\": 0.4879518072289157,\n\
\ \"acc_norm_stderr\": 0.03891364495835821\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n\
\ \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.45532435740514077,\n\
\ \"mc1_stderr\": 0.017433490102538765,\n \"mc2\": 0.6330887790426952,\n\
\ \"mc2_stderr\": 0.01528797501626636\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7790055248618785,\n \"acc_stderr\": 0.011661223637643412\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.25473843821076575,\n \
\ \"acc_stderr\": 0.012001731232879126\n }\n}\n```"
repo_url: https://huggingface.co/vicgalle/zephyr-7b-truthy
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|arc:challenge|25_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|gsm8k|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hellaswag|10_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T14-59-44.699643.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-10T14-59-44.699643.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- '**/details_harness|winogrande|5_2024-02-10T14-59-44.699643.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-10T14-59-44.699643.parquet'
- config_name: results
data_files:
- split: 2024_02_10T14_59_44.699643
path:
- results_2024-02-10T14-59-44.699643.parquet
- split: latest
path:
- results_2024-02-10T14-59-44.699643.parquet
---
# Dataset Card for Evaluation run of vicgalle/zephyr-7b-truthy
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [vicgalle/zephyr-7b-truthy](https://huggingface.co/vicgalle/zephyr-7b-truthy) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_vicgalle__zephyr-7b-truthy",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-10T14:59:44.699643](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__zephyr-7b-truthy/blob/main/results_2024-02-10T14-59-44.699643.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its own configuration, under the "latest" split for that eval):
```python
{
"all": {
"acc": 0.593174337288042,
"acc_stderr": 0.033460408309810855,
"acc_norm": 0.5997217687076803,
"acc_norm_stderr": 0.034170774358741766,
"mc1": 0.45532435740514077,
"mc1_stderr": 0.017433490102538765,
"mc2": 0.6330887790426952,
"mc2_stderr": 0.01528797501626636
},
"harness|arc:challenge|25": {
"acc": 0.5793515358361775,
"acc_stderr": 0.0144262112525084,
"acc_norm": 0.6075085324232082,
"acc_norm_stderr": 0.014269634635670717
},
"harness|hellaswag|10": {
"acc": 0.6409081856203943,
"acc_stderr": 0.004787537385153002,
"acc_norm": 0.8464449312885879,
"acc_norm_stderr": 0.0035978491398150577
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542129,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542129
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5723684210526315,
"acc_stderr": 0.04026097083296563,
"acc_norm": 0.5723684210526315,
"acc_norm_stderr": 0.04026097083296563
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.55,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6452830188679245,
"acc_stderr": 0.029445175328199586,
"acc_norm": 0.6452830188679245,
"acc_norm_stderr": 0.029445175328199586
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6736111111111112,
"acc_stderr": 0.03921067198982266,
"acc_norm": 0.6736111111111112,
"acc_norm_stderr": 0.03921067198982266
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5148936170212766,
"acc_stderr": 0.03267151848924777,
"acc_norm": 0.5148936170212766,
"acc_norm_stderr": 0.03267151848924777
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.04644602091222318,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.04644602091222318
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.503448275862069,
"acc_stderr": 0.04166567577101579,
"acc_norm": 0.503448275862069,
"acc_norm_stderr": 0.04166567577101579
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3835978835978836,
"acc_stderr": 0.025043757318520193,
"acc_norm": 0.3835978835978836,
"acc_norm_stderr": 0.025043757318520193
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.024892469172462833,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.024892469172462833
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4876847290640394,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.4876847290640394,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.03453131801885417,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.03453131801885417
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7525252525252525,
"acc_stderr": 0.030746300742124498,
"acc_norm": 0.7525252525252525,
"acc_norm_stderr": 0.030746300742124498
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8186528497409327,
"acc_stderr": 0.02780703236068609,
"acc_norm": 0.8186528497409327,
"acc_norm_stderr": 0.02780703236068609
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6153846153846154,
"acc_stderr": 0.02466674491518721,
"acc_norm": 0.6153846153846154,
"acc_norm_stderr": 0.02466674491518721
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131143,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131143
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.03077805742293167,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.03077805742293167
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8110091743119267,
"acc_stderr": 0.016785481159203624,
"acc_norm": 0.8110091743119267,
"acc_norm_stderr": 0.016785481159203624
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.033981108902946366,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.033981108902946366
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.75,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7172995780590717,
"acc_stderr": 0.029312814153955934,
"acc_norm": 0.7172995780590717,
"acc_norm_stderr": 0.029312814153955934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6188340807174888,
"acc_stderr": 0.03259625118416827,
"acc_norm": 0.6188340807174888,
"acc_norm_stderr": 0.03259625118416827
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6641221374045801,
"acc_stderr": 0.041423137719966634,
"acc_norm": 0.6641221374045801,
"acc_norm_stderr": 0.041423137719966634
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6687116564417178,
"acc_stderr": 0.03697983910025588,
"acc_norm": 0.6687116564417178,
"acc_norm_stderr": 0.03697983910025588
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.36607142857142855,
"acc_stderr": 0.0457237235873743,
"acc_norm": 0.36607142857142855,
"acc_norm_stderr": 0.0457237235873743
},
"harness|hendrycksTest-management|5": {
"acc": 0.6796116504854369,
"acc_stderr": 0.04620284082280041,
"acc_norm": 0.6796116504854369,
"acc_norm_stderr": 0.04620284082280041
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.0230866350868414,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.0230866350868414
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7637292464878672,
"acc_stderr": 0.015190473717037498,
"acc_norm": 0.7637292464878672,
"acc_norm_stderr": 0.015190473717037498
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.025416003773165552,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.025416003773165552
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.30726256983240224,
"acc_stderr": 0.015430158846469606,
"acc_norm": 0.30726256983240224,
"acc_norm_stderr": 0.015430158846469606
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6405228758169934,
"acc_stderr": 0.027475969910660952,
"acc_norm": 0.6405228758169934,
"acc_norm_stderr": 0.027475969910660952
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6688102893890675,
"acc_stderr": 0.02673062072800491,
"acc_norm": 0.6688102893890675,
"acc_norm_stderr": 0.02673062072800491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6358024691358025,
"acc_stderr": 0.026774929899722334,
"acc_norm": 0.6358024691358025,
"acc_norm_stderr": 0.026774929899722334
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.41460234680573665,
"acc_stderr": 0.012582597058908284,
"acc_norm": 0.41460234680573665,
"acc_norm_stderr": 0.012582597058908284
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6544117647058824,
"acc_stderr": 0.028888193103988633,
"acc_norm": 0.6544117647058824,
"acc_norm_stderr": 0.028888193103988633
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.019722058939618068,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.019722058939618068
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.689795918367347,
"acc_stderr": 0.029613459872484378,
"acc_norm": 0.689795918367347,
"acc_norm_stderr": 0.029613459872484378
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7313432835820896,
"acc_stderr": 0.03134328358208955,
"acc_norm": 0.7313432835820896,
"acc_norm_stderr": 0.03134328358208955
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4879518072289157,
"acc_stderr": 0.03891364495835821,
"acc_norm": 0.4879518072289157,
"acc_norm_stderr": 0.03891364495835821
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.45532435740514077,
"mc1_stderr": 0.017433490102538765,
"mc2": 0.6330887790426952,
"mc2_stderr": 0.01528797501626636
},
"harness|winogrande|5": {
"acc": 0.7790055248618785,
"acc_stderr": 0.011661223637643412
},
"harness|gsm8k|5": {
"acc": 0.25473843821076575,
"acc_stderr": 0.012001731232879126
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Ruth-Ann/jampatoisnli | ---
annotations_creators:
- expert-generated
language:
- jam
language_creators:
- expert-generated
- found
license:
- other
multilinguality:
- monolingual
- other-english-based-creole
pretty_name: JamPatoisNLI
size_categories:
- n<1K
source_datasets:
- original
tags:
- creole
- low-resource-language
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for JamPatoisNLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** jampatoisnli.github.io
- **Repository:** https://github.com/ruth-ann/jampatoisnli
- **Paper:** https://arxiv.org/abs/2212.03419
- **Point of Contact:** Ruth-Ann Armstrong (armstrongruthanna@gmail.com)
### Dataset Summary
JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois.
Many of the most-spoken low-resource languages are creoles. These languages commonly have a lexicon derived from
a major world language and a distinctive grammar reflecting the languages of the original speakers and the process
of language birth by creolization. This gives them a distinctive place in exploring the effectiveness of transfer
from large monolingual or multilingual pretrained models.
### Supported Tasks and Leaderboards
Natural language inference
### Languages
Jamaican Patois
### Data Fields
premise, hypothesis, label
### Data Splits
Train: 250
Val: 200
Test: 200
### Dataset Creation and Annotations
Premise collection:
97% of examples are from Twitter; the remainder were pulled from literature and an online cultural website.
Hypothesis construction:
For each premise, a hypothesis was written by a native speaker (the first author) so that the pair's label would be entailment (E), neutral (N) or contradiction (C).
Label validation:
A random sample of 100 sentence pairs was double annotated by fluent speakers.
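Agreement on a double-annotated sample like this is commonly summarized with Cohen's kappa; a minimal standard-library sketch (the label sequences below are illustrative, not the actual annotations):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of items on which both annotators agree.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: expected overlap given each annotator's label distribution.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["E", "N", "C", "E", "E", "N"]
ann2 = ["E", "N", "C", "E", "N", "N"]
print(round(cohens_kappa(ann1, ann2), 3))  # 0.739
```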
### Social Impact of Dataset
JamPatoisNLI is a low-resource language dataset in an English-based Creole spoken in the Caribbean,
Jamaican Patois. The creation of the dataset contributes to expanding the scope of NLP research
to under-explored languages across the world.
### Dataset Curators
[@ruth-ann](https://github.com/ruth-ann)
### Citation Information
```bibtex
@misc{https://doi.org/10.48550/arxiv.2212.03419,
  doi = {10.48550/ARXIV.2212.03419},
  url = {https://arxiv.org/abs/2212.03419},
  author = {Armstrong, Ruth-Ann and Hewitt, John and Manning, Christopher},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, I.2.7},
  title = {JamPatoisNLI: A Jamaican Patois Natural Language Inference Dataset},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
Thanks to Prof. Christopher Manning and John Hewitt for their contributions, guidance, facilitation and support related to the creation of this dataset.
|
moshew/my_raft | ---
benchmark: raft
type: prediction
submission_name: SetFit300
---
# RAFT submissions for my_raft
## Submitting to the leaderboard
To make a submission to the [leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard), there are three main steps:
1. Generate predictions on the unlabeled test set of each task
2. Validate the predictions are compatible with the evaluation framework
3. Push the predictions to the Hub!
See the instructions below for more details.
### Rules
1. To prevent overfitting to the public leaderboard, we only evaluate **one submission per week**. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week.
2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.
3. Use of unlabeled test data is allowed, as it is always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.
4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.
### Submission file format
For each task in RAFT, you should create a CSV file called `predictions.csv` with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:
* ID (int)
* Label (string)
See the dummy predictions in the `data` folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
```python
from pathlib import Path
import pandas as pd
from collections import Counter
from datasets import load_dataset, get_dataset_config_names
tasks = get_dataset_config_names("ought/raft")
for task in tasks:
# Load dataset
raft_subset = load_dataset("ought/raft", task)
# Compute majority class over training set
counter = Counter(raft_subset["train"]["Label"])
majority_class = counter.most_common(1)[0][0]
# Load predictions file
preds = pd.read_csv(f"data/{task}/predictions.csv")
# Convert label IDs to label names
preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class)
# Save predictions
preds.to_csv(f"data/{task}/predictions.csv", index=False)
```
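Before running the official validator described below, a quick local sanity check of the two-column format can be done with the standard library (a sketch; the `cli.py validate` command remains the authoritative check):

```python
import csv
import io

def check_predictions(csv_text: str) -> bool:
    """Return True if the CSV has exactly the columns ID (int) and Label (string)."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader, None)
    if header != ["ID", "Label"]:
        return False
    for row in reader:
        # Every data row must have exactly two fields, the first an integer ID.
        if len(row) != 2 or not row[0].lstrip("-").isdigit():
            return False
    return True

sample = "ID,Label\n0,Unlabeled\n1,complaint\n"
print(check_predictions(sample))  # True
```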
As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:
```
data
├── ade_corpus_v2
│ ├── predictions.csv
│ └── task.json
├── banking_77
│ ├── predictions.csv
│ └── task.json
├── neurips_impact_statement_risks
│ ├── predictions.csv
│ └── task.json
├── one_stop_english
│ ├── predictions.csv
│ └── task.json
├── overruling
│ ├── predictions.csv
│ └── task.json
├── semiconductor_org_types
│ ├── predictions.csv
│ └── task.json
├── systematic_review_inclusion
│ ├── predictions.csv
│ └── task.json
├── tai_safety_research
│ ├── predictions.csv
│ └── task.json
├── terms_of_service
│ ├── predictions.csv
│ └── task.json
├── tweet_eval_hate
│ ├── predictions.csv
│ └── task.json
└── twitter_complaints
├── predictions.csv
└── task.json
```
### Validate your submission
To ensure that your submission files are correctly formatted, run the following command from the root of the repository:
```
python cli.py validate
```
If everything is correct, you should see the following message:
```
All submission files validated! ✨ 🚀 ✨
Now you can make a submission 🤗
```
### Push your submission to the Hugging Face Hub!
The final step is to commit your files and push them to the Hub:
```
python cli.py submit
```
If there are no errors, you should see the following message:
```
Submission successful! 🎉 🥳 🎉
Your submission will be evaluated on Sunday 05 September 2021 ⏳
```
where the evaluation is run every Sunday and your results will be visible on the leaderboard. |
japanese-asr/whisper_transcriptions.reazonspeech.all_21 | ---
dataset_info:
config_name: all
features:
- name: name
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: whisper_transcript
sequence: int64
splits:
- name: train
num_bytes: 30393392743.0
num_examples: 267807
download_size: 30157077157
dataset_size: 30393392743.0
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
---
|
IIC/livingner3 | ---
language:
- es
tags:
- biomedical
- clinical
- spanish
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-label-classification
license:
- cc-by-4.0
pretty_name: LivingNER3
train-eval-index:
- task: text-classification
task_id: multi_label_classification
splits:
train_split: train
eval_split: test
metrics:
- type: f1
name: f1
---
# LivingNER
This is a third party reupload of the [LivingNER](https://temu.bsc.es/livingner/) task 3 dataset.
It only contains the task 3 for the Spanish language. It does not include the multilingual data nor the background data.
This dataset is part of a benchmark in the paper [TODO](TODO).
### Citation Information
```bibtex
TODO
```
### Citation Information of the original dataset
```bibtex
@article{amiranda2022nlp,
title={Mention detection, normalization \& classification of species, pathogens, humans and food in clinical documents: Overview of LivingNER shared task and resources},
author={Miranda-Escalada, Antonio and Farr{\'e}-Maduell, Eul{\`a}lia and Lima-L{\'o}pez, Salvador and Estrada, Darryl and Gasc{\'o}, Luis and Krallinger, Martin},
journal = {Procesamiento del Lenguaje Natural},
year={2022}
}
```
|
magnifi/contextual-new-ontology-v2-contextual-lowercase | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: uid
dtype: string
- name: user_text
dtype: string
- name: true_intent
dtype: string
- name: completion
dtype: string
- name: Source
dtype: string
- name: chat_history
dtype: string
- name: contextual
dtype: bool
- name: synthetic
dtype: bool
- name: in_regression_test
dtype: bool
splits:
- name: train
num_bytes: 2431425
num_examples: 4165
- name: validation
num_bytes: 294522
num_examples: 496
download_size: 779849
dataset_size: 2725947
---
# Dataset Card for "contextual-new-ontology-v2-contextual-lowercase"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bibekyess/layout-detector-flagged-samples | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ygormota10/vozpatrick | ---
license: openrail
---
|
Fsoft-AIC/the-vault-class | ---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
features:
- name: identifier
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
dtype: string
- name: original_docstring
dtype: string
- name: comment
dtype: string
- name: docstring_tokens
dtype: string
- name: docstring
dtype: string
- name: original_string
dtype: string
pretty_name: The Vault Function
viewer: true
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/
<p align="center">
<img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>
<div align="center">
# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
</div>
## Dataset Summary
The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset.
We provide The Vault which contains code snippets from 10 popular programming languages such as Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. This dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault, such as *code summarization*, *text-to-code generation* and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
*Note: C and Go are not included in this repo because these languages do not have traditional classes.*
## Dataset Structure
### Data Instances
```
{
"hexsha": "78b961a6673ec1e12f8d95c33ef081f75561a87c",
"repo": "AIS-Bonn/sl-cutscenes",
"path": "sl_cutscenes/object_models.py",
"license": [
"MIT"
],
"language": "Python",
"identifier": "MeshLoader",
"original_docstring": "\n Class to load the meshes for the objects in a scene.\n ",
"docstring": "Class to load the meshes for the objects in a scene.",
"docstring_tokens": [
"Class",
"to",
"load",
"the",
"meshes",
"for",
"the",
"objects",
"in",
"a",
"scene",
"."
],
"code": "class MeshLoader:\n \"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"\n\n def __init__(self):\n \"\"\"Module initializer\"\"\"\n self.base_dir = CONSTANTS.MESH_BASE_DIR\n self.text_dir = CONSTANTS.TEXT_BASE_DIR\n self.reset()\n\n def reset(self):\n self.loaded_meshes = []\n\n def get_meshes(self):\n \"\"\" \"\"\"\n extract_singular = lambda x: x[0] if len(x) == 1 else x\n return [extract_singular(item) for item in self.loaded_meshes]\n\n def load_meshes(self, obj_info: List[object_info.ObjectInfo], **kwargs):\n \"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"\n paths = []\n for obj in obj_info:\n path = self.text_dir if obj.name.endswith(\"_floor\") or obj.name.endswith(\"_wall\") else self.base_dir\n paths.append((path / obj.mesh_fp).resolve())\n scales = [obj.scale for obj in obj_info]\n class_ids = [obj.class_id for obj in obj_info]\n mod_scales = kwargs.get(\"mod_scale\", [1.0] * len(scales))\n scales = [s * ms for (s, ms) in zip(scales, mod_scales)]\n flags = [mesh_flags(obj) for obj in obj_info]\n meshes = sl.Mesh.load_threaded(filenames=paths, flags=flags)\n\n # Setup class IDs\n for _, (mesh, scale, class_id) in enumerate(zip(meshes, scales, class_ids)):\n pt = torch.eye(4)\n pt[:3, :3] *= scale\n mesh.pretransform = pt\n mesh.class_index = class_id\n\n info_mesh_tuples = list(zip(obj_info, meshes))\n self.loaded_meshes.append(info_mesh_tuples)",
"code_tokens": [
"class",
"MeshLoader",
":",
"def",
"__init__",
"(",
"self",
")",
":",
"\"\"\"Module initializer\"\"\"",
"self",
".",
"base_dir",
"=",
"CONSTANTS",
".",
"MESH_BASE_DIR",
"self",
".",
"text_dir",
"=",
"CONSTANTS",
".",
"TEXT_BASE_DIR",
"self",
".",
"reset",
"(",
")",
"def",
"reset",
"(",
"self",
")",
":",
"self",
".",
"loaded_meshes",
"=",
"[",
"]",
"def",
"get_meshes",
"(",
"self",
")",
":",
"\"\"\" \"\"\"",
"extract_singular",
"=",
"lambda",
"x",
":",
"x",
"[",
"0",
"]",
"if",
"len",
"(",
"x",
")",
"==",
"1",
"else",
"x",
"return",
"[",
"extract_singular",
"(",
"item",
")",
"for",
"item",
"in",
"self",
".",
"loaded_meshes",
"]",
"def",
"load_meshes",
"(",
"self",
",",
"obj_info",
":",
"List",
"[",
"object_info",
".",
"ObjectInfo",
"]",
",",
"**",
"kwargs",
")",
":",
"\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
"paths",
"=",
"[",
"]",
"for",
"obj",
"in",
"obj_info",
":",
"path",
"=",
"self",
".",
"text_dir",
"if",
"obj",
".",
"name",
".",
"endswith",
"(",
"\"_floor\"",
")",
"or",
"obj",
".",
"name",
".",
"endswith",
"(",
"\"_wall\"",
")",
"else",
"self",
".",
"base_dir",
"paths",
".",
"append",
"(",
"(",
"path",
"/",
"obj",
".",
"mesh_fp",
")",
".",
"resolve",
"(",
")",
")",
"scales",
"=",
"[",
"obj",
".",
"scale",
"for",
"obj",
"in",
"obj_info",
"]",
"class_ids",
"=",
"[",
"obj",
".",
"class_id",
"for",
"obj",
"in",
"obj_info",
"]",
"mod_scales",
"=",
"kwargs",
".",
"get",
"(",
"\"mod_scale\"",
",",
"[",
"1.0",
"]",
"*",
"len",
"(",
"scales",
")",
")",
"scales",
"=",
"[",
"s",
"*",
"ms",
"for",
"(",
"s",
",",
"ms",
")",
"in",
"zip",
"(",
"scales",
",",
"mod_scales",
")",
"]",
"flags",
"=",
"[",
"mesh_flags",
"(",
"obj",
")",
"for",
"obj",
"in",
"obj_info",
"]",
"meshes",
"=",
"sl",
".",
"Mesh",
".",
"load_threaded",
"(",
"filenames",
"=",
"paths",
",",
"flags",
"=",
"flags",
")",
"for",
"_",
",",
"(",
"mesh",
",",
"scale",
",",
"class_id",
")",
"in",
"enumerate",
"(",
"zip",
"(",
"meshes",
",",
"scales",
",",
"class_ids",
")",
")",
":",
"pt",
"=",
"torch",
".",
"eye",
"(",
"4",
")",
"pt",
"[",
":",
"3",
",",
":",
"3",
"]",
"*=",
"scale",
"mesh",
".",
"pretransform",
"=",
"pt",
"mesh",
".",
"class_index",
"=",
"class_id",
"info_mesh_tuples",
"=",
"list",
"(",
"zip",
"(",
"obj_info",
",",
"meshes",
")",
")",
"self",
".",
"loaded_meshes",
".",
"append",
"(",
"info_mesh_tuples",
")"
],
"short_docstring": "Class to load the meshes for the objects in a scene.",
"short_docstring_tokens": [
"Class",
"to",
"load",
"the",
"meshes",
"for",
"the",
"objects",
"in",
"a",
"scene",
"."
],
"comment": [
"\"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"",
"\"\"\"Module initializer\"\"\"",
"\"\"\" \"\"\"",
"\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
"# Setup class IDs"
],
"parameters": [],
"docstring_params": {
"returns": [],
"raises": [],
"params": [],
"outlier_params": [],
"others": []
}
}
```
### Data Fields
Data fields for function level:
- **hexsha** (string): the unique git hash of file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses in the repo
- **language** (string): the programming language
- **identifier** (string): the function or method name
- **original_string** (string): original version of function/class node
- **original_docstring** (string): the raw string before tokenization or parsing
- **code** (string): the part of the original that is code
- **code_tokens** (list): tokenized version of `code`
- **short_docstring** (string): short, brief summarization (first line of the docstring)
- **short_docstring_tokens** (list): tokenized version of `short_docstring`
- **docstring** (string): the top-level comment or docstring (the docstring without parameter docs, return, exception fields, etc.)
- **docstring_tokens** (list): tokenized version of docstring
- **comment** (list): list of comments (line) inside the function/class
- **parameters** (list): List of parameters and its type (type can be None)
- **docstring_params** (dict): Dictionary of the parsed information from docstring
See [here](https://github.com/FSoft-AI4Code/TheVault/blob/main/data/README.md) for more details and examples.
### Data Splits
In this repo, the class-level data is not split; it is all contained in the train set.
## Dataset Statistics
|Language | Number of samples |
|:-----------|------------------------:|
|Python | 422,187 |
|Java | 4,872,485 |
|JavaScript | 291,479 |
|PHP | 1,173,916 |
|C# | 1,437,800 |
|C++ | 174,370 |
|Ruby | 353,859 |
|Rust | 93,311 |
|C | - |
|Go | - |
|TOTAL | **9,121,300** |
## Usage
You can load The Vault dataset using the `datasets` library (`pip install datasets`):
```python
from datasets import load_dataset
# Load full class level dataset
dataset = load_dataset("Fsoft-AIC/the-vault-class")
# specific language (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-class", languages=['Python'])
# dataset streaming
data = load_dataset("Fsoft-AIC/the-vault-class", streaming= True)
for sample in iter(data['train']):
print(sample)
```
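Since both the full and streaming modes yield plain dictionaries, samples can also be filtered locally by any of the fields documented above; a small sketch with stand-in records (no download involved, field names as in the Data Fields section):

```python
def by_language(samples, languages):
    """Yield only samples whose 'language' field is in the given set."""
    wanted = set(languages)
    for sample in samples:
        if sample["language"] in wanted:
            yield sample

# Stand-in records with the same field names as real Vault samples.
records = [
    {"identifier": "MeshLoader", "language": "Python"},
    {"identifier": "Mesher", "language": "Rust"},
]
print([s["identifier"] for s in by_language(records, ["Python"])])  # ['MeshLoader']
```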
A backup of the dataset can be downloaded from Azure blob storage. See [Download The Vault from Azure blob storage](https://github.com/FSoft-AI4Code/TheVault#download-via-link).
## Additional information
### Licensing Information
MIT License
### Citation Information
```
@article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
}
```
### Contributions
This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code). |
zicsx/mC4-hindi | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 137146387873
num_examples: 18507273
- name: validation
num_bytes: 138079468
num_examples: 18392
download_size: 4087107539
dataset_size: 137284467341
license: apache-2.0
task_categories:
- text-generation
language:
- hi
---
# Dataset Card for "mC4-hindi"
This dataset is a subset of the mC4 dataset, which is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. It contains natural text in 101 languages, including Hindi. This dataset is specifically focused on Hindi text, and contains a variety of different types of text, including news articles, blog posts, and social media posts.
This dataset is intended to be used for training and evaluating natural language processing models for Hindi. It can be used for a variety of tasks, such as pretraining language models, machine translation, text summarization, and question-answering.
**Data format**
The dataset is in JSONL format. Each line in the file contains a JSON object with the following fields:
* `text`: the text of the document.
* `timestamp`: the date and time when the document was crawled.
* `url`: the URL of the document.
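A single line in this format can be parsed with the standard library; the record below is illustrative, not drawn from the corpus:

```python
import json

# One JSONL line with the three documented fields (made-up values).
line = '{"text": "नमस्ते दुनिया", "timestamp": "2020-05-07T12:00:00Z", "url": "https://example.com/article"}'
record = json.loads(line)
print(record["text"])  # नमस्ते दुनिया
print(record["url"])   # https://example.com/article
```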
**Data splits**
The dataset is split into two parts: train and validation. The train split contains about 18.5 million examples, and the validation split contains about 18 thousand examples.
**Usage**
To use the dataset, you can load it into a Hugging Face Dataset object using the following code:
```python
import datasets
dataset = datasets.load_dataset("zicsx/mC4-hindi")
```
Once you have loaded the dataset, you can access the train and validation splits using the following code:
```python
train_dataset = dataset["train"]
validation_dataset = dataset["validation"]
```
You can then use the dataset to train and evaluate your natural language processing model.
|
DynamicSuperb/ReverberationDetection_LJSpeech_RirsNoises-MediumRoom | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 25740264.93129771
num_examples: 200
download_size: 25640552
dataset_size: 25740264.93129771
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "ReverberationDetectionmediumroom_LJSpeechRirsNoises"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FaalSa/dbscan4 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 63338968
num_examples: 794
- name: validation
num_bytes: 63720088
num_examples: 794
- name: test
num_bytes: 64101208
num_examples: 794
download_size: 271690
dataset_size: 191160264
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
javilonso/rest23_sentiment_data_v3_oversampling | ---
dataset_info:
features:
- name: Title
dtype: string
- name: Review
dtype: string
- name: Polarity
dtype: int64
- name: Country
dtype: int64
- name: Type
dtype: int64
- name: Title_Review
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 246698863.02163294
num_examples: 287936
- name: test
num_bytes: 27411079.978367075
num_examples: 31993
download_size: 170968852
dataset_size: 274109943.0
---
# Dataset Card for "rest23_sentiment_data_v3_oversampling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nzham/nzham | ---
size_categories:
- 1M<n<10M
--- |
open-llm-leaderboard/details_adamo1139__Yi-34B-200K-AEZAKMI-RAW-2301 | ---
pretty_name: Evaluation run of adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301](https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 63 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
  \ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_adamo1139__Yi-34B-200K-AEZAKMI-RAW-2301\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-26T02:39:52.943697](https://huggingface.co/datasets/open-llm-leaderboard/details_adamo1139__Yi-34B-200K-AEZAKMI-RAW-2301/blob/main/results_2024-01-26T02-39-52.943697.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7432390573583032,\n\
\ \"acc_stderr\": 0.028856954294040817,\n \"acc_norm\": 0.749080934110935,\n\
\ \"acc_norm_stderr\": 0.02939165201523678,\n \"mc1\": 0.40024479804161567,\n\
\ \"mc1_stderr\": 0.017151605555749138,\n \"mc2\": 0.568876394941753,\n\
\ \"mc2_stderr\": 0.015032807114194642\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6160409556313993,\n \"acc_stderr\": 0.01421244498065189,\n\
\ \"acc_norm\": 0.6604095563139932,\n \"acc_norm_stderr\": 0.013839039762820169\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6509659430392352,\n\
\ \"acc_stderr\": 0.0047569058196499725,\n \"acc_norm\": 0.847042421828321,\n\
\ \"acc_norm_stderr\": 0.0035921097436286183\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6888888888888889,\n\
\ \"acc_stderr\": 0.03999262876617722,\n \"acc_norm\": 0.6888888888888889,\n\
\ \"acc_norm_stderr\": 0.03999262876617722\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.875,\n \"acc_stderr\": 0.026913523521537846,\n \
\ \"acc_norm\": 0.875,\n \"acc_norm_stderr\": 0.026913523521537846\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.024618298195866514,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.024618298195866514\n \
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8819444444444444,\n\
\ \"acc_stderr\": 0.026983346503309375,\n \"acc_norm\": 0.8819444444444444,\n\
\ \"acc_norm_stderr\": 0.026983346503309375\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.59,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\": 0.59,\n\
\ \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7167630057803468,\n\
\ \"acc_stderr\": 0.034355680560478746,\n \"acc_norm\": 0.7167630057803468,\n\
\ \"acc_norm_stderr\": 0.034355680560478746\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5098039215686274,\n \"acc_stderr\": 0.04974229460422817,\n\
\ \"acc_norm\": 0.5098039215686274,\n \"acc_norm_stderr\": 0.04974229460422817\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n\
\ \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7489361702127659,\n \"acc_stderr\": 0.02834696377716245,\n\
\ \"acc_norm\": 0.7489361702127659,\n \"acc_norm_stderr\": 0.02834696377716245\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.543859649122807,\n\
\ \"acc_stderr\": 0.046854730419077895,\n \"acc_norm\": 0.543859649122807,\n\
\ \"acc_norm_stderr\": 0.046854730419077895\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.7172413793103448,\n \"acc_stderr\": 0.037528339580033376,\n\
\ \"acc_norm\": 0.7172413793103448,\n \"acc_norm_stderr\": 0.037528339580033376\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.6613756613756614,\n \"acc_stderr\": 0.024373197867983053,\n \"\
acc_norm\": 0.6613756613756614,\n \"acc_norm_stderr\": 0.024373197867983053\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5396825396825397,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.5396825396825397,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8806451612903226,\n \"acc_stderr\": 0.01844341132531541,\n \"\
acc_norm\": 0.8806451612903226,\n \"acc_norm_stderr\": 0.01844341132531541\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.6650246305418719,\n \"acc_stderr\": 0.033208527423483104,\n \"\
acc_norm\": 0.6650246305418719,\n \"acc_norm_stderr\": 0.033208527423483104\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\"\
: 0.76,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066584,\n\
\ \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066584\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9292929292929293,\n \"acc_stderr\": 0.018263105420199488,\n \"\
acc_norm\": 0.9292929292929293,\n \"acc_norm_stderr\": 0.018263105420199488\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9689119170984456,\n \"acc_stderr\": 0.012525310625527029,\n\
\ \"acc_norm\": 0.9689119170984456,\n \"acc_norm_stderr\": 0.012525310625527029\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8025641025641026,\n \"acc_stderr\": 0.02018264696867483,\n \
\ \"acc_norm\": 0.8025641025641026,\n \"acc_norm_stderr\": 0.02018264696867483\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3888888888888889,\n \"acc_stderr\": 0.029723278961476664,\n \
\ \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.029723278961476664\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8361344537815126,\n \"acc_stderr\": 0.02404405494044049,\n \
\ \"acc_norm\": 0.8361344537815126,\n \"acc_norm_stderr\": 0.02404405494044049\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248436,\n \"\
acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248436\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9155963302752294,\n \"acc_stderr\": 0.011918819327334866,\n \"\
acc_norm\": 0.9155963302752294,\n \"acc_norm_stderr\": 0.011918819327334866\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6574074074074074,\n \"acc_stderr\": 0.032365852526021574,\n \"\
acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.032365852526021574\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9068627450980392,\n \"acc_stderr\": 0.020397853969426998,\n \"\
acc_norm\": 0.9068627450980392,\n \"acc_norm_stderr\": 0.020397853969426998\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9029535864978903,\n \"acc_stderr\": 0.019269323025640255,\n \
\ \"acc_norm\": 0.9029535864978903,\n \"acc_norm_stderr\": 0.019269323025640255\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7757847533632287,\n\
\ \"acc_stderr\": 0.027991534258519517,\n \"acc_norm\": 0.7757847533632287,\n\
\ \"acc_norm_stderr\": 0.027991534258519517\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8702290076335878,\n \"acc_stderr\": 0.029473649496907065,\n\
\ \"acc_norm\": 0.8702290076335878,\n \"acc_norm_stderr\": 0.029473649496907065\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8842975206611571,\n \"acc_stderr\": 0.02919980245562281,\n \"\
acc_norm\": 0.8842975206611571,\n \"acc_norm_stderr\": 0.02919980245562281\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8796296296296297,\n\
\ \"acc_stderr\": 0.031457038543062504,\n \"acc_norm\": 0.8796296296296297,\n\
\ \"acc_norm_stderr\": 0.031457038543062504\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8588957055214724,\n \"acc_stderr\": 0.027351605518389752,\n\
\ \"acc_norm\": 0.8588957055214724,\n \"acc_norm_stderr\": 0.027351605518389752\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5267857142857143,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.5267857142857143,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8737864077669902,\n \"acc_stderr\": 0.03288180278808628,\n\
\ \"acc_norm\": 0.8737864077669902,\n \"acc_norm_stderr\": 0.03288180278808628\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n\
\ \"acc_stderr\": 0.01553751426325388,\n \"acc_norm\": 0.9401709401709402,\n\
\ \"acc_norm_stderr\": 0.01553751426325388\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8914431673052363,\n\
\ \"acc_stderr\": 0.011124283175851188,\n \"acc_norm\": 0.8914431673052363,\n\
\ \"acc_norm_stderr\": 0.011124283175851188\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8208092485549133,\n \"acc_stderr\": 0.020647590029679332,\n\
\ \"acc_norm\": 0.8208092485549133,\n \"acc_norm_stderr\": 0.020647590029679332\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7094972067039106,\n\
\ \"acc_stderr\": 0.015183844307206151,\n \"acc_norm\": 0.7094972067039106,\n\
\ \"acc_norm_stderr\": 0.015183844307206151\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.021339479988816024,\n\
\ \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.021339479988816024\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7909967845659164,\n\
\ \"acc_stderr\": 0.02309314039837422,\n \"acc_norm\": 0.7909967845659164,\n\
\ \"acc_norm_stderr\": 0.02309314039837422\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.020736358408060006,\n\
\ \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.020736358408060006\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6170212765957447,\n \"acc_stderr\": 0.028999080904806185,\n \
\ \"acc_norm\": 0.6170212765957447,\n \"acc_norm_stderr\": 0.028999080904806185\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5827900912646675,\n\
\ \"acc_stderr\": 0.012593959992906427,\n \"acc_norm\": 0.5827900912646675,\n\
\ \"acc_norm_stderr\": 0.012593959992906427\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8014705882352942,\n \"acc_stderr\": 0.024231013370541087,\n\
\ \"acc_norm\": 0.8014705882352942,\n \"acc_norm_stderr\": 0.024231013370541087\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8120915032679739,\n \"acc_stderr\": 0.0158035657367767,\n \
\ \"acc_norm\": 0.8120915032679739,\n \"acc_norm_stderr\": 0.0158035657367767\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\
\ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \
\ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8367346938775511,\n \"acc_stderr\": 0.023661699177098608,\n\
\ \"acc_norm\": 0.8367346938775511,\n \"acc_norm_stderr\": 0.023661699177098608\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9154228855721394,\n\
\ \"acc_stderr\": 0.019675343217199173,\n \"acc_norm\": 0.9154228855721394,\n\
\ \"acc_norm_stderr\": 0.019675343217199173\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.02876234912646613,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.02876234912646613\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.038695433234721015,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.038695433234721015\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.02517298435015577,\n\
\ \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.02517298435015577\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.40024479804161567,\n\
\ \"mc1_stderr\": 0.017151605555749138,\n \"mc2\": 0.568876394941753,\n\
\ \"mc2_stderr\": 0.015032807114194642\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8113654301499605,\n \"acc_stderr\": 0.010995172318019808\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5708870356330553,\n \
\ \"acc_stderr\": 0.01363336942564724\n }\n}\n```"
repo_url: https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|arc:challenge|25_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|gsm8k|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hellaswag|10_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T02-39-52.943697.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-26T02-39-52.943697.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- '**/details_harness|winogrande|5_2024-01-26T02-39-52.943697.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-26T02-39-52.943697.parquet'
- config_name: results
data_files:
- split: 2024_01_26T02_39_52.943697
path:
- results_2024-01-26T02-39-52.943697.parquet
- split: latest
path:
- results_2024-01-26T02-39-52.943697.parquet
---
# Dataset Card for Evaluation run of adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301](https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_adamo1139__Yi-34B-200K-AEZAKMI-RAW-2301",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-26T02:39:52.943697](https://huggingface.co/datasets/open-llm-leaderboard/details_adamo1139__Yi-34B-200K-AEZAKMI-RAW-2301/blob/main/results_2024-01-26T02-39-52.943697.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.7432390573583032,
"acc_stderr": 0.028856954294040817,
"acc_norm": 0.749080934110935,
"acc_norm_stderr": 0.02939165201523678,
"mc1": 0.40024479804161567,
"mc1_stderr": 0.017151605555749138,
"mc2": 0.568876394941753,
"mc2_stderr": 0.015032807114194642
},
"harness|arc:challenge|25": {
"acc": 0.6160409556313993,
"acc_stderr": 0.01421244498065189,
"acc_norm": 0.6604095563139932,
"acc_norm_stderr": 0.013839039762820169
},
"harness|hellaswag|10": {
"acc": 0.6509659430392352,
"acc_stderr": 0.0047569058196499725,
"acc_norm": 0.847042421828321,
"acc_norm_stderr": 0.0035921097436286183
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6888888888888889,
"acc_stderr": 0.03999262876617722,
"acc_norm": 0.6888888888888889,
"acc_norm_stderr": 0.03999262876617722
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.875,
"acc_stderr": 0.026913523521537846,
"acc_norm": 0.875,
"acc_norm_stderr": 0.026913523521537846
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8,
"acc_stderr": 0.024618298195866514,
"acc_norm": 0.8,
"acc_norm_stderr": 0.024618298195866514
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8819444444444444,
"acc_stderr": 0.026983346503309375,
"acc_norm": 0.8819444444444444,
"acc_norm_stderr": 0.026983346503309375
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.034355680560478746,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.034355680560478746
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5098039215686274,
"acc_stderr": 0.04974229460422817,
"acc_norm": 0.5098039215686274,
"acc_norm_stderr": 0.04974229460422817
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7489361702127659,
"acc_stderr": 0.02834696377716245,
"acc_norm": 0.7489361702127659,
"acc_norm_stderr": 0.02834696377716245
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.543859649122807,
"acc_stderr": 0.046854730419077895,
"acc_norm": 0.543859649122807,
"acc_norm_stderr": 0.046854730419077895
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7172413793103448,
"acc_stderr": 0.037528339580033376,
"acc_norm": 0.7172413793103448,
"acc_norm_stderr": 0.037528339580033376
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6613756613756614,
"acc_stderr": 0.024373197867983053,
"acc_norm": 0.6613756613756614,
"acc_norm_stderr": 0.024373197867983053
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5396825396825397,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.5396825396825397,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8806451612903226,
"acc_stderr": 0.01844341132531541,
"acc_norm": 0.8806451612903226,
"acc_norm_stderr": 0.01844341132531541
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6650246305418719,
"acc_stderr": 0.033208527423483104,
"acc_norm": 0.6650246305418719,
"acc_norm_stderr": 0.033208527423483104
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9292929292929293,
"acc_stderr": 0.018263105420199488,
"acc_norm": 0.9292929292929293,
"acc_norm_stderr": 0.018263105420199488
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9689119170984456,
"acc_stderr": 0.012525310625527029,
"acc_norm": 0.9689119170984456,
"acc_norm_stderr": 0.012525310625527029
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8025641025641026,
"acc_stderr": 0.02018264696867483,
"acc_norm": 0.8025641025641026,
"acc_norm_stderr": 0.02018264696867483
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.029723278961476664,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.029723278961476664
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8361344537815126,
"acc_stderr": 0.02404405494044049,
"acc_norm": 0.8361344537815126,
"acc_norm_stderr": 0.02404405494044049
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4900662251655629,
"acc_stderr": 0.04081677107248436,
"acc_norm": 0.4900662251655629,
"acc_norm_stderr": 0.04081677107248436
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9155963302752294,
"acc_stderr": 0.011918819327334866,
"acc_norm": 0.9155963302752294,
"acc_norm_stderr": 0.011918819327334866
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.032365852526021574,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.032365852526021574
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9068627450980392,
"acc_stderr": 0.020397853969426998,
"acc_norm": 0.9068627450980392,
"acc_norm_stderr": 0.020397853969426998
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9029535864978903,
"acc_stderr": 0.019269323025640255,
"acc_norm": 0.9029535864978903,
"acc_norm_stderr": 0.019269323025640255
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7757847533632287,
"acc_stderr": 0.027991534258519517,
"acc_norm": 0.7757847533632287,
"acc_norm_stderr": 0.027991534258519517
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8702290076335878,
"acc_stderr": 0.029473649496907065,
"acc_norm": 0.8702290076335878,
"acc_norm_stderr": 0.029473649496907065
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8842975206611571,
"acc_stderr": 0.02919980245562281,
"acc_norm": 0.8842975206611571,
"acc_norm_stderr": 0.02919980245562281
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8796296296296297,
"acc_stderr": 0.031457038543062504,
"acc_norm": 0.8796296296296297,
"acc_norm_stderr": 0.031457038543062504
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8588957055214724,
"acc_stderr": 0.027351605518389752,
"acc_norm": 0.8588957055214724,
"acc_norm_stderr": 0.027351605518389752
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5267857142857143,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.5267857142857143,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.8737864077669902,
"acc_stderr": 0.03288180278808628,
"acc_norm": 0.8737864077669902,
"acc_norm_stderr": 0.03288180278808628
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.01553751426325388,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.01553751426325388
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8914431673052363,
"acc_stderr": 0.011124283175851188,
"acc_norm": 0.8914431673052363,
"acc_norm_stderr": 0.011124283175851188
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8208092485549133,
"acc_stderr": 0.020647590029679332,
"acc_norm": 0.8208092485549133,
"acc_norm_stderr": 0.020647590029679332
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7094972067039106,
"acc_stderr": 0.015183844307206151,
"acc_norm": 0.7094972067039106,
"acc_norm_stderr": 0.015183844307206151
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.021339479988816024,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.021339479988816024
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7909967845659164,
"acc_stderr": 0.02309314039837422,
"acc_norm": 0.7909967845659164,
"acc_norm_stderr": 0.02309314039837422
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.020736358408060006,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.020736358408060006
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6170212765957447,
"acc_stderr": 0.028999080904806185,
"acc_norm": 0.6170212765957447,
"acc_norm_stderr": 0.028999080904806185
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5827900912646675,
"acc_stderr": 0.012593959992906427,
"acc_norm": 0.5827900912646675,
"acc_norm_stderr": 0.012593959992906427
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8014705882352942,
"acc_stderr": 0.024231013370541087,
"acc_norm": 0.8014705882352942,
"acc_norm_stderr": 0.024231013370541087
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8120915032679739,
"acc_stderr": 0.0158035657367767,
"acc_norm": 0.8120915032679739,
"acc_norm_stderr": 0.0158035657367767
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8367346938775511,
"acc_stderr": 0.023661699177098608,
"acc_norm": 0.8367346938775511,
"acc_norm_stderr": 0.023661699177098608
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.9154228855721394,
"acc_stderr": 0.019675343217199173,
"acc_norm": 0.9154228855721394,
"acc_norm_stderr": 0.019675343217199173
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.02876234912646613,
"acc_norm": 0.91,
"acc_norm_stderr": 0.02876234912646613
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8771929824561403,
"acc_stderr": 0.02517298435015577,
"acc_norm": 0.8771929824561403,
"acc_norm_stderr": 0.02517298435015577
},
"harness|truthfulqa:mc|0": {
"mc1": 0.40024479804161567,
"mc1_stderr": 0.017151605555749138,
"mc2": 0.568876394941753,
"mc2_stderr": 0.015032807114194642
},
"harness|winogrande|5": {
"acc": 0.8113654301499605,
"acc_stderr": 0.010995172318019808
},
"harness|gsm8k|5": {
"acc": 0.5708870356330553,
"acc_stderr": 0.01363336942564724
}
}
```
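Every per-task entry in the JSON above follows the same shape (`"acc"`, `"acc_stderr"`, and optionally `"acc_norm"`), so aggregates such as the MMLU average can be recomputed locally from the loaded results. A minimal sketch; the `mean_acc` helper and the three-task subset are illustrative, not part of the dataset:

```python
# Sketch: recomputing an aggregate accuracy from per-task results.
# `results` here is a small illustrative subset of the JSON shown above;
# in practice it would come from the "results" configuration of this dataset.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.4},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6888888888888889},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.875},
}

def mean_acc(metrics: dict) -> float:
    """Average the 'acc' metric over all MMLU (hendrycksTest) tasks."""
    accs = [v["acc"] for k, v in metrics.items()
            if k.startswith("harness|hendrycksTest-")]
    return sum(accs) / len(accs)

print(round(mean_acc(results), 4))
```

The `"harness|<task>|<n_shots>"` key format makes it easy to filter by benchmark family the same way when all 57 MMLU tasks are present.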
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ZHENGRAN/code_ujb_testgen | ---
dataset_info:
features:
- name: test_class_signature
dtype: string
- name: function
dtype: string
- name: prompt_complete_with_comment
dtype: string
- name: location
dtype: string
- name: be_test_import_context
dtype: string
- name: be_test_class_function_signature_context
dtype: string
- name: end
dtype: int64
- name: function_name
dtype: string
- name: prompt_chat_with_comment
dtype: string
- name: start
dtype: int64
- name: prompt_complete
dtype: string
- name: comment
dtype: string
- name: bug_id
dtype: int64
- name: be_test_class_long_name
dtype: string
- name: source_dir
dtype: string
- name: prompt_chat
dtype: string
- name: be_test_class_signature
dtype: string
- name: test_import_context
dtype: string
- name: test_class_function_signature_context
dtype: string
- name: task_id
dtype: string
- name: testmethods
sequence: string
- name: be_test_class_field_context
dtype: string
- name: function_signature
dtype: string
- name: test_class_field_context
dtype: string
- name: project
dtype: string
- name: source
dtype: string
- name: indent
dtype: string
- name: classmethods
list:
- name: be_test_class_file
dtype: string
- name: be_test_class_name
dtype: string
- name: be_test_function_name
dtype: string
- name: be_test_function_signature
dtype: string
- name: line_numbers
sequence: string
- name: method_line_rate
dtype: float64
splits:
- name: train
num_bytes: 9537122
num_examples: 140
download_size: 1701724
dataset_size: 9537122
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
raminass/full_opinions_1994_2020 | ---
dataset_info:
features:
- name: author_name
dtype: string
- name: category
dtype: string
- name: per_curiam
dtype: bool
- name: case_name
dtype: string
- name: date_filed
dtype: string
- name: federal_cite_one
dtype: string
- name: absolute_url
dtype: string
- name: cluster
dtype: string
- name: year_filed
dtype: int64
- name: scdb_id
dtype: string
- name: scdb_decision_direction
dtype: float64
- name: scdb_votes_majority
dtype: float64
- name: scdb_votes_minority
dtype: float64
- name: text
dtype: string
- name: clean_text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 180294148
num_examples: 3790
download_size: 94040799
dataset_size: 180294148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shengcao1006/MMHal-Bench | ---
arxiv: 2309.14525
license: apache-2.0
task_categories:
- visual-question-answering
- image-to-text
language:
- en
pretty_name: MMHal-Bench
size_categories:
- n<1K
---
### Overview
MMHal-Bench is a new evaluation benchmark specifically designed to measure hallucination in Large Multimodal Models (LMMs). It contains 96 challenging questions based on images from OpenImages, together with their corresponding ground-truth answers and image contents.
You may check `response_template.json` for more details. In the folder `responses` we have included some example responses from representative LMMs.
### Usage
To evaluate your own model on MMHal-Bench, first generate model responses to the image-question pairs. You may check the template `get_response.py` to see how to read from and write to the response file.
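For instance, filling in a response file might look like the following minimal sketch. Note that the field names (`image_src`, `question`, `model_answer`) and the output filename are illustrative assumptions; the authoritative schema is defined by `response_template.json` and `get_response.py` in the repository:

```python
import json

def generate_answer(image_src, question):
    # Stand-in for your LMM's actual inference call.
    return "a placeholder answer"

# Illustrative records; the real schema comes from response_template.json.
records = [
    {"image_src": "https://example.com/sample.jpg",
     "question": "What is unusual about this image?"},
]

# Fill in one model answer per image-question pair.
for record in records:
    record["model_answer"] = generate_answer(record["image_src"], record["question"])

# Save the filled-in responses for the GPT-4 evaluation step below.
with open("my_model_responses.json", "w") as f:
    json.dump(records, f, indent=2)
```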
After that, you may let GPT-4 rate your model's responses automatically. You will need package `openai` installed and an API key. Then, run `eval_gpt4.py`:
```
python eval_gpt4.py \
--response [JSON file with model responses] \
--evaluation [JSON file with GPT-4 evaluation to be saved] \
--api-key [your OpenAI API key, starting with 'sk-'] \
--gpt-model [GPT model to be used, or 'gpt-4-0314' by default]
```
Please note that the GPT-4 API calls are not free. Depending on your model response lengths, evaluating each question may use 1.5k-2k tokens. Also, GPT-4 responses are not deterministic, so you may get different results with the same responses.
At the end of the outputs, you can see the evaluation results like this:
```
Average score: 2.05
Hallucination rate: 0.61
Average score for each question type: 2.33,1.25,2,2.5,1.5,3.33,2.33,1.17
``` |
hemachandher/freshdataset | ---
dataset_info:
features:
- name: image
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: text
dtype: string
splits:
- name: train
num_bytes: 138098403
num_examples: 1001
download_size: 100680724
dataset_size: 138098403
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
one-sec-cv12/chunk_146 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 19708857504.25
num_examples: 205198
download_size: 17829413682
dataset_size: 19708857504.25
---
# Dataset Card for "chunk_146"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wenbopan/OpenOrca-zh-20k | ---
license: apache-2.0
dataset_info:
- config_name: en
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 359541091.33014905
num_examples: 200000
download_size: 205541392
dataset_size: 359541091.33014905
- config_name: zh
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 36081175
num_examples: 19836
download_size: 22533011
dataset_size: 36081175
configs:
- config_name: en
data_files:
- split: train
path: en/train-*
- config_name: zh
data_files:
- split: train
path: zh/train-*
task_categories:
- question-answering
- text-generation
language:
- zh
- en
tags:
- synthetic
---
# Dataset Card for 'OpenOrca-zh-20k'
This is the Chinese version of [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) from [Azure99/blossom-orca-v3](https://huggingface.co/datasets/Azure99/blossom-orca-v3).
Compared to [Azure99/blossom-orca-v3](https://huggingface.co/datasets/Azure99/blossom-orca-v3):
- This dataset extracts all Chinese blossom-orca-v3 samples (around 20K) into a separate `zh` split.
- All samples are formatted in the `orca` format with an optional `system` role in the first round.
- Instead of using a 1:1 En-Zh ratio as in blossom-orca-v3, this dataset contains 200K GPT-4-generated English samples from OpenOrca in the `en` split. |
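Concretely, a row of this dataset (fields `id`, `system_prompt`, `question`, `response`, per the YAML schema above) can be mapped to a chat-message list with the system turn kept optional. The sample values below are invented; real rows come from `load_dataset("wenbopan/OpenOrca-zh-20k", "zh")`:

```python
def to_messages(row):
    """Map an OpenOrca-style row to chat messages.

    The system role is emitted only when system_prompt is non-empty,
    matching the 'optional system role in the first round' convention.
    """
    messages = []
    if row.get("system_prompt"):
        messages.append({"role": "system", "content": row["system_prompt"]})
    messages.append({"role": "user", "content": row["question"]})
    messages.append({"role": "assistant", "content": row["response"]})
    return messages

# Invented sample row with an empty system prompt.
row = {"id": "0", "system_prompt": "", "question": "1+1=?", "response": "2"}
print(to_messages(row))
```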