datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
adrionthiago/rickx | ---
license: openrail
---
|
Multimodal-Fatima/vocab_with_openai_classes | ---
dataset_info:
features:
- name: prompt_descriptions
dtype: string
splits:
- name: train
num_bytes: 376362
num_examples: 24741
download_size: 324909
dataset_size: 376362
---
# Dataset Card for "vocab_with_openai_classes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chuyin0321/earnings-forecast-stocks | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: id
dtype: int64
- name: fiscal_end
dtype: string
- name: consensus_eps_forecast
dtype: float64
- name: high_eps_forecast
dtype: float64
- name: low_eps_forecast
dtype: float64
- name: no_of_estimates
dtype: int64
- name: up
dtype: int64
- name: down
dtype: int64
splits:
- name: train
num_bytes: 509571
num_examples: 5699
download_size: 92802
dataset_size: 509571
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "earnings-forecast-stocks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
codebender/faq-vector-embeddings | ---
license: mit
language:
- en
tags:
- us-medical
pretty_name: faq-vector-embeddings
--- |
YiDuo1999/medpub | ---
license: mit
language:
- en
--- |
valerielucro/Preference-Dataset-sample | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 923895
num_examples: 525
download_size: 485838
dataset_size: 923895
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
theGhoul21/t-pas-test-light-3 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4091581
num_examples: 12220
download_size: 2180249
dataset_size: 4091581
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thomasavare/deepl_output | ---
language:
- en
---
Transcription of waste-classification-audio-deepl using the Whisper small ASR model, together with the original text from before the Italian-translation + text-to-speech + Italian-to-English ASR pipeline. |
theblackcat102/gpt-4v-eval-samples | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: conversations
dtype: string
splits:
- name: test
num_bytes: 334178840.35
num_examples: 1682
download_size: 324453952
dataset_size: 334178840.35
---
# GPT-4V Eval samples
This is a hand-curated set of images from the web, together with questions I asked GPT-4V, intended to probe its abilities and limits.
I mainly focus on the localization, OCR, and scene-understanding abilities of GPT-4V's vision module, so the language side is skipped, as we have already seen it in GPT-4. As long as GPT-4V can extract the required information as text, the rest of the LLM shouldn't have any issue answering the remaining questions.
The number of examples is still pretty tiny and will continue to grow until I am satisfied with the size, so please check back from time to time.
Note: the dataset viewer has a bug that causes the displayed images to differ from the actual dataset (due to frequent updates). Please load the dataset and save it to a local path for best accuracy.
## How to use:
```python
import json
from datasets import load_dataset
dataset = load_dataset('theblackcat102/gpt-4v-eval-samples')['test']
print(dataset[0]['image'])
print(json.loads(dataset[0]['conversations']))
```
## Contributions
Please checkout my github repo for more details : [theblackcat102/gpt-4v-samples](https://github.com/theblackcat102/gpt-4v-samples)
## Citation
```
@article{yang2023dawn,
title={The Dawn of LMMs: Preliminary Explorations with GPT-4V (ision)},
author={Yang, Zhengyuan and Li, Linjie and Lin, Kevin and Wang, Jianfeng and Lin, Chung-Ching and Liu, Zicheng and Wang, Lijuan},
journal={arXiv preprint arXiv:2309.17421},
year={2023}
}
```
|
cwchoi/whisper_small_tele | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 27288695432
num_examples: 28409
- name: test
num_bytes: 3411941944
num_examples: 3552
- name: valid
num_bytes: 3410971152
num_examples: 3551
download_size: 5240018465
dataset_size: 34111608528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
hongji-s/test_curated_dataset | ---
dataset_info:
features:
- name: conversations
dtype: string
- name: source
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: generated_instruction
dtype: string
- name: filtered_instruction
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 25614
num_examples: 5
download_size: 41512
dataset_size: 25614
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RENILSON/cloneadolescente | ---
license: openrail
---
|
tasksource/sts-companion | ---
license: apache-2.0
task_categories:
- sentence-similarity
- text-classification
language:
- en
tags:
- sts
---
https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark
The companion datasets to the STS Benchmark comprise the rest of the English datasets used in the STS tasks organized by us in the context of SemEval between 2012 and 2017.
The authors collated two datasets: one with sentence pairs related to machine-translation evaluation, and another with the remaining datasets, which can be used for domain-adaptation studies.
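For context, systems on these STS datasets are conventionally scored by the Pearson correlation between predicted and gold similarity scores (a 0–5 scale). A minimal, dependency-free sketch; the score values below are made up for illustration, not taken from the data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gold = [5.0, 3.2, 0.0, 4.1, 1.5]  # annotator similarity scores (illustrative)
pred = [4.8, 2.9, 0.4, 4.5, 1.0]  # system outputs (illustrative)
print(round(pearson(gold, pred), 3))
```

`scipy.stats.pearsonr` computes the same quantity if SciPy is available.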
```bib
@inproceedings{cer-etal-2017-semeval,
title = "{S}em{E}val-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation",
author = "Cer, Daniel and
Diab, Mona and
Agirre, Eneko and
Lopez-Gazpio, I{\~n}igo and
Specia, Lucia",
booktitle = "Proceedings of the 11th International Workshop on Semantic Evaluation ({S}em{E}val-2017)",
month = aug,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S17-2001",
doi = "10.18653/v1/S17-2001",
pages = "1--14",
}
``` |
amphora/kobest-trans-en | ---
license: cc-by-sa-4.0
---
|
RIW/butterfly_wm_50_1 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 96434374.0
num_examples: 949
download_size: 96449437
dataset_size: 96434374.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
guoyu-zhang/shp_4 | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 867655
num_examples: 1000
download_size: 574450
dataset_size: 867655
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
irds/tripclick_train_torso | ---
pretty_name: '`tripclick/train/torso`'
viewer: false
source_datasets: ['irds/tripclick']
task_categories:
- text-retrieval
---
# Dataset Card for `tripclick/train/torso`
The `tripclick/train/torso` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/torso).
# Data
This dataset provides:
- `queries` (i.e., topics); count=105,964
- `qrels`: (relevance assessments); count=966,898
- For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/tripclick_train_torso', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/tripclick_train_torso', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Rekabsaz2021TripClick,
title={TripClick: The Log Files of a Large Health Web Search Engine},
author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff},
year={2021},
booktitle={SIGIR}
}
```
|
PocketDoc/Retro-YahooAnswers | ---
task_categories:
- question-answering
language:
- en
tags:
- not-for-all-audiences
- alpaca
pretty_name: Retro Yahoo! Answers
size_categories:
- 1M<n<10M
---
### Description
This is an instruct-style dataset built from a 2007 scrape of the Yahoo! Answers website. It comprises 10 categories, labeled 1-10:
1. Society & Culture
2. Science & Mathematics
3. Health
4. Education & Reference
5. Computers & Internet
6. Sports
7. Business & Finance
8. Entertainment & Music
9. Family & Relationships
10. Politics & Government
The subject line and body of the question have been combined into a single field and separated by a newline character.
I would caution against using this dataset for any serious application, as it contains hilariously out-of-date information, offensive language, and frequent spelling and grammar errors. It is, however, a charming snapshot of the internet in 2007.
**Roughly 228M Llama tokens in 1.4M samples**
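Since the subject line and body are joined by a single newline as described above, they can be recovered with one split on the first newline. The example string is invented for illustration:

```python
# Recover the subject and body from the combined question field.
combined = "Why is the sky blue?\nI've always wondered about this. Thanks!"

# partition splits on the first newline only, so newlines inside the body survive
subject, _, body = combined.partition("\n")
print(subject)  # Why is the sky blue?
print(body)     # I've always wondered about this. Thanks!
```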
### Original README
>Yahoo! Answers Topic Classification Dataset
>
>Version 2, Updated 09/09/2015
>
>
>ORIGIN
>
>The original Yahoo! Answers corpus can be obtained through the Yahoo! Research Alliance Webscope program. The dataset is to be used for approved non-commercial research purposes by recipients who have signed a Data Sharing Agreement with Yahoo!. The dataset is the Yahoo! Answers corpus as of 10/25/2007. It includes all the questions and their corresponding answers. The corpus contains 4483032 questions and their answers.
>
>The Yahoo! Answers topic classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the above dataset. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
>
>
>DESCRIPTION
>
>The Yahoo! Answers topic classification dataset is constructed using 10 largest main categories. Each class contains 140,000 training samples and 6,000 testing samples. Therefore, the total number of training samples is 1,400,000 and testing samples 60,000 in this dataset. From all the answers and other meta-information, we only used the best answer content and the main category information.
>
>The file classes.txt contains a list of classes corresponding to each label.
>
>The files train.csv and test.csv contain all the training samples as comma-sparated values. There are 4 columns in them, corresponding to class index (1 to 10), question title, question content and best answer. The text fields are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed with an "n" character, that is "\n". |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/7f1103ad | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1334
dataset_size: 182
---
# Dataset Card for "7f1103ad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fathyshalab/reklamation24_haus-reinigung-intent | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 199635
num_examples: 395
- name: test
num_bytes: 54472
num_examples: 99
download_size: 140834
dataset_size: 254107
---
# Dataset Card for "reklamation24_haus-reinigung-intent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stanmalkinson199/MikeBirch | ---
license: openrail
---
|
qualitydatalab/autotrain-data-car-review-project | ---
language:
- en
task_categories:
- text-classification
---
# AutoTrain Dataset for project: car-review-project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project car-review-project. It contains consumer car ratings and reviews from [Edmunds website](https://www.kaggle.com/datasets/ankkur13/edmundsconsumer-car-ratings-and-reviews)
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": " ",
"target": 1
},
{
"text": " Mazda truck costs less than the sister look-a-like Ford; Mazda is a \"A\" series of the Ford Ranger, [...]",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['great', 'ok', 'poor'], id=None)"
}
```
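The integer `target` in the samples above indexes into the `ClassLabel` names listed in the fields. A minimal sketch of decoding it offline (when loading with 🤗 `datasets`, `ClassLabel.int2str` does the same):

```python
# Decode the integer `target` values shown in the samples above into class
# names, using the ClassLabel names from the fields listed earlier.
names = ["great", "ok", "poor"]  # ClassLabel(num_classes=3, names=[...])

def decode(target: int) -> str:
    """Map a class index to its label name."""
    return names[target]

print(decode(1))  # ok   -- the first sample above
print(decode(2))  # poor -- the Mazda review sample
```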
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 19731 |
| valid | 4935 |
|
CyberHarem/ppk_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ppk/PPK/PPK (Girls' Frontline)
This is the dataset of ppk/PPK/PPK (Girls' Frontline), containing 182 images and their tags.
The core tags of this character are `long_hair, earrings, brown_eyes, hair_ornament, blonde_hair, very_long_hair, breasts, light_brown_hair, cross_earrings, hairband, frilled_hairband, bangs, ribbon`, which are pruned in this dataset.
Images were crawled from many sites (e.g., Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 182 | 281.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ppk_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 182 | 137.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ppk_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 453 | 301.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ppk_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 182 | 236.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ppk_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 453 | 458.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ppk_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ppk_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag-clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 28 |  |  |  |  |  | 1girl, puffy_short_sleeves, frills, solo, jewelry, walther, handgun, black_gloves, cross, holding_gun, black_dress, looking_at_viewer, smile, gothic_lolita, yellow_eyes, simple_background |
| 1 | 8 |  |  |  |  |  | 1girl, black_dress, cross, jewelry, solo, looking_at_viewer, mod3_(girls'_frontline), small_breasts, bare_shoulders, black_gloves, choker, hairclip, hair_ribbon, collarbone, medium_breasts, official_alternate_costume, simple_background, smile, walther, black_footwear, full_body, thighhighs, white_background |
| 2 | 13 |  |  |  |  |  | 1girl, elbow_gloves, jewelry, looking_at_viewer, race_queen, solo, official_alternate_costume, cross, fingerless_gloves, medium_breasts, blush, checkered_flag, smile, visor_cap, white_headwear, holding_flag, thigh_boots, black_footwear, black_thighhighs, thighs |
| 3 | 30 |  |  |  |  |  | 1girl, solo, jewelry, looking_at_viewer, official_alternate_costume, smile, navel, blush, cross, black_bikini, hairclip, medium_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | puffy_short_sleeves | frills | solo | jewelry | walther | handgun | black_gloves | cross | holding_gun | black_dress | looking_at_viewer | smile | gothic_lolita | yellow_eyes | simple_background | mod3_(girls'_frontline) | small_breasts | bare_shoulders | choker | hairclip | hair_ribbon | collarbone | medium_breasts | official_alternate_costume | black_footwear | full_body | thighhighs | white_background | elbow_gloves | race_queen | fingerless_gloves | blush | checkered_flag | visor_cap | white_headwear | holding_flag | thigh_boots | black_thighhighs | thighs | navel | black_bikini |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------|:---------|:-------|:----------|:----------|:----------|:---------------|:--------|:--------------|:--------------|:--------------------|:--------|:----------------|:--------------|:--------------------|:--------------------------|:----------------|:-----------------|:---------|:-----------|:--------------|:-------------|:-----------------|:-----------------------------|:-----------------|:------------|:-------------|:-------------------|:---------------|:-------------|:--------------------|:--------|:-----------------|:------------|:-----------------|:---------------|:--------------|:-------------------|:---------|:--------|:---------------|
| 0 | 28 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | | X | X | X | | X | X | | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | X | | | X | X | | | | X | | | X | X | | | | | | | | | | | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | | |
| 3 | 30 |  |  |  |  |  | X | | | X | X | | | | X | | | X | X | | | | | | | | X | | | X | X | | | | | | | | X | | | | | | | | X | X |
|
akjindal53244/testing-1 | ---
license: apache-2.0
---
|
Tom-nerd/English-signs-with-text | ---
license: mit
language:
- en
size_categories:
- n<1K
---
This dataset contains 67 images, taken around Kent, of signs that have text on them. The images are cropped to varying degrees. |
tuqinabc/test | ---
license: mit
---
|
open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama | ---
pretty_name: Evaluation run of OrionStarAI/OrionStar-Yi-34B-Chat-Llama
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-03T18:22:03.358595](https://huggingface.co/datasets/open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama/blob/main/results_2023-12-03T18-22-03.358595.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5390447308567097,\n\
\ \"acc_stderr\": 0.013730428449116344\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.5390447308567097,\n \"acc_stderr\": 0.013730428449116344\n\
\ }\n}\n```"
repo_url: https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_03T17_19_19.971847
path:
- '**/details_harness|gsm8k|5_2023-12-03T17-19-19.971847.parquet'
- split: 2023_12_03T17_20_20.086635
path:
- '**/details_harness|gsm8k|5_2023-12-03T17-20-20.086635.parquet'
- split: 2023_12_03T18_21_56.763818
path:
- '**/details_harness|gsm8k|5_2023-12-03T18-21-56.763818.parquet'
- split: 2023_12_03T18_22_03.358595
path:
- '**/details_harness|gsm8k|5_2023-12-03T18-22-03.358595.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-03T18-22-03.358595.parquet'
- config_name: results
data_files:
- split: 2023_12_03T17_19_19.971847
path:
- results_2023-12-03T17-19-19.971847.parquet
- split: 2023_12_03T17_20_20.086635
path:
- results_2023-12-03T17-20-20.086635.parquet
- split: 2023_12_03T18_21_56.763818
path:
- results_2023-12-03T18-21-56.763818.parquet
- split: 2023_12_03T18_22_03.358595
path:
- results_2023-12-03T18-22-03.358595.parquet
- split: latest
path:
- results_2023-12-03T18-22-03.358595.parquet
---
# Dataset Card for Evaluation run of OrionStarAI/OrionStar-Yi-34B-Chat-Llama
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T18:22:03.358595](https://huggingface.co/datasets/open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama/blob/main/results_2023-12-03T18-22-03.358595.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5390447308567097,
"acc_stderr": 0.013730428449116344
},
"harness|gsm8k|5": {
"acc": 0.5390447308567097,
"acc_stderr": 0.013730428449116344
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
tvergho/maestro | ---
dataset_info:
features:
- name: image
dtype: image
- name: audio_file
dtype: string
- name: slice
dtype: int16
splits:
- name: train
num_bytes: 8059364821.5
num_examples: 59668
download_size: 8051660600
dataset_size: 8059364821.5
---
# Dataset Card for "maestro"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/trec-spanish_trec4 | ---
pretty_name: '`trec-spanish/trec4`'
viewer: false
source_datasets: ['irds/trec-spanish']
task_categories:
- text-retrieval
---
# Dataset Card for `trec-spanish/trec4`
The `trec-spanish/trec4` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-spanish#trec-spanish/trec4).
# Data
This dataset provides:
- `queries` (i.e., topics); count=25
- `qrels`: (relevance assessments); count=13,109
- For `docs`, use [`irds/trec-spanish`](https://huggingface.co/datasets/irds/trec-spanish)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/trec-spanish_trec4', 'queries')
for record in queries:
record # {'query_id': ..., 'description_es1': ..., 'description_en1': ..., 'description_es2': ..., 'description_en2': ...}
qrels = load_dataset('irds/trec-spanish_trec4', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Harman1995Trec4,
title={Overview of the Fourth Text REtrieval Conference (TREC-4)},
author={Donna Harman},
booktitle={TREC},
year={1995}
}
@misc{Rogers2000Spanish,
title={TREC Spanish LDC2000T51},
author={Rogers, Willie},
year={2000},
url={https://catalog.ldc.upenn.edu/LDC2000T51},
publisher={Linguistic Data Consortium}
}
```
|
nayohan/029_book | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15659070191
num_examples: 57000000
download_size: 9588594881
dataset_size: 15659070191
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
adityamwagh/imdb-embeddings-cohere | ---
license: gpl
language:
- en
size_categories:
- 10K<n<100K
---
movie recommendation embeddings |
ImageIN/IA_unlabelled | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: 'Internet Archive historic book pages unlabelled.'
size_categories: []
source_datasets: []
tags: []
task_categories: []
task_ids: []
---
# Data card for Internet Archive historic book pages unlabelled.
- `10,844,387` unlabelled pages from historical books from the Internet Archive.
- Intended to be used for:
- pre-training computer vision models in an unsupervised manner
- using weak supervision to generate labels |
irds/gov2_trec-tb-2006 | ---
pretty_name: '`gov2/trec-tb-2006`'
viewer: false
source_datasets: ['irds/gov2']
task_categories:
- text-retrieval
---
# Dataset Card for `gov2/trec-tb-2006`
The `gov2/trec-tb-2006` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/gov2#gov2/trec-tb-2006).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=31,984
- For `docs`, use [`irds/gov2`](https://huggingface.co/datasets/irds/gov2)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/gov2_trec-tb-2006', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/gov2_trec-tb-2006', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Buttcher2006TrecTerabyte,
title={The TREC 2006 Terabyte Track},
author={Stefan B\"uttcher and Charles L. A. Clarke and Ian Soboroff},
booktitle={TREC},
year={2006}
}
```
|
distilled-one-sec-cv12-each-chunk-uniq/chunk_273 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 294027676.0
num_examples: 57293
download_size: 298555793
dataset_size: 294027676.0
---
# Dataset Card for "chunk_273"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_qa_no_id_v5_full_recite_ans_sent_random_permute_rerun_2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 5320998.8570247935
num_examples: 3365
- name: validation
num_bytes: 402971
num_examples: 300
download_size: 1441265
dataset_size: 5723969.8570247935
---
# Dataset Card for "squad_qa_no_id_v5_full_recite_ans_sent_random_permute_rerun_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
makram93/accepted_pairs_50 | ---
dataset_info:
features:
- name: url
dtype: string
- name: doc_id
dtype: string
- name: original_title
sequence: string
- name: right
dtype: string
- name: left
dtype: string
splits:
- name: train
num_bytes: 88447.0623234648
num_examples: 100
download_size: 78941
dataset_size: 88447.0623234648
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "accepted_pairs_50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fddemarco/pushshift-reddit | ---
dataset_info:
features:
- name: author
dtype: string
- name: created_utc
dtype: int64
- name: id
dtype: string
- name: num_comments
dtype: int64
- name: score
dtype: int64
- name: selftext
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 20253299583
num_examples: 121782217
download_size: 20253299583
dataset_size: 20253299583
---
# Dataset Card for "pushshift-reddit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
graphmc/minecraft_packet_varint_and_varlong | ---
license: mit
---
### varint dataset
path = /varints
[(int32, varint bytes)]
### varlong dataset
path = /varlongs
[(int64, varlong bytes)]
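
The byte columns above can be interpreted with the Minecraft protocol's LEB128-style encoding. Assuming the dataset uses the standard protocol format (7 data bits per byte, little-endian groups, MSB as continuation flag, two's complement for negative values — an assumption, since the card does not state it), a minimal encoder/decoder sketch:

```python
def encode_varint(value: int, bits: int = 32) -> bytes:
    """Encode a signed integer as a Minecraft-protocol VarInt (bits=32)
    or VarLong (bits=64).

    The value is reinterpreted as unsigned via two's complement, then
    emitted in little-endian groups of 7 bits; the high bit of each
    byte signals that more bytes follow.
    """
    value &= (1 << bits) - 1  # two's-complement reinterpretation
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        out.append(byte | (0x80 if value else 0))
        if not value:
            return bytes(out)


def decode_varint(data: bytes, bits: int = 32) -> int:
    """Decode a Minecraft-protocol VarInt/VarLong back to a signed int."""
    result = 0
    for shift, byte in zip(range(0, bits + 7, 7), data):
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            break
    if result >= 1 << (bits - 1):  # restore the sign
        result -= 1 << bits
    return result
```

Under this encoding, small positive values are compact (300 encodes to two bytes, `AC 02`) while negative values always occupy the maximum length (5 bytes for a VarInt, 10 for a VarLong).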
|
Stanley8712/telugu3 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 46422230
num_examples: 100000
download_size: 24683316
dataset_size: 46422230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yuan-sf63/chenyu_label_0.5_16 | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
splits:
- name: train
num_bytes: 6743113.545731417
num_examples: 37825
- name: validation
num_bytes: 749274.4542685829
num_examples: 4203
download_size: 0
dataset_size: 7492388.0
---
# Dataset Card for "chenyu_label_0.5_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_osanseviero__mistral-instruct-slerp | ---
pretty_name: Evaluation run of osanseviero/mistral-instruct-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [osanseviero/mistral-instruct-slerp](https://huggingface.co/osanseviero/mistral-instruct-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_osanseviero__mistral-instruct-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-10T19:39:10.172387](https://huggingface.co/datasets/open-llm-leaderboard/details_osanseviero__mistral-instruct-slerp/blob/main/results_2024-01-10T19-39-10.172387.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5514236008900887,\n\
\ \"acc_stderr\": 0.033791449375361236,\n \"acc_norm\": 0.5561976919598308,\n\
\ \"acc_norm_stderr\": 0.03449972215885168,\n \"mc1\": 0.41615667074663404,\n\
\ \"mc1_stderr\": 0.01725565750290304,\n \"mc2\": 0.5761316177255528,\n\
\ \"mc2_stderr\": 0.015724067025526787\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5349829351535836,\n \"acc_stderr\": 0.014575583922019672,\n\
\ \"acc_norm\": 0.5742320819112628,\n \"acc_norm_stderr\": 0.014449464278868814\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5846444931288588,\n\
\ \"acc_stderr\": 0.004917761181740162,\n \"acc_norm\": 0.7834096793467437,\n\
\ \"acc_norm_stderr\": 0.00411079202343171\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.48148148148148145,\n\
\ \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.48148148148148145,\n\
\ \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.038607315993160904,\n\
\ \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.038607315993160904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n\
\ \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.660377358490566,\n \"acc_stderr\": 0.02914690474779833,\n\
\ \"acc_norm\": 0.660377358490566,\n \"acc_norm_stderr\": 0.02914690474779833\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6180555555555556,\n\
\ \"acc_stderr\": 0.04062990784146667,\n \"acc_norm\": 0.6180555555555556,\n\
\ \"acc_norm_stderr\": 0.04062990784146667\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n\
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5549132947976878,\n\
\ \"acc_stderr\": 0.037894017602836484,\n \"acc_norm\": 0.5549132947976878,\n\
\ \"acc_norm_stderr\": 0.037894017602836484\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.047551296160629475,\n\
\ \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.047551296160629475\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n\
\ \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.451063829787234,\n \"acc_stderr\": 0.03252909619613197,\n\
\ \"acc_norm\": 0.451063829787234,\n \"acc_norm_stderr\": 0.03252909619613197\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n\
\ \"acc_stderr\": 0.04615186962583703,\n \"acc_norm\": 0.40350877192982454,\n\
\ \"acc_norm_stderr\": 0.04615186962583703\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4074074074074074,\n \"acc_stderr\": 0.025305906241590632,\n \"\
acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.025305906241590632\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n\
\ \"acc_stderr\": 0.04343525428949098,\n \"acc_norm\": 0.38095238095238093,\n\
\ \"acc_norm_stderr\": 0.04343525428949098\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4064516129032258,\n\
\ \"acc_stderr\": 0.02794172734625631,\n \"acc_norm\": 0.4064516129032258,\n\
\ \"acc_norm_stderr\": 0.02794172734625631\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4433497536945813,\n \"acc_stderr\": 0.03495334582162934,\n\
\ \"acc_norm\": 0.4433497536945813,\n \"acc_norm_stderr\": 0.03495334582162934\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\"\
: 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.03713158067481913,\n\
\ \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.03713158067481913\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7474747474747475,\n \"acc_stderr\": 0.030954055470365897,\n \"\
acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.030954055470365897\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8082901554404145,\n \"acc_stderr\": 0.028408953626245282,\n\
\ \"acc_norm\": 0.8082901554404145,\n \"acc_norm_stderr\": 0.028408953626245282\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4948717948717949,\n \"acc_stderr\": 0.025349672906838653,\n\
\ \"acc_norm\": 0.4948717948717949,\n \"acc_norm_stderr\": 0.025349672906838653\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.25925925925925924,\n \"acc_stderr\": 0.026719240783712173,\n \
\ \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.026719240783712173\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.0322529423239964,\n \
\ \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.0322529423239964\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7577981651376147,\n \"acc_stderr\": 0.01836817630659862,\n \"\
acc_norm\": 0.7577981651376147,\n \"acc_norm_stderr\": 0.01836817630659862\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608044,\n \"\
acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608044\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6666666666666666,\n \"acc_stderr\": 0.033086111132364364,\n \"\
acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.033086111132364364\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7172995780590717,\n \"acc_stderr\": 0.029312814153955927,\n \
\ \"acc_norm\": 0.7172995780590717,\n \"acc_norm_stderr\": 0.029312814153955927\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5964125560538116,\n\
\ \"acc_stderr\": 0.03292802819330313,\n \"acc_norm\": 0.5964125560538116,\n\
\ \"acc_norm_stderr\": 0.03292802819330313\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n\
\ \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7107438016528925,\n \"acc_stderr\": 0.04139112727635463,\n \"\
acc_norm\": 0.7107438016528925,\n \"acc_norm_stderr\": 0.04139112727635463\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.04330043749650743,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.04330043749650743\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6871165644171779,\n \"acc_stderr\": 0.036429145782924055,\n\
\ \"acc_norm\": 0.6871165644171779,\n \"acc_norm_stderr\": 0.036429145782924055\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n\
\ \"acc_stderr\": 0.02250903393707779,\n \"acc_norm\": 0.8632478632478633,\n\
\ \"acc_norm_stderr\": 0.02250903393707779\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7522349936143039,\n\
\ \"acc_stderr\": 0.015438083080568965,\n \"acc_norm\": 0.7522349936143039,\n\
\ \"acc_norm_stderr\": 0.015438083080568965\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5982658959537572,\n \"acc_stderr\": 0.026394104177643634,\n\
\ \"acc_norm\": 0.5982658959537572,\n \"acc_norm_stderr\": 0.026394104177643634\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2860335195530726,\n\
\ \"acc_stderr\": 0.015113972129062143,\n \"acc_norm\": 0.2860335195530726,\n\
\ \"acc_norm_stderr\": 0.015113972129062143\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5947712418300654,\n \"acc_stderr\": 0.02811092849280907,\n\
\ \"acc_norm\": 0.5947712418300654,\n \"acc_norm_stderr\": 0.02811092849280907\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6077170418006431,\n\
\ \"acc_stderr\": 0.02773125864701199,\n \"acc_norm\": 0.6077170418006431,\n\
\ \"acc_norm_stderr\": 0.02773125864701199\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.02712511551316685,\n\
\ \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.02712511551316685\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.41134751773049644,\n \"acc_stderr\": 0.02935491115994098,\n \
\ \"acc_norm\": 0.41134751773049644,\n \"acc_norm_stderr\": 0.02935491115994098\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.39113428943937417,\n\
\ \"acc_stderr\": 0.012463861839982064,\n \"acc_norm\": 0.39113428943937417,\n\
\ \"acc_norm_stderr\": 0.012463861839982064\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.47794117647058826,\n \"acc_stderr\": 0.030343264224213535,\n\
\ \"acc_norm\": 0.47794117647058826,\n \"acc_norm_stderr\": 0.030343264224213535\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.553921568627451,\n \"acc_stderr\": 0.020109864547181354,\n \
\ \"acc_norm\": 0.553921568627451,\n \"acc_norm_stderr\": 0.020109864547181354\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n\
\ \"acc_stderr\": 0.046534298079135075,\n \"acc_norm\": 0.6181818181818182,\n\
\ \"acc_norm_stderr\": 0.046534298079135075\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.689795918367347,\n \"acc_stderr\": 0.029613459872484378,\n\
\ \"acc_norm\": 0.689795918367347,\n \"acc_norm_stderr\": 0.029613459872484378\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.3383084577114428,\n\
\ \"acc_stderr\": 0.033455630703391914,\n \"acc_norm\": 0.3383084577114428,\n\
\ \"acc_norm_stderr\": 0.033455630703391914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4397590361445783,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.4397590361445783,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.031267817146631786,\n\
\ \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.031267817146631786\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.41615667074663404,\n\
\ \"mc1_stderr\": 0.01725565750290304,\n \"mc2\": 0.5761316177255528,\n\
\ \"mc2_stderr\": 0.015724067025526787\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7513812154696132,\n \"acc_stderr\": 0.012147314713403108\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3078089461713419,\n \
\ \"acc_stderr\": 0.01271440100992365\n }\n}\n```"
repo_url: https://huggingface.co/osanseviero/mistral-instruct-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|arc:challenge|25_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|gsm8k|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hellaswag|10_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-10T19-39-10.172387.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-10T19-39-10.172387.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- '**/details_harness|winogrande|5_2024-01-10T19-39-10.172387.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-10T19-39-10.172387.parquet'
- config_name: results
data_files:
- split: 2024_01_10T19_39_10.172387
path:
- results_2024-01-10T19-39-10.172387.parquet
- split: latest
path:
- results_2024-01-10T19-39-10.172387.parquet
---
# Dataset Card for Evaluation run of osanseviero/mistral-instruct-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [osanseviero/mistral-instruct-slerp](https://huggingface.co/osanseviero/mistral-instruct-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_osanseviero__mistral-instruct-slerp",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-10T19:39:10.172387](https://huggingface.co/datasets/open-llm-leaderboard/details_osanseviero__mistral-instruct-slerp/blob/main/results_2024-01-10T19-39-10.172387.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.5514236008900887,
"acc_stderr": 0.033791449375361236,
"acc_norm": 0.5561976919598308,
"acc_norm_stderr": 0.03449972215885168,
"mc1": 0.41615667074663404,
"mc1_stderr": 0.01725565750290304,
"mc2": 0.5761316177255528,
"mc2_stderr": 0.015724067025526787
},
"harness|arc:challenge|25": {
"acc": 0.5349829351535836,
"acc_stderr": 0.014575583922019672,
"acc_norm": 0.5742320819112628,
"acc_norm_stderr": 0.014449464278868814
},
"harness|hellaswag|10": {
"acc": 0.5846444931288588,
"acc_stderr": 0.004917761181740162,
"acc_norm": 0.7834096793467437,
"acc_norm_stderr": 0.00411079202343171
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.043163785995113245,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.043163785995113245
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.038607315993160904,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.038607315993160904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.660377358490566,
"acc_stderr": 0.02914690474779833,
"acc_norm": 0.660377358490566,
"acc_norm_stderr": 0.02914690474779833
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6180555555555556,
"acc_stderr": 0.04062990784146667,
"acc_norm": 0.6180555555555556,
"acc_norm_stderr": 0.04062990784146667
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5549132947976878,
"acc_stderr": 0.037894017602836484,
"acc_norm": 0.5549132947976878,
"acc_norm_stderr": 0.037894017602836484
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.047551296160629475,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.047551296160629475
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.451063829787234,
"acc_stderr": 0.03252909619613197,
"acc_norm": 0.451063829787234,
"acc_norm_stderr": 0.03252909619613197
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.04615186962583703,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.04615186962583703
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.025305906241590632,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.025305906241590632
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.04343525428949098,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.04343525428949098
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4064516129032258,
"acc_stderr": 0.02794172734625631,
"acc_norm": 0.4064516129032258,
"acc_norm_stderr": 0.02794172734625631
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4433497536945813,
"acc_stderr": 0.03495334582162934,
"acc_norm": 0.4433497536945813,
"acc_norm_stderr": 0.03495334582162934
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.03713158067481913,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.03713158067481913
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7474747474747475,
"acc_stderr": 0.030954055470365897,
"acc_norm": 0.7474747474747475,
"acc_norm_stderr": 0.030954055470365897
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8082901554404145,
"acc_stderr": 0.028408953626245282,
"acc_norm": 0.8082901554404145,
"acc_norm_stderr": 0.028408953626245282
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4948717948717949,
"acc_stderr": 0.025349672906838653,
"acc_norm": 0.4948717948717949,
"acc_norm_stderr": 0.025349672906838653
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.026719240783712173,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.026719240783712173
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.0322529423239964,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.0322529423239964
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7577981651376147,
"acc_stderr": 0.01836817630659862,
"acc_norm": 0.7577981651376147,
"acc_norm_stderr": 0.01836817630659862
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.03362277436608044,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.03362277436608044
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.033086111132364364,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.033086111132364364
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7172995780590717,
"acc_stderr": 0.029312814153955927,
"acc_norm": 0.7172995780590717,
"acc_norm_stderr": 0.029312814153955927
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5964125560538116,
"acc_stderr": 0.03292802819330313,
"acc_norm": 0.5964125560538116,
"acc_norm_stderr": 0.03292802819330313
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.648854961832061,
"acc_stderr": 0.04186445163013751,
"acc_norm": 0.648854961832061,
"acc_norm_stderr": 0.04186445163013751
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7107438016528925,
"acc_stderr": 0.04139112727635463,
"acc_norm": 0.7107438016528925,
"acc_norm_stderr": 0.04139112727635463
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.04330043749650743,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.04330043749650743
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6871165644171779,
"acc_stderr": 0.036429145782924055,
"acc_norm": 0.6871165644171779,
"acc_norm_stderr": 0.036429145782924055
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.02250903393707779,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.02250903393707779
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7522349936143039,
"acc_stderr": 0.015438083080568965,
"acc_norm": 0.7522349936143039,
"acc_norm_stderr": 0.015438083080568965
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5982658959537572,
"acc_stderr": 0.026394104177643634,
"acc_norm": 0.5982658959537572,
"acc_norm_stderr": 0.026394104177643634
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2860335195530726,
"acc_stderr": 0.015113972129062143,
"acc_norm": 0.2860335195530726,
"acc_norm_stderr": 0.015113972129062143
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5947712418300654,
"acc_stderr": 0.02811092849280907,
"acc_norm": 0.5947712418300654,
"acc_norm_stderr": 0.02811092849280907
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6077170418006431,
"acc_stderr": 0.02773125864701199,
"acc_norm": 0.6077170418006431,
"acc_norm_stderr": 0.02773125864701199
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.02712511551316685,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.02712511551316685
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.41134751773049644,
"acc_stderr": 0.02935491115994098,
"acc_norm": 0.41134751773049644,
"acc_norm_stderr": 0.02935491115994098
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.39113428943937417,
"acc_stderr": 0.012463861839982064,
"acc_norm": 0.39113428943937417,
"acc_norm_stderr": 0.012463861839982064
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.47794117647058826,
"acc_stderr": 0.030343264224213535,
"acc_norm": 0.47794117647058826,
"acc_norm_stderr": 0.030343264224213535
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.553921568627451,
"acc_stderr": 0.020109864547181354,
"acc_norm": 0.553921568627451,
"acc_norm_stderr": 0.020109864547181354
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.046534298079135075,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.689795918367347,
"acc_stderr": 0.029613459872484378,
"acc_norm": 0.689795918367347,
"acc_norm_stderr": 0.029613459872484378
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.3383084577114428,
"acc_stderr": 0.033455630703391914,
"acc_norm": 0.3383084577114428,
"acc_norm_stderr": 0.033455630703391914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4397590361445783,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.4397590361445783,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.031267817146631786,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.031267817146631786
},
"harness|truthfulqa:mc|0": {
"mc1": 0.41615667074663404,
"mc1_stderr": 0.01725565750290304,
"mc2": 0.5761316177255528,
"mc2_stderr": 0.015724067025526787
},
"harness|winogrande|5": {
"acc": 0.7513812154696132,
"acc_stderr": 0.012147314713403108
},
"harness|gsm8k|5": {
"acc": 0.3078089461713419,
"acc_stderr": 0.01271440100992365
}
}
```
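As a sketch, the per-task accuracies in a results dict shaped like the JSON above can be aggregated into a macro average as follows. The task names and values below are a small illustrative subset copied from the results, not the full file, and the skipping of the `"all"` aggregate key is an assumption about how one would avoid double-counting:

```python
# Sketch: compute a macro-average accuracy over per-task entries in a
# results dict shaped like the JSON above. The entries here are an
# illustrative subset of the full results, not the complete file.
results = {
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6578947368421053},
    "harness|hendrycksTest-business_ethics|5": {"acc": 0.54},
    "harness|winogrande|5": {"acc": 0.7513812154696132},
}

# Average "acc" across all task entries that report it, skipping
# aggregate blocks such as "all" (which already summarize the run).
task_accs = [
    metrics["acc"]
    for task, metrics in results.items()
    if task != "all" and "acc" in metrics
]
macro_acc = sum(task_accs) / len(task_accs)
print(f"macro accuracy over {len(task_accs)} tasks: {macro_acc:.4f}")
```

The same loop applied to the full results dict would reproduce the kind of aggregate reported under the `"all"` key above (up to which tasks and metrics are included in the official aggregation).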
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
AxuJI/cathode-1 | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 55347464.0
num_examples: 56
download_size: 51606062
dataset_size: 55347464.0
---
# Dataset Card for "cathode-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-alex-apostolo__filtered-cuad-alex-apostolo__filtered-cu-fd7768-3096988010 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- alex-apostolo/filtered-cuad
eval_info:
task: extractive_question_answering
model: alex-apostolo/legal-bert-base-filtered-cuad
metrics: ['accuracy']
dataset_name: alex-apostolo/filtered-cuad
dataset_config: alex-apostolo--filtered-cuad
dataset_split: test
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: alex-apostolo/legal-bert-base-filtered-cuad
* Dataset: alex-apostolo/filtered-cuad
* Config: alex-apostolo--filtered-cuad
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pankajm](https://huggingface.co/pankajm) for evaluating this model. |
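The `col_mapping` block in the card's front matter maps the evaluator's expected column names to the dataset's fields, using dot notation (`answers.text`) to reach nested answer fields. A minimal sketch of applying such a mapping to one SQuAD-style record — the `resolve` helper and the sample record are illustrative, not part of AutoTrain:

```python
def resolve(record, dotted_key):
    """Follow a dot-separated path (e.g. 'answers.text') into a nested dict."""
    value = record
    for part in dotted_key.split("."):
        value = value[part]
    return value

# Mapping as declared in the card's eval_info (target name -> source path).
col_mapping = {
    "context": "context",
    "question": "question",
    "answers-text": "answers.text",
    "answers-answer_start": "answers.answer_start",
}

# A hypothetical extractive-QA record in the nested source schema.
record = {
    "context": "The contract is governed by Delaware law.",
    "question": "Which law governs the contract?",
    "answers": {"text": ["Delaware law"], "answer_start": [28]},
}

# Flatten the record into the evaluator's column layout.
flat = {dst: resolve(record, src) for dst, src in col_mapping.items()}
```

The same resolution logic works for arbitrarily deep paths, which is why the mapping stores dotted strings rather than plain column names.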
open-llm-leaderboard/details_Dans-DiscountModels__TinyMistral-v2-Test1 | ---
pretty_name: Evaluation run of Dans-DiscountModels/TinyMistral-v2-Test1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Dans-DiscountModels/TinyMistral-v2-Test1](https://huggingface.co/Dans-DiscountModels/TinyMistral-v2-Test1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Dans-DiscountModels__TinyMistral-v2-Test1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-21T02:38:49.773813](https://huggingface.co/datasets/open-llm-leaderboard/details_Dans-DiscountModels__TinyMistral-v2-Test1/blob/main/results_2024-01-21T02-38-49.773813.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2335310199962483,\n\
\ \"acc_stderr\": 0.02999531007525961,\n \"acc_norm\": 0.23385996059713224,\n\
\ \"acc_norm_stderr\": 0.03078636978062643,\n \"mc1\": 0.25091799265605874,\n\
\ \"mc1_stderr\": 0.015176985027707703,\n \"mc2\": 0.5030342289474727,\n\
\ \"mc2_stderr\": 0.015464982097707176\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.18344709897610922,\n \"acc_stderr\": 0.011310170179554543,\n\
\ \"acc_norm\": 0.2150170648464164,\n \"acc_norm_stderr\": 0.01200571763413361\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.261700856403107,\n\
\ \"acc_stderr\": 0.004386622589119065,\n \"acc_norm\": 0.2678749253136825,\n\
\ \"acc_norm_stderr\": 0.00441946998393918\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.1925925925925926,\n\
\ \"acc_stderr\": 0.03406542058502653,\n \"acc_norm\": 0.1925925925925926,\n\
\ \"acc_norm_stderr\": 0.03406542058502653\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17105263157894737,\n \"acc_stderr\": 0.030643607071677088,\n\
\ \"acc_norm\": 0.17105263157894737,\n \"acc_norm_stderr\": 0.030643607071677088\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.3,\n\
\ \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.20754716981132076,\n \"acc_stderr\": 0.02495991802891127,\n\
\ \"acc_norm\": 0.20754716981132076,\n \"acc_norm_stderr\": 0.02495991802891127\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.16,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.16,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.22,\n \"acc_stderr\": 0.0416333199893227,\n \"acc_norm\": 0.22,\n\
\ \"acc_norm_stderr\": 0.0416333199893227\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n\
\ \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n\
\ \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n\
\ \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.26382978723404255,\n \"acc_stderr\": 0.02880998985410297,\n\
\ \"acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.02880998985410297\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n\
\ \"acc_stderr\": 0.039994238792813365,\n \"acc_norm\": 0.23684210526315788,\n\
\ \"acc_norm_stderr\": 0.039994238792813365\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.23448275862068965,\n \"acc_stderr\": 0.035306258743465914,\n\
\ \"acc_norm\": 0.23448275862068965,\n \"acc_norm_stderr\": 0.035306258743465914\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.21428571428571427,\n \"acc_stderr\": 0.02113285918275444,\n \"\
acc_norm\": 0.21428571428571427,\n \"acc_norm_stderr\": 0.02113285918275444\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.30158730158730157,\n\
\ \"acc_stderr\": 0.04104947269903394,\n \"acc_norm\": 0.30158730158730157,\n\
\ \"acc_norm_stderr\": 0.04104947269903394\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.18387096774193548,\n \"acc_stderr\": 0.022037217340267836,\n \"\
acc_norm\": 0.18387096774193548,\n \"acc_norm_stderr\": 0.022037217340267836\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.19704433497536947,\n \"acc_stderr\": 0.027986724666736205,\n \"\
acc_norm\": 0.19704433497536947,\n \"acc_norm_stderr\": 0.027986724666736205\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.18181818181818182,\n \"acc_stderr\": 0.027479603010538797,\n \"\
acc_norm\": 0.18181818181818182,\n \"acc_norm_stderr\": 0.027479603010538797\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.028697873971860664,\n\
\ \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.028697873971860664\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2,\n \"acc_stderr\": 0.020280805062535722,\n \"acc_norm\"\
: 0.2,\n \"acc_norm_stderr\": 0.020280805062535722\n },\n \"harness|hendrycksTest-high_school_mathematics|5\"\
: {\n \"acc\": 0.21851851851851853,\n \"acc_stderr\": 0.02519575225182379,\n\
\ \"acc_norm\": 0.21851851851851853,\n \"acc_norm_stderr\": 0.02519575225182379\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.22268907563025211,\n \"acc_stderr\": 0.027025433498882392,\n\
\ \"acc_norm\": 0.22268907563025211,\n \"acc_norm_stderr\": 0.027025433498882392\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436776,\n \"\
acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436776\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.1834862385321101,\n \"acc_stderr\": 0.01659525971039931,\n \"\
acc_norm\": 0.1834862385321101,\n \"acc_norm_stderr\": 0.01659525971039931\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.1527777777777778,\n \"acc_stderr\": 0.024536326026134217,\n \"\
acc_norm\": 0.1527777777777778,\n \"acc_norm_stderr\": 0.024536326026134217\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.270042194092827,\n \"acc_stderr\": 0.028900721906293426,\n\
\ \"acc_norm\": 0.270042194092827,\n \"acc_norm_stderr\": 0.028900721906293426\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.30493273542600896,\n\
\ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.30493273542600896,\n\
\ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.037683359597287434,\n\
\ \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.037683359597287434\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794088,\n \"\
acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794088\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n\
\ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n\
\ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.2147239263803681,\n \"acc_stderr\": 0.03226219377286774,\n\
\ \"acc_norm\": 0.2147239263803681,\n \"acc_norm_stderr\": 0.03226219377286774\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n\
\ \"acc_stderr\": 0.04432804055291519,\n \"acc_norm\": 0.32142857142857145,\n\
\ \"acc_norm_stderr\": 0.04432804055291519\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n\
\ \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2948717948717949,\n\
\ \"acc_stderr\": 0.029872577708891148,\n \"acc_norm\": 0.2948717948717949,\n\
\ \"acc_norm_stderr\": 0.029872577708891148\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23627075351213284,\n\
\ \"acc_stderr\": 0.0151904737170375,\n \"acc_norm\": 0.23627075351213284,\n\
\ \"acc_norm_stderr\": 0.0151904737170375\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24566473988439305,\n \"acc_stderr\": 0.02317629820399201,\n\
\ \"acc_norm\": 0.24566473988439305,\n \"acc_norm_stderr\": 0.02317629820399201\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2223463687150838,\n\
\ \"acc_stderr\": 0.01390718920815688,\n \"acc_norm\": 0.2223463687150838,\n\
\ \"acc_norm_stderr\": 0.01390718920815688\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.21895424836601307,\n \"acc_stderr\": 0.02367908986180772,\n\
\ \"acc_norm\": 0.21895424836601307,\n \"acc_norm_stderr\": 0.02367908986180772\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.1864951768488746,\n\
\ \"acc_stderr\": 0.02212243977248077,\n \"acc_norm\": 0.1864951768488746,\n\
\ \"acc_norm_stderr\": 0.02212243977248077\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.023132376234543336,\n\
\ \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.023132376234543336\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.22695035460992907,\n \"acc_stderr\": 0.02498710636564297,\n \
\ \"acc_norm\": 0.22695035460992907,\n \"acc_norm_stderr\": 0.02498710636564297\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24511082138200782,\n\
\ \"acc_stderr\": 0.010986307870045517,\n \"acc_norm\": 0.24511082138200782,\n\
\ \"acc_norm_stderr\": 0.010986307870045517\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.26838235294117646,\n \"acc_stderr\": 0.0269174812243772,\n\
\ \"acc_norm\": 0.26838235294117646,\n \"acc_norm_stderr\": 0.0269174812243772\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.2549019607843137,\n \"acc_stderr\": 0.017630827375148383,\n \
\ \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.017630827375148383\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n\
\ \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n\
\ \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.18775510204081633,\n \"acc_stderr\": 0.02500025603954621,\n\
\ \"acc_norm\": 0.18775510204081633,\n \"acc_norm_stderr\": 0.02500025603954621\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n\
\ \"acc_stderr\": 0.03036049015401465,\n \"acc_norm\": 0.24378109452736318,\n\
\ \"acc_norm_stderr\": 0.03036049015401465\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.30120481927710846,\n\
\ \"acc_stderr\": 0.0357160923005348,\n \"acc_norm\": 0.30120481927710846,\n\
\ \"acc_norm_stderr\": 0.0357160923005348\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.3216374269005848,\n \"acc_stderr\": 0.03582529442573122,\n\
\ \"acc_norm\": 0.3216374269005848,\n \"acc_norm_stderr\": 0.03582529442573122\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.25091799265605874,\n\
\ \"mc1_stderr\": 0.015176985027707703,\n \"mc2\": 0.5030342289474727,\n\
\ \"mc2_stderr\": 0.015464982097707176\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.48539857932123126,\n \"acc_stderr\": 0.01404649238327584\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/Dans-DiscountModels/TinyMistral-v2-Test1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|arc:challenge|25_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|gsm8k|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hellaswag|10_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T02-38-49.773813.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-21T02-38-49.773813.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- '**/details_harness|winogrande|5_2024-01-21T02-38-49.773813.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-21T02-38-49.773813.parquet'
- config_name: results
data_files:
- split: 2024_01_21T02_38_49.773813
path:
- results_2024-01-21T02-38-49.773813.parquet
- split: latest
path:
- results_2024-01-21T02-38-49.773813.parquet
---
# Dataset Card for Evaluation run of Dans-DiscountModels/TinyMistral-v2-Test1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Dans-DiscountModels/TinyMistral-v2-Test1](https://huggingface.co/Dans-DiscountModels/TinyMistral-v2-Test1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Dans-DiscountModels__TinyMistral-v2-Test1",
"harness_winogrande_5",
	split="latest")
```
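Note that the timestamped split names (e.g. `2024_01_21T02_38_49.773813`) encode when the run happened; if you need to order several runs programmatically, the name can be parsed back into a `datetime`. A small illustration — the helper name here is ours, not part of the `datasets` API:

```python
from datetime import datetime

# Split names in this dataset look like "2024_01_21T02_38_49.773813";
# "latest" is an alias for the most recent one. Timestamped splits can
# be ordered by parsing the name back into a datetime.
SPLIT_FORMAT = "%Y_%m_%dT%H_%M_%S.%f"

def parse_split_timestamp(split_name: str) -> datetime:
    """Parse a timestamped split name into a datetime object."""
    return datetime.strptime(split_name, SPLIT_FORMAT)

run_time = parse_split_timestamp("2024_01_21T02_38_49.773813")
print(run_time.isoformat())  # 2024-01-21T02:38:49.773813
```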
## Latest results
These are the [latest results from run 2024-01-21T02:38:49.773813](https://huggingface.co/datasets/open-llm-leaderboard/details_Dans-DiscountModels__TinyMistral-v2-Test1/blob/main/results_2024-01-21T02-38-49.773813.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task's results can be found in its own configuration, under the "latest" split):
```python
{
"all": {
"acc": 0.2335310199962483,
"acc_stderr": 0.02999531007525961,
"acc_norm": 0.23385996059713224,
"acc_norm_stderr": 0.03078636978062643,
"mc1": 0.25091799265605874,
"mc1_stderr": 0.015176985027707703,
"mc2": 0.5030342289474727,
"mc2_stderr": 0.015464982097707176
},
"harness|arc:challenge|25": {
"acc": 0.18344709897610922,
"acc_stderr": 0.011310170179554543,
"acc_norm": 0.2150170648464164,
"acc_norm_stderr": 0.01200571763413361
},
"harness|hellaswag|10": {
"acc": 0.261700856403107,
"acc_stderr": 0.004386622589119065,
"acc_norm": 0.2678749253136825,
"acc_norm_stderr": 0.00441946998393918
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.1925925925925926,
"acc_stderr": 0.03406542058502653,
"acc_norm": 0.1925925925925926,
"acc_norm_stderr": 0.03406542058502653
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17105263157894737,
"acc_stderr": 0.030643607071677088,
"acc_norm": 0.17105263157894737,
"acc_norm_stderr": 0.030643607071677088
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.20754716981132076,
"acc_stderr": 0.02495991802891127,
"acc_norm": 0.20754716981132076,
"acc_norm_stderr": 0.02495991802891127
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.037455547914624555,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.037455547914624555
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.16,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.16,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.22,
"acc_stderr": 0.0416333199893227,
"acc_norm": 0.22,
"acc_norm_stderr": 0.0416333199893227
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.21965317919075145,
"acc_stderr": 0.031568093627031744,
"acc_norm": 0.21965317919075145,
"acc_norm_stderr": 0.031568093627031744
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.26382978723404255,
"acc_stderr": 0.02880998985410297,
"acc_norm": 0.26382978723404255,
"acc_norm_stderr": 0.02880998985410297
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813365,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813365
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.23448275862068965,
"acc_stderr": 0.035306258743465914,
"acc_norm": 0.23448275862068965,
"acc_norm_stderr": 0.035306258743465914
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.21428571428571427,
"acc_stderr": 0.02113285918275444,
"acc_norm": 0.21428571428571427,
"acc_norm_stderr": 0.02113285918275444
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.30158730158730157,
"acc_stderr": 0.04104947269903394,
"acc_norm": 0.30158730158730157,
"acc_norm_stderr": 0.04104947269903394
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.18387096774193548,
"acc_stderr": 0.022037217340267836,
"acc_norm": 0.18387096774193548,
"acc_norm_stderr": 0.022037217340267836
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.19704433497536947,
"acc_stderr": 0.027986724666736205,
"acc_norm": 0.19704433497536947,
"acc_norm_stderr": 0.027986724666736205
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.18181818181818182,
"acc_stderr": 0.027479603010538797,
"acc_norm": 0.18181818181818182,
"acc_norm_stderr": 0.027479603010538797
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.028697873971860664,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.028697873971860664
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2,
"acc_stderr": 0.020280805062535722,
"acc_norm": 0.2,
"acc_norm_stderr": 0.020280805062535722
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.21851851851851853,
"acc_stderr": 0.02519575225182379,
"acc_norm": 0.21851851851851853,
"acc_norm_stderr": 0.02519575225182379
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.22268907563025211,
"acc_stderr": 0.027025433498882392,
"acc_norm": 0.22268907563025211,
"acc_norm_stderr": 0.027025433498882392
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.1986754966887417,
"acc_stderr": 0.03257847384436776,
"acc_norm": 0.1986754966887417,
"acc_norm_stderr": 0.03257847384436776
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.1834862385321101,
"acc_stderr": 0.01659525971039931,
"acc_norm": 0.1834862385321101,
"acc_norm_stderr": 0.01659525971039931
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1527777777777778,
"acc_stderr": 0.024536326026134217,
"acc_norm": 0.1527777777777778,
"acc_norm_stderr": 0.024536326026134217
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.270042194092827,
"acc_stderr": 0.028900721906293426,
"acc_norm": 0.270042194092827,
"acc_norm_stderr": 0.028900721906293426
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.30493273542600896,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.30493273542600896,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.24427480916030533,
"acc_stderr": 0.037683359597287434,
"acc_norm": 0.24427480916030533,
"acc_norm_stderr": 0.037683359597287434
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.23140495867768596,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.23140495867768596,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2147239263803681,
"acc_stderr": 0.03226219377286774,
"acc_norm": 0.2147239263803681,
"acc_norm_stderr": 0.03226219377286774
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.32142857142857145,
"acc_stderr": 0.04432804055291519,
"acc_norm": 0.32142857142857145,
"acc_norm_stderr": 0.04432804055291519
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2948717948717949,
"acc_stderr": 0.029872577708891148,
"acc_norm": 0.2948717948717949,
"acc_norm_stderr": 0.029872577708891148
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23627075351213284,
"acc_stderr": 0.0151904737170375,
"acc_norm": 0.23627075351213284,
"acc_norm_stderr": 0.0151904737170375
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24566473988439305,
"acc_stderr": 0.02317629820399201,
"acc_norm": 0.24566473988439305,
"acc_norm_stderr": 0.02317629820399201
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2223463687150838,
"acc_stderr": 0.01390718920815688,
"acc_norm": 0.2223463687150838,
"acc_norm_stderr": 0.01390718920815688
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.21895424836601307,
"acc_stderr": 0.02367908986180772,
"acc_norm": 0.21895424836601307,
"acc_norm_stderr": 0.02367908986180772
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.1864951768488746,
"acc_stderr": 0.02212243977248077,
"acc_norm": 0.1864951768488746,
"acc_norm_stderr": 0.02212243977248077
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.023132376234543336,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.023132376234543336
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.22695035460992907,
"acc_stderr": 0.02498710636564297,
"acc_norm": 0.22695035460992907,
"acc_norm_stderr": 0.02498710636564297
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24511082138200782,
"acc_stderr": 0.010986307870045517,
"acc_norm": 0.24511082138200782,
"acc_norm_stderr": 0.010986307870045517
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.26838235294117646,
"acc_stderr": 0.0269174812243772,
"acc_norm": 0.26838235294117646,
"acc_norm_stderr": 0.0269174812243772
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.017630827375148383,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.017630827375148383
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.18775510204081633,
"acc_stderr": 0.02500025603954621,
"acc_norm": 0.18775510204081633,
"acc_norm_stderr": 0.02500025603954621
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.03036049015401465,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.03036049015401465
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.30120481927710846,
"acc_stderr": 0.0357160923005348,
"acc_norm": 0.30120481927710846,
"acc_norm_stderr": 0.0357160923005348
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3216374269005848,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.3216374269005848,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.25091799265605874,
"mc1_stderr": 0.015176985027707703,
"mc2": 0.5030342289474727,
"mc2_stderr": 0.015464982097707176
},
"harness|winogrande|5": {
"acc": 0.48539857932123126,
"acc_stderr": 0.01404649238327584
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
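The per-task `acc` values above can be aggregated manually — for instance, an unweighted mean over the hendrycksTest (MMLU) subtasks. The snippet below hardcodes a handful of values copied from the JSON above purely for illustration:

```python
# Unweighted mean accuracy over a subset of hendrycksTest subtasks,
# using acc values copied from the results JSON above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": 0.22,
    "harness|hendrycksTest-anatomy|5": 0.1925925925925926,
    "harness|hendrycksTest-astronomy|5": 0.17105263157894737,
    "harness|hendrycksTest-business_ethics|5": 0.3,
}

# Keep only the MMLU subtasks and average their accuracies.
mmlu_tasks = {k: v for k, v in results.items() if "hendrycksTest" in k}
mean_acc = sum(mmlu_tasks.values()) / len(mmlu_tasks)
print(f"mean acc over {len(mmlu_tasks)} subtasks: {mean_acc:.4f}")
```

Over the full 57 subtasks this unweighted average is what feeds the headline MMLU number on the leaderboard.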
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
huggingartists/galenskaparna-and-after-shave | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/galenskaparna-and-after-shave"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.252487 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://assets.genius.com/images/default_avatar_300.png?1629820244')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/galenskaparna-and-after-shave">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Galenskaparna & After Shave</div>
<a href="https://genius.com/artists/galenskaparna-and-after-shave">
<div style="text-align: center; font-size: 14px;">@galenskaparna-and-after-shave</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/galenskaparna-and-after-shave).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/galenskaparna-and-after-shave")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|   157 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/galenskaparna-and-after-shave")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
# split at the 90% and 97% marks to get a 90/7/3 train/validation/test partition
train, validation, test = np.split(
    datasets['train']['text'],
    [
        int(len(datasets['train']['text']) * train_percentage),
        int(len(datasets['train']['text']) * (train_percentage + validation_percentage)),
    ],
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author = {Aleksey Korshuk},
    year = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
llm-aes/gemini_hana_full_rate_explain | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: worker_id
dtype: string
- name: human_label
dtype: int64
- name: llm_label
dtype: int64
- name: generator_1
dtype: string
- name: generator_2
dtype: string
- name: premise
dtype: string
splits:
- name: train
num_bytes: 1133925
num_examples: 5280
download_size: 109556
dataset_size: 1133925
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ylacombe/dummy-optimus-prime-tts | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 29648939.0
num_examples: 21
download_size: 27769319
dataset_size: 29648939.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
prerana17/testing1 | ---
license: afl-3.0
---
|
liuyanchen1015/MULTI_VALUE_rte_regularized_reflexives_aave | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 14208
num_examples: 30
- name: train
num_bytes: 15666
num_examples: 34
download_size: 30156
dataset_size: 29874
---
# Dataset Card for "MULTI_VALUE_rte_regularized_reflexives_aave"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shujatoor/receipt_ocr-small | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 558494
num_examples: 2233
download_size: 238966
dataset_size: 558494
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mborkhat/autotrain-data-nlxe-ggzg-28qh | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 46221549
num_examples: 52002
- name: validation
num_bytes: 46221549
num_examples: 52002
download_size: 48492298
dataset_size: 92443098
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-nlxe-ggzg-28qh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dynabench/qa | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
---
# Dataset Card for Dynabench.QA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Dynabench.QA](https://dynabench.org/tasks/2#overall)
- **Paper:** [Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension](https://arxiv.org/abs/2002.00293)
- **Leaderboard:** [Dynabench QA Round 1 Leaderboard](https://dynabench.org/tasks/2#overall)
- **Point of Contact:** [Max Bartolo](max.bartolo@ucl.ac.uk)
### Dataset Summary
Dynabench.QA is an adversarially collected Reading Comprehension dataset spanning multiple rounds of data collection.
For round 1, it is identical to the [adversarialQA dataset](https://adversarialqa.github.io/), in which we created three new Reading Comprehension datasets using an adversarial model-in-the-loop.
We use three different models, BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019), in the annotation loop and construct three datasets: D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training examples, 1,000 validation examples, and 1,000 test examples.
The adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging. The three AdversarialQA round 1 datasets provide a training and evaluation resource for such methods.
### Supported Tasks and Leaderboards
`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists of selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). The [RoBERTa-Large](https://huggingface.co/roberta-large) model trained on all the data combined with [SQuAD](https://arxiv.org/abs/1606.05250) currently achieves 64.35% F1. This task has an active leaderboard, available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall), which ranks models by F1 score.
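For intuition, the word-overlap F1 metric can be sketched as below. Note this is a simplified illustration that tokenizes on whitespace only; the official SQuAD evaluation additionally normalizes case, punctuation, and articles before comparing tokens:

```python
from collections import Counter

def word_overlap_f1(prediction: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall between two answer strings."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # multiset intersection: each shared token counts at most min(pred, gold) times
    common = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `word_overlap_f1("the organic compounds", "organic compounds")` scores roughly 0.8: two of three predicted tokens match, and both gold tokens are covered.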
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "Oxygen",
"paragraphs": [
{
"context": "Among the most important classes of organic compounds that contain oxygen are (where \"R\" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms.",
"qas": [
{
"id": "22bbe104aa72aa9b511dd53237deb11afa14d6e3",
"question": "In addition to having oxygen, what do alcohols, ethers and esters have in common, according to the article?",
"answers": [
{
"answer_start": 36,
"text": "organic compounds"
}
]
},
{
"id": "4240a8e708c703796347a3702cf1463eed05584a",
"question": "What letter does the abbreviation for acid anhydrides both begin and end in?",
"answers": [
{
"answer_start": 244,
"text": "R"
}
]
},
{
"id": "0681a0a5ec852ec6920d6a30f7ef65dced493366",
"question": "Which of the organic compounds, in the article, contains nitrogen?",
"answers": [
{
"answer_start": 262,
"text": "amides"
}
]
},
{
"id": "2990efe1a56ccf81938fa5e18104f7d3803069fb",
"question": "Which of the important classes of organic compounds, in the article, has a number in its abbreviation?",
"answers": [
{
"answer_start": 262,
"text": "amides"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: the title of the Wikipedia page from which the context is sourced
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text
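As an illustration of these fields, the nested SQuAD-style structure can be flattened with a short helper. The snippet below uses a hypothetical abbreviated record in the same layout; the real files would be loaded from JSON but traversed identically:

```python
# abbreviated record in the SQuAD 1.1 layout described above (illustrative only)
dataset = {
    "data": [{
        "title": "Oxygen",
        "paragraphs": [{
            "context": "Among the most important classes of organic compounds",
            "qas": [{
                "id": "22bbe104",
                "question": "What do alcohols, ethers and esters have in common?",
                "answers": [{"answer_start": 36, "text": "organic compounds"}],
            }],
        }],
    }]
}

def iter_answers(dataset):
    """Yield one (id, question, context, answer_text, answer_start) tuple per answer."""
    for article in dataset["data"]:
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                for answer in qa["answers"]:
                    yield (qa["id"], qa["question"], paragraph["context"],
                           answer["text"], answer["answer_start"])

for _id, _question, context, text, start in iter_answers(dataset):
    # answer_start is the character index of the span's first character in the context
    assert context[start:start + len(text)] == text
```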
### Data Splits
For round 1, the dataset is composed of three different datasets constructed using different models in the loop: BiDAF, BERT-Large, and RoBERTa-Large. Each of these has 10,000 training examples, 1,000 validation examples, and 1,000 test examples for a total of 30,000/3,000/3,000 train/validation/test examples.
## Dataset Creation
### Curation Rationale
This dataset was collected to provide a more challenging and diverse Reading Comprehension dataset to state-of-the-art models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and human annotators on Mechanical Turk for the questions.
### Annotations
#### Annotation process
The dataset is collected through an adversarial human annotation process which pairs a human annotator and a reading comprehension model in an interactive setting. The human is presented with a passage for which they write a question and highlight the correct answer. The model then tries to answer the question, and, if it fails to answer correctly, the human wins. Otherwise, the human modifies or re-writes their question until they successfully fool the model.
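The model-in-the-loop protocol described above can be sketched in a few lines. Here `annotate` and `model` are hypothetical stand-ins for the human interface and the QA model; the actual Dynabench setup is interactive and not reproduced here:

```python
def collect_adversarial_example(annotate, model, max_attempts=10):
    """One annotation episode: the annotator keeps (re)writing the question
    until the model's prediction no longer matches the highlighted gold answer."""
    for attempt in range(max_attempts):
        question, gold_answer = annotate(attempt)
        if model(question) != gold_answer:
            return question, gold_answer  # model fooled: keep this pair
    return None  # model answered every variant correctly

# toy stand-ins for illustration only
def toy_model(question):
    return "amides" if "nitrogen" in question else "unknown"

def toy_annotate(attempt):
    drafts = [
        ("Which compound contains nitrogen?", "amides"),    # model answers correctly
        ("Which abbreviation begins and ends in R?", "R"),  # model fails here
    ]
    return drafts[attempt % len(drafts)]

example = collect_adversarial_example(toy_annotate, toy_model)
```

In this toy run the first draft is answered correctly, so only the rewritten second question is kept as an adversarial example.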
#### Who are the annotators?
The annotators are from Amazon Mechanical Turk, geographically restricted to the USA, UK and Canada, having previously successfully completed at least 1,000 HITs, and having a HIT approval rate greater than 98%. Crowdworkers undergo intensive training and qualification prior to annotation.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a test bed for questions which contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that the provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, annotated questions and answers, as well as algorithmic biases resulting from the adversarial annotation protocol.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp, during work carried out at University College London (UCL).
### Licensing Information
This dataset is distributed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
```
@article{bartolo2020beat,
author = {Bartolo, Max and Roberts, Alastair and Welbl, Johannes and Riedel, Sebastian and Stenetorp, Pontus},
title = {Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension},
journal = {Transactions of the Association for Computational Linguistics},
volume = {8},
number = {},
pages = {662-678},
year = {2020},
doi = {10.1162/tacl\_a\_00338},
URL = { https://doi.org/10.1162/tacl_a_00338 },
eprint = { https://doi.org/10.1162/tacl_a_00338 },
abstract = { Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F1 on questions that it cannot answer when trained on SQuAD—only marginally lower than when trained on data collected using RoBERTa itself (41.0F1). }
}
```
### Contributions
Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset. |
irds/mmarco_v2_zh_dev | ---
pretty_name: '`mmarco/v2/zh/dev`'
viewer: false
source_datasets: ['irds/mmarco_v2_zh']
task_categories:
- text-retrieval
---
# Dataset Card for `mmarco/v2/zh/dev`
The `mmarco/v2/zh/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/zh/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=101,093
- `qrels`: (relevance assessments); count=59,273
- For `docs`, use [`irds/mmarco_v2_zh`](https://huggingface.co/datasets/irds/mmarco_v2_zh)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/mmarco_v2_zh_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/mmarco_v2_zh_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Bonifacio2021MMarco,
title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
journal={arXiv:2108.13897}
}
```
|
kovuru/Accidents | ---
license: apache-2.0
---
|
HuggingFaceH4/h4-tests-format-dpo-dataset | ---
dataset_info:
features:
- name: system
dtype: string
- name: prompt
dtype: string
- name: chosen_response
dtype: string
- name: rejected_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 370
num_examples: 1
download_size: 5393
dataset_size: 370
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_openbmb__UltraLM-13b | ---
pretty_name: Evaluation run of openbmb/UltraLM-13b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [openbmb/UltraLM-13b](https://huggingface.co/openbmb/UltraLM-13b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 3 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openbmb__UltraLM-13b\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T13:31:34.076061](https://huggingface.co/datasets/open-llm-leaderboard/details_openbmb__UltraLM-13b/blob/main/results_2023-12-02T13-31-34.076061.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"\
acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \
\ \"acc_stderr\": 0.0\n }\n}\n```"
repo_url: https://huggingface.co/openbmb/UltraLM-13b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|arc:challenge|25_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_28T22_40_25.196177
path:
- '**/details_harness|drop|3_2023-10-28T22-40-25.196177.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-28T22-40-25.196177.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_28T22_40_25.196177
path:
- '**/details_harness|gsm8k|5_2023-10-28T22-40-25.196177.parquet'
- split: 2023_12_02T13_31_34.076061
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-31-34.076061.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-31-34.076061.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hellaswag|10_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T00-32-52.750601.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T00-32-52.750601.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T00-32-52.750601.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_28T22_40_25.196177
path:
- '**/details_harness|winogrande|5_2023-10-28T22-40-25.196177.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-28T22-40-25.196177.parquet'
- config_name: results
data_files:
- split: 2023_10_04T00_32_52.750601
path:
- results_2023-10-04T00-32-52.750601.parquet
- split: 2023_10_28T22_40_25.196177
path:
- results_2023-10-28T22-40-25.196177.parquet
- split: 2023_12_02T13_31_34.076061
path:
- results_2023-12-02T13-31-34.076061.parquet
- split: latest
path:
- results_2023-12-02T13-31-34.076061.parquet
---
# Dataset Card for Evaluation run of openbmb/UltraLM-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/openbmb/UltraLM-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [openbmb/UltraLM-13b](https://huggingface.co/openbmb/UltraLM-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openbmb__UltraLM-13b",
"harness_gsm8k_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-12-02T13:31:34.076061](https://huggingface.co/datasets/open-llm-leaderboard/details_openbmb__UltraLM-13b/blob/main/results_2023-12-02T13-31-34.076061.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
EleutherAI/fake-svhn | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
splits:
- name: train
num_bytes: 146232532.875
num_examples: 73257
- name: test
num_bytes: 51384741.0
num_examples: 26032
download_size: 208365744
dataset_size: 197617273.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
fruits-music/fruits-music | ---
extra_gated_prompt: "Please read LICENSE.md before downloading this corpus."
extra_gated_fields:
Country: country
Affiliation: text
I acknowledge that I must not use this corpus for appreciation or entertainment: checkbox
I acknowledge that I must not use this corpus for/with generative AIs: checkbox
I acknowledge that I must not associate the data in this corpus to the real idol groups and idols: checkbox
I agree ALL the statements in the license text: checkbox
extra_gated_button_content: "Acknowledge license"
license: other
license_name: fruits-music-license
license_link: LICENSE.md
language:
- ja
tags:
- music
- idol
- singing voice
- diarization
viewer: false
---
# 🍈 🍒 🍇 FruitsMusic 🍉 🍊 🍓
Corpus of **F**ully **R**eal Pop**u**lar **I**dol-group Songs from You**T**ube Video**s** for **Mus**ic **I**nformation Pro**c**essing.
---
# FruitsMusic: A Corpus of Idol-Group Songs for Singing Voice Information Processing
This corpus consists of the YouTube video IDs of music videos by real idol groups, together with annotations describing which singers sing what, and when, within each song.
## File Structure
```
fruits-music
├ singers.csv: list of singers
├ songs.csv: list of songs
├ json: annotation files
│ ├ AUm01.json
│ ├ AUm02.json
│ └ …
├ rttm: RTTM files
│ ├ AUm01.rttm
│ ├ AUm02.rttm
│ └ …
├ lyrics: lyrics text files
│ ├ AUm01.txt
│ ├ AUm02.txt
│ └ …
├ split_a.txt: list of song IDs in Subset A
└ split_b.txt: list of song IDs in Subset B
```
### Singer List
```csv singers.csv
id,gender
AUs01,f
AUs02,f
AUs03,f
```
Each ID consists of a two-letter idol-group ID, the letter `s`, and a two-digit number.
The `gender` field is currently fixed to `f`.
The same idol never has more than one ID.
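A minimal sketch of validating this ID format. The pattern below is inferred from the description and the examples in the corpus (uppercase two-letter group IDs such as `AU` or `DR`); it is not an official utility shipped with FruitsMusic.

```python
import re

# Inferred from the description: two uppercase letters (group ID),
# the letter 's', then two digits, e.g. "AUs01".
SINGER_ID = re.compile(r"^[A-Z]{2}s\d{2}$")

print(bool(SINGER_ID.match("AUs01")))  # → True
print(bool(SINGER_ID.match("AUm01")))  # → False (that is a song ID, not a singer ID)
```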
### Song List
```csv songs.csv
id,youtube_id,type,number_of_singers
AUm01,xxxxxxxxxxx,dance_practice,7
AUm02,xxxxxxxxxxx,middle_music_video,7
```
Each ID consists of a two-letter idol-group ID, the letter `m`, and a two-digit number.
The `type` field is one of the following three values:
- `music_video`: a standard music video.
- `middle_music_video`: a music video in a live-footage or similar style.
- `dance_practice`: a dance-practice video (footage of the dance filmed in a studio or similar setting).
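As a quick sketch of filtering this list by type (the rows below mirror the `songs.csv` sample above with placeholder YouTube IDs; the helper name is ours, not part of the corpus):

```python
import csv
import io

# Illustrative rows mirroring the songs.csv sample above (YouTube IDs are placeholders).
SONGS_CSV = """id,youtube_id,type,number_of_singers
AUm01,xxxxxxxxxxx,dance_practice,7
AUm02,xxxxxxxxxxx,middle_music_video,7
"""

def songs_by_type(csv_text, song_type):
    """Return the rows of songs.csv whose `type` column matches `song_type`."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["type"] == song_type]

practice = songs_by_type(SONGS_CSV, "dance_practice")
print([row["id"] for row in practice])  # → ['AUm01']
```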
### JSON Annotation Files
```json
{
"id": "DRm03",
"youtubeId": "xxxxxxxxxxx",
"type": "dance_practice",
"singerIds": [
"DRs01",
"DRs02",
"DRs03",
"DRs04",
"DRs05",
"DRs06",
"DRs07"
],
"title": "Title",
"songStartsAt": 34779,
"duration": 288368,
"states": [
{
"start": 37727,
"end": 46745,
"singers": [
5
],
"lyrics": "Lyrics",
"realLyrics": null
},
{
"start": 46745,
"end": 53175,
"singers": [
0,
1,
2,
3,
4,
5,
6
],
"lyrics": "Lyrics",
"realLyrics": null
}
]
}
```
- `songStartsAt`: the time at which the song starts within the video (in milliseconds)
- `duration`: the length of the song (in milliseconds)
- `states`: information about the singing states
  - `start`: start time of the singing segment (relative to the video, in milliseconds)
  - `end`: end time of the singing segment (relative to the video, in milliseconds)
  - `singers`: list of singer indices
  - `lyrics`: the lyrics
  - `realLyrics`: the lyrics as actually sung
    - `null` when the sung lyrics are identical to the original lyrics
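These fields can be combined to compute, for example, how long each singer sings in a song. A minimal sketch (the dict below is a shortened, illustrative version of the JSON example above; the helper name is ours):

```python
# Shortened, illustrative annotation mirroring the JSON structure above.
annotation = {
    "singerIds": ["DRs01", "DRs02", "DRs03", "DRs04", "DRs05", "DRs06", "DRs07"],
    "songStartsAt": 34779,
    "states": [
        {"start": 37727, "end": 46745, "singers": [5]},
        {"start": 46745, "end": 53175, "singers": [0, 1, 2, 3, 4, 5, 6]},
    ],
}

def singing_time_ms(annotation):
    """Total singing time per singer ID, in milliseconds."""
    totals = {singer_id: 0 for singer_id in annotation["singerIds"]}
    for state in annotation["states"]:
        duration = state["end"] - state["start"]
        # `singers` holds indices into `singerIds`.
        for index in state["singers"]:
            totals[annotation["singerIds"][index]] += duration
    return totals

totals = singing_time_ms(annotation)
print(totals["DRs06"])  # index 5 sings in both states: 9018 + 6430 = 15448
```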
### RTTM Files
These annotation files are provided for evaluating diarization.
Times are relative to the trimmed audio.
```
SPEAKER DRm03 1 2.948 15.448 <NA> <NA> DRs06 <NA> <NA>
SPEAKER DRm03 1 11.966 6.43 <NA> <NA> DRs01 <NA> <NA>
SPEAKER DRm03 1 11.966 6.43 <NA> <NA> DRs02 <NA> <NA>
SPEAKER DRm03 1 11.966 6.43 <NA> <NA> DRs03 <NA> <NA>
SPEAKER DRm03 1 11.966 6.43 <NA> <NA> DRs04 <NA> <NA>
SPEAKER DRm03 1 11.966 6.43 <NA> <NA> DRs05 <NA> <NA>
SPEAKER DRm03 1 11.966 6.43 <NA> <NA> DRs07 <NA> <NA>
```
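RTTM is a plain-text format: each `SPEAKER` line carries the recording ID, channel, onset, duration, and speaker label. A minimal parser sketch (the lines below are taken from the example above; the function name is ours):

```python
# Two lines from the RTTM example above.
RTTM_TEXT = """SPEAKER DRm03 1 2.948 15.448 <NA> <NA> DRs06 <NA> <NA>
SPEAKER DRm03 1 11.966 6.43 <NA> <NA> DRs01 <NA> <NA>
"""

def parse_rttm(text):
    """Parse SPEAKER lines into (speaker, onset_seconds, duration_seconds) tuples."""
    segments = []
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0] == "SPEAKER":
            # Field layout: SPEAKER <file> <channel> <onset> <duration> <NA> <NA> <speaker> <NA> <NA>
            segments.append((fields[7], float(fields[3]), float(fields[4])))
    return segments

segments = parse_rttm(RTTM_TEXT)
print(segments[0])  # → ('DRs06', 2.948, 15.448)
```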
### Lyrics Text Files
To support evaluation of tasks such as lyrics recognition, we also include lyrics files converted from the JSON annotations and corrected by hand.
### Subset Definition Files
FruitsMusic is split into Subset A and Subset B.
The contents of each subset are listed in `split_a.txt` and `split_b.txt`.
## License and Terms of Use
Before using this corpus, be sure to read the [license text](LICENSE.md).
## Citation
- FruitsMusic (https://huggingface.co/datasets/fruits-music/fruits-music)
- Hitoshi Suda, Tomohiko Nakamura, Satoru Fukayama, and Jun Ogata. FruitsMusic: An Idol-Group Song Corpus for Music Information Processing. IPSJ SIG Technical Report (MUS), 2024-MUS-139 (13), pp. 1–10, 2024.
## Changelog
- 2024/03: v1.1.2
  - Fixed the lyrics file for ZXm01
- 2024/03: v1.1.1
  - Added the lyrics text files
- 2024/03: v1.1.0
  - Fixed annotation errors in several songs
  - Added song VYm03 to Subset A
- 2024/01: v1.0.0
|
AdapterOcean/langchain-standardized_embedded | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 8041013
num_examples: 993
download_size: 3773821
dataset_size: 8041013
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "langchain-standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marcoyang/pruned_transducer_stateless6_hubert_xtralarge_ll60k_finetune_ls960 | ---
license: apache-2.0
---
|
songlab/clinvar | ---
license: mit
tags:
- dna
- variant-effect-prediction
- biology
- genomics
---
# ClinVar variants
For more information check out our [paper](https://doi.org/10.1101/2023.10.10.561776) and [repository](https://github.com/songlab-cal/gpn).
## Usage
* Pandas
```python
import pandas as pd
df = pd.read_parquet("hf://datasets/songlab/clinvar/test.parquet")
```
* Polars
```python
import polars as pl
df = pl.read_parquet("https://huggingface.co/datasets/songlab/clinvar/resolve/main/test.parquet")
```
* Datasets
```python
from datasets import load_dataset
dataset = load_dataset("songlab/clinvar", split="test")
``` |
Jianshu001/Voice_test1.0 | ---
license: mit
---
|
AdapterOcean/data-standardized_unified | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 272559159
num_examples: 129062
download_size: 0
dataset_size: 272559159
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data-standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Zone369/Ai | ---
license: artistic-2.0
---
|
alpayariyak/IAM_Sentences | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1053121464.077
num_examples: 5663
download_size: 1128818107
dataset_size: 1053121464.077
---
# IAM Sentences
This dataset contains all sentences from the IAM Handwriting Database, with each sentence provided as a single combined image rather than as separate line images. |
paoloitaliani/ace_attorney | ---
dataset_info:
- config_name: all
features:
- name: document
dtype: string
- name: qa_pair
dtype: string
- name: subset
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 4066829
num_examples: 3854
- name: validation
num_bytes: 514282
num_examples: 481
- name: test
num_bytes: 509760
num_examples: 483
download_size: 2508666
dataset_size: 5090871
- config_name: multilex
features:
- name: id
dtype: string
- name: document
dtype: string
- name: qa_pair
dtype: string
splits:
- name: train
num_bytes: 2510781
num_examples: 2235
- name: validation
num_bytes: 313336
num_examples: 280
- name: test
num_bytes: 314132
num_examples: 279
download_size: 1553363
dataset_size: 3138249
- config_name: output_few_shots_task_desk
features:
- name: document
dtype: string
- name: qa_pair
dtype: string
splits:
- name: train
num_bytes: 80571
num_examples: 80
- name: validation
num_bytes: 8287
num_examples: 10
- name: test
num_bytes: 9032
num_examples: 10
download_size: 73428
dataset_size: 97890
- config_name: output_fewshots
features:
- name: document
dtype: string
- name: qa_pair
dtype: string
splits:
- name: train
num_bytes: 78734
num_examples: 80
- name: validation
num_bytes: 7509
num_examples: 10
- name: test
num_bytes: 8889
num_examples: 10
download_size: 71778
dataset_size: 95132
- config_name: output_zero_shot_llama_prompt
features:
- name: document
dtype: string
- name: qa_pair
dtype: string
splits:
- name: train
num_bytes: 80072
num_examples: 80
- name: validation
num_bytes: 7291
num_examples: 10
- name: test
num_bytes: 9572
num_examples: 10
download_size: 75797
dataset_size: 96935
- config_name: output_zero_shot_task_desk
features:
- name: document
dtype: string
- name: qa_pair
dtype: string
splits:
- name: train
num_bytes: 83927
num_examples: 80
- name: validation
num_bytes: 7766
num_examples: 10
- name: test
num_bytes: 9107
num_examples: 10
download_size: 76564
dataset_size: 100800
- config_name: policies
features:
- name: document
dtype: string
- name: qa_pair
dtype: string
splits:
- name: train
num_bytes: 1502842
num_examples: 1619
- name: validation
num_bytes: 189755
num_examples: 203
- name: test
num_bytes: 193509
num_examples: 202
download_size: 972367
dataset_size: 1886106
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- split: validation
path: all/validation-*
- split: test
path: all/test-*
- config_name: multilex
data_files:
- split: train
path: multilex/train-*
- split: validation
path: multilex/validation-*
- split: test
path: multilex/test-*
- config_name: output_few_shots_task_desk
data_files:
- split: train
path: output_few_shots_task_desk/train-*
- split: validation
path: output_few_shots_task_desk/validation-*
- split: test
path: output_few_shots_task_desk/test-*
- config_name: output_fewshots
data_files:
- split: train
path: output_fewshots/train-*
- split: validation
path: output_fewshots/validation-*
- split: test
path: output_fewshots/test-*
- config_name: output_zero_shot_llama_prompt
data_files:
- split: train
path: output_zero_shot_llama_prompt/train-*
- split: validation
path: output_zero_shot_llama_prompt/validation-*
- split: test
path: output_zero_shot_llama_prompt/test-*
- config_name: output_zero_shot_task_desk
data_files:
- split: train
path: output_zero_shot_task_desk/train-*
- split: validation
path: output_zero_shot_task_desk/validation-*
- split: test
path: output_zero_shot_task_desk/test-*
- config_name: policies
data_files:
- split: train
path: policies/train-*
- split: validation
path: policies/validation-*
- split: test
path: policies/test-*
---
# Dataset Card for "ace_attorney"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CronosGhost/code-reranking | ---
license: mit
dataset_info:
- config_name: CodeLangQueries
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: train
num_bytes: 23150542.5
num_examples: 9900
- name: test
num_bytes: 2572282.5
num_examples: 1100
download_size: 10367838
dataset_size: 25722825.0
- config_name: CodeLangQueries-MachineGeneratedDocs
features:
- name: query
dtype: string
- name: positive
dtype: string
- name: negative
sequence: string
splits:
- name: train
num_bytes: 373862.7
num_examples: 495
- name: test
num_bytes: 41540.3
num_examples: 55
download_size: 166214
dataset_size: 415403.0
- config_name: NaturalLangQueries
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: train
num_bytes: 62984485.8
num_examples: 9900
- name: test
num_bytes: 6998276.2
num_examples: 1100
download_size: 29469643
dataset_size: 69982762.0
- config_name: default
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: train
num_bytes: 23176584.9
num_examples: 9900
- name: test
num_bytes: 2575176.1
num_examples: 1100
download_size: 10376964
dataset_size: 25751761.0
configs:
- config_name: CodeLangQueries
data_files:
- split: train
path: CodeLangQueries/train-*
- split: test
path: CodeLangQueries/test-*
- config_name: CodeLangQueries-MachineGeneratedDocs
data_files:
- split: train
path: CodeLangQueries-MachineGeneratedDocs/train-*
- split: test
path: CodeLangQueries-MachineGeneratedDocs/test-*
- config_name: NaturalLangQueries
data_files:
- split: train
path: NaturalLangQueries/train-*
- split: test
path: NaturalLangQueries/test-*
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Nexdata/Japanese_Conversational_Speech_by_Mobile_Phone | ---
task_categories:
- conversational
language:
- ja
---
# Dataset Card for Nexdata/Japanese_Conversational_Speech_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1166?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
About 1,000 speakers participated in the recording, communicating face-to-face in a natural way. They held free discussions on a number of given topics across a wide range of fields; the speech is natural and fluent, consistent with real dialogue scenes. Transcripts were produced manually, with high accuracy.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1166?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Japanese
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License
### Citation Information
[More Information Needed]
### Contributions |
mozilla-foundation/common_voice_16_1 | ---
pretty_name: Common Voice Corpus 16.1
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ab
- af
- am
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- dyu
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gn
- ha
- he
- hi
- hsb
- hu
- hy
- ia
- id
- ig
- is
- it
- ja
- ka
- kab
- kk
- kmr
- ko
- ky
- lg
- lij
- lo
- lt
- ltg
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nan
- ne
- nhi
- nl
- nn
- oc
- or
- os
- pa
- pl
- ps
- pt
- quy
- rm
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sq
- sr
- sv
- sw
- ta
- te
- th
- ti
- tig
- tk
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yi
- yo
- yue
- zgh
- zh
language_bcp47:
- zh-CN
- zh-HK
- zh-TW
- sv-SE
- rm-sursilv
- rm-vallader
- pa-IN
- nn-NO
- ne-NP
- nan-tw
- hy-AM
- ga-IE
- fy-NL
license:
- cc0-1.0
multilinguality:
- multilingual
paperswithcode_id: common-voice
extra_gated_prompt: "By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."
---
# Dataset Card for Common Voice Corpus 16
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of unique MP3 files and corresponding text files.
Many of the 30,328 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 19,673 validated hours in 120 languages, and more voices and languages are always being added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Languages
```
Abkhaz, Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hebrew, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latgalian, Latvian, Ligurian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Ossetian, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Telugu, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Western Sierra Puebla Nahuatl, Yiddish, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset
cv_16 = load_dataset("mozilla-foundation/common_voice_16_1", "hi", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_16 = load_dataset("mozilla-foundation/common_voice_16_1", "hi", split="train", streaming=True)
print(next(iter(cv_16)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_16 = load_dataset("mozilla-foundation/common_voice_16_1", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_16), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_16, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_16 = load_dataset("mozilla-foundation/common_voice_16_1", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_16, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that it is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits all contain data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_16_1", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
UthmanAyo/Trainingtest | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 338808
num_examples: 200
download_size: 201257
dataset_size: 338808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_wnli_be_perfect | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 2647
num_examples: 12
- name: test
num_bytes: 14221
num_examples: 46
- name: train
num_bytes: 21327
num_examples: 98
download_size: 20308
dataset_size: 38195
---
# Dataset Card for "MULTI_VALUE_wnli_be_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SEACrowd/liputan6 | ---
tags:
- summarization
language:
- ind
---
# liputan6
A large-scale Indonesian summarization dataset consisting of harvested articles from Liputan6.com, an online news portal, resulting in 215,827 document-summary pairs.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{koto2020liputan6,
title={Liputan6: A Large-scale Indonesian Dataset for Text Summarization},
author={Koto, Fajri and Lau, Jey Han and Baldwin, Timothy},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
pages={598--608},
year={2020}
}
```
## License
CC-BY-SA 4.0
## Homepage
[https://github.com/fajri91/sum_liputan6](https://github.com/fajri91/sum_liputan6)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
Jumtra/dolly_oast_jglue_ja | ---
license: cc-by-sa-4.0
---
This dataset is licensed under CC BY-SA 4.0
Last Update : 2023-05-17
This dataset was created by merging the following data:
databricks-dolly-15k-ja (CC BY 3.0)
https://github.com/kunishou/databricks-dolly-15k-ja
oasst1-ja-89k Repository (Apache 2.0)
https://github.com/kunishou/oasst1-89k-ja
JGLUE-JSQuAD (CC BY 4.0)
https://github.com/yahoojapan/JGLUE
|
Kaue123456/MajorAntonioMoraesPauloGoulart | ---
license: openrail
---
|
mk10/Anna | ---
license: creativeml-openrail-m
---
|
librarian-bots/model_card_dataset_mentions | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dataset_mention
'1': no_dataset_mention
splits:
- name: train
num_bytes: 58112
num_examples: 297
download_size: 19321
dataset_size: 58112
license: mit
task_categories:
- text-classification
language:
- en
tags:
- model cards
- metadata
pretty_name: Model Card Dataset Mentions
size_categories:
- n<1K
---
# Dataset Card for Model Card Dataset Mentions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Yuhthe/phoner_seq2seq | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: words
dtype: string
- name: tags
dtype: string
splits:
- name: train
num_bytes: 2534372
num_examples: 5027
- name: val
num_bytes: 1140004
num_examples: 2000
- name: test
num_bytes: 1742126
num_examples: 3000
download_size: 2188554
dataset_size: 5416502
---
# Dataset Card for "phoner_seq2seq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ramgus/musicdiffuser | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1244122934.912
num_examples: 9929
download_size: 1183249933
dataset_size: 1244122934.912
---
# Dataset Card for "musicdiffuser"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xiazeyu/DT_SegNet | ---
dataset_info:
features:
- name: id
dtype: int8
- name: original_name
dtype: string
- name: image
dtype: image
- name: det_annotation
sequence:
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': precipitate
- name: seg_annotation
dtype: image
- name: raw_seg_annotation
dtype: string
splits:
- name: train
num_bytes: 7130619
num_examples: 15
- name: validation
num_bytes: 2195097
num_examples: 4
- name: test
num_bytes: 1956008
num_examples: 5
download_size: 10468587
dataset_size: 11281724
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
license: cc
task_categories:
- image-segmentation
- feature-extraction
language:
- en
tags:
- code
- physics
pretty_name: DT-SegNet
size_categories:
- n<1K
---
# DT_SegNet Dataset
[](https://doi.org/10.1039/D3CP00402C)

[](http://creativecommons.org/licenses/by-nc/3.0/)
[](https://doi.org/10.5281/zenodo.7510032)
[](./LICENSE)
## About The Project
The performance of advanced materials for extreme environments is underpinned by their microstructure, such as the size and distribution of nano- to micro-sized reinforcing phase(s). Chromium-based superalloys are a recently proposed alternative to conventional face-centred-cubic superalloys for high-temperature applications, e.g., Concentrated Solar Power. Their development requires the determination of precipitate volume fraction and size distribution using Electron Microscopy (EM), as these properties are crucial for the thermal stability and mechanical properties of chromium superalloys. Traditional approaches to EM image processing utilise filtering with a fixed contrast threshold, leading to weak robustness to background noise and poor generalisability to different materials. It also requires an enormous amount of time for manual object measurements. Efficient and accurate object detection and segmentation are therefore highly desired to accelerate the development of novel materials like chromium-based superalloys. To address these bottlenecks, based on YOLOv5 and SegFormer structures, this study proposes an end-to-end, two-stage deep learning scheme, DT-SegNet, to perform object detection and segmentation for EM images. The proposed approach can thus benefit from the training efficiency of Convolutional Neural Networks at the detection stage (i.e., a small number of training images required) and the accuracy of the Vision Transformer at the segmentation stage. Extensive numerical experiments demonstrate that the proposed DT-SegNet significantly outperforms the state-of-the-art segmentation tools offered by Weka and ilastik regarding a large number of metrics, including accuracy, precision, recall and F1-score. This model will be a meaningful tool for accelerating alloy development and microstructure examination.
## Dataset
All data for this project are stored in the `data/` folder.
Apache Parquet is used for a more efficient storage format.
The dataset is split into three sets: `test`, `train`, and `validation`.
Detection annotation format follows the YOLO format, and segmentation annotation is stored as a PNG image.
The category label is `0` for precipitate.
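Since the detection annotations follow the YOLO convention, each `bbox` is presumably stored as a normalized `(centre_x, centre_y, width, height)` tuple relative to the image size. A minimal sketch of converting such a box to pixel corner coordinates (the helper name is illustrative, not part of the dataset tooling):

```python
def yolo_to_corners(bbox, img_w, img_h):
    """Convert a normalized YOLO box (cx, cy, w, h) to pixel (x_min, y_min, x_max, y_max)."""
    cx, cy, w, h = bbox
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    x_max = (cx + w / 2) * img_w
    y_max = (cy + h / 2) * img_h
    return x_min, y_min, x_max, y_max
```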
## Reference
```bibtex
@article{xia2023Accurate,
author = {Zeyu Xia and Kan Ma and Sibo Cheng and Thomas Blackburn and Ziling Peng and Kewei Zhu and Weihang Zhang and Dunhui Xiao and Alexander J Knowles and Rossella Arcucci},
copyright = {CC BY-NC 3.0},
doi = {10.1039/d3cp00402c},
issn = {1463-9076},
journal = {Physical Chemistry Chemical Physics},
keywords = {},
language = {English},
month = {6},
number = {23},
pages = {15970--15987},
pmid = {37265373},
publisher = {Royal Society of Chemistry (RSC)},
title = {Accurate Identification and Measurement of the Precipitate Area by Two-Stage Deep Neural Networks in Novel Chromium-Based Alloy},
url = {https://pubs.rsc.org/en/content/articlelanding/2023/CP/D3CP00402C},
volume = {25},
year = {2023}
}
```
## Contact
Zeyu Xia - [zeyu.xia@connect.qut.edu.au](mailto:zeyu.xia@connect.qut.edu.au)
Kan Ma - [arnaud.masysu@gmail.com](mailto:arnaud.masysu@gmail.com)
Sibo Cheng - [sibo.cheng@imperial.ac.uk](mailto:sibo.cheng@imperial.ac.uk) |
datajuicer/redpajama-cc-2023-06-refined-by-data-juicer | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- data-juicer
- pretraining
size_categories:
- 10M<n<100M
---
# RedPajama -- CommonCrawl-2023-06 (refined by Data-Juicer)
A refined version of the CommonCrawl-2023-06 dataset in RedPajama, produced by [Data-Juicer](https://github.com/alibaba/data-juicer) by removing some "bad" samples from the original dataset to make it higher quality.
This dataset is usually used to pretrain a Large Language Model.
**Notice**: Here is a small subset for previewing. The whole dataset is available [here](https://dail-wlcb.oss-cn-wulanchabu.aliyuncs.com/LLM_data/our_refined_datasets/pretraining/redpajama-cc-refine-results/redpajama-cc-2023-06-refine-result.jsonl) (About 310GB).
## Dataset Information
- Number of samples: 50,643,699 (Keep ~45.46% from the original dataset)
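Many thresholds in the recipe below are marked `3sigma`, i.e. cut-offs at three standard deviations from the mean of a per-sample statistic over the corpus. A minimal sketch of how such an upper cut-off can be computed (illustrative only, not Data-Juicer's actual implementation):

```python
import statistics

def three_sigma_upper(values):
    """Upper cut-off at mean + 3 * (population) standard deviation of a per-sample statistic."""
    mean = statistics.mean(values)
    std = statistics.pstdev(values)
    return mean + 3 * std
```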
## Refining Recipe
```yaml
# global parameters
project_name: 'Data-Juicer-recipes-cc-2013-06'
dataset_path: '/path/to/your/dataset' # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'
np: 50 # number of subprocesses to process your dataset
open_tracer: true
# process schedule
# a list of several process operators with their arguments
process:
- document_simhash_deduplicator:
tokenization: space
window_size: 6
lowercase: true
ignore_pattern: '\p{P}'
num_blocks: 6
hamming_distance: 4
- clean_email_mapper:
- clean_links_mapper:
- fix_unicode_mapper:
- punctuation_normalization_mapper:
- whitespace_normalization_mapper:
- alphanumeric_filter:
tokenization: false
min_ratio: 0.7508 # 3sigma
max_ratio: 0.8591 # 3sigma -- 1036821
- average_line_length_filter: # for code
max_len: 1500 # < 3sigma -- 395868
- character_repetition_filter:
rep_len: 10
max_ratio: 0.3 # > 3sigma -- 195026
- flagged_words_filter:
lang: en
tokenization: true
max_ratio: 0.0015 # 3sigma -- 287896
  - language_id_score_filter: # filter out samples with low language identification confidence
min_score: 0.793 # 3sigma -- 2173246
- maximum_line_length_filter: # for code
max_len: 5000 # < 3sigma -- 797111
- perplexity_filter:
lang: en
max_ppl: 5000 # 3sigma -- 942162
- special_characters_filter:
min_ratio: 0.15 # > 3sigma
max_ratio: 0.35 # > 3sigma -- 1155090
- text_length_filter:
max_len: 58187 # 3sigma -- 1165902
- words_num_filter:
lang: en
tokenization: true
min_num: 20
max_num: 11529 # 3sigma -- 1185363
- word_repetition_filter:
lang: en
tokenization: true
rep_len: 10
max_ratio: 0.2962 # 3sigma -- 2407282
``` |
visheratin/laion-coco-nllb | ---
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
license: cc-by-nc-4.0
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
- translation
pretty_name: LAION-COCO translated to 200 languages
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: eng_caption
dtype: string
- name: captions
sequence:
sequence: string
- name: score
dtype: float64
splits:
- name: test
num_bytes: 271360114
num_examples: 14906
- name: train
num_bytes: 15986931307
num_examples: 878978
download_size: 10358151216
dataset_size: 16258291421
language_details: ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab,
aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng,
  ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn, bel_Cyrl,
bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn,
bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn,
dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn,
est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn,
fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr,
hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn,
ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn,
kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn,
kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn,
kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn,
lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn,
mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn,
mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn,
nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya,
pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn,
ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr,
sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn,
spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn,
szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi,
taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn,
twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn,
vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans,
zho_Hant, zul_Latn
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
# LAION COCO translated into 200 languages
This dataset contains the samples of the [LAION-COCO](https://huggingface.co/datasets/laion/laion-coco) dataset translated to 200 languages using
the largest [NLLB-200 model](https://huggingface.co/facebook/nllb-200-3.3B) (3.3B parameters).
## Fields description
1. `id` - unique ID of the image.
2. `url` - original URL of the image from the LAION-COCO dataset.
3. `eng_caption` - original English caption from the LAION-COCO dataset.
4. `captions` - a list of captions translated to the languages from the Flores 200 dataset. Every item in the list is a list where the first element is a BCP-47 language code, and the second one is a caption in this language. The list of all language codes for the Flores 200 dataset can be found [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200).
5. `score` - aesthetic score generated using the [LAION aesthetic predictor](https://github.com/christophschuhmann/improved-aesthetic-predictor/). The images in the dataset have a score of 4.5+.
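Since `captions` stores `[language_code, caption]` pairs, looking up a particular translation is a small helper. A minimal sketch, assuming the pair structure described above (the sample below is illustrative, not taken from the dataset):

```python
def caption_for(sample, lang_code):
    """Return the caption for a given FLORES-200 language code, or None."""
    for code, text in sample["captions"]:
        if code == lang_code:
            return text
    return None

# Illustrative sample mirroring the documented fields.
sample = {
    "id": "0001",
    "eng_caption": "A cat sitting on a windowsill",
    "captions": [
        ["fra_Latn", "Un chat assis sur un rebord de fenêtre"],
        ["deu_Latn", "Eine Katze sitzt auf einer Fensterbank"],
    ],
}
```

The same helper works on real rows loaded via `datasets.load_dataset`, provided the pair layout matches the field description.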
## Images
The dataset was filtered to contain only working image URLs. However, the availability may change in the future. Because of that, all images from this dataset are available at [https://nllb-data.com/](https://nllb-data.com/).
To get the image, use the following format:
```
https://nllb-data.com/{id}.jpg
```
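Combining the `id` field with the URL template above, building a mirror link can be sketched as (the id below is illustrative):

```python
MIRROR_TEMPLATE = "https://nllb-data.com/{id}.jpg"

def image_url(sample_id: str) -> str:
    """Build the mirror URL for a dataset sample id."""
    return MIRROR_TEMPLATE.format(id=sample_id)

# e.g. image_url("0001") -> "https://nllb-data.com/0001.jpg"
```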
## Paper
The dataset was used to train the models in the paper: "[NLLB-CLIP - train performant multilingual image retrieval model on a budget](https://arxiv.org/abs/2309.01859)". |
open-llm-leaderboard/details_openchat__openchat_v3.1 | ---
pretty_name: Evaluation run of openchat/openchat_v3.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [openchat/openchat_v3.1](https://huggingface.co/openchat/openchat_v3.1) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 3 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openchat__openchat_v3.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-16T02:39:54.553691](https://huggingface.co/datasets/open-llm-leaderboard/details_openchat__openchat_v3.1/blob/main/results_2023-10-16T02-39-54.553691.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0016778523489932886,\n\
\ \"em_stderr\": 0.00041913301788269345,\n \"f1\": 0.06259228187919454,\n\
\ \"f1_stderr\": 0.001365935795409535,\n \"acc\": 0.45020712996200873,\n\
\ \"acc_stderr\": 0.010730538116775\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.00041913301788269345,\n\
\ \"f1\": 0.06259228187919454,\n \"f1_stderr\": 0.001365935795409535\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1379833206974981,\n \
\ \"acc_stderr\": 0.009499777327746841\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7624309392265194,\n \"acc_stderr\": 0.011961298905803162\n\
\ }\n}\n```"
repo_url: https://huggingface.co/openchat/openchat_v3.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|arc:challenge|25_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_24T04_16_26.631092
path:
- '**/details_harness|drop|3_2023-09-24T04-16-26.631092.parquet'
- split: 2023_10_16T02_39_54.553691
path:
- '**/details_harness|drop|3_2023-10-16T02-39-54.553691.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T02-39-54.553691.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_24T04_16_26.631092
path:
- '**/details_harness|gsm8k|5_2023-09-24T04-16-26.631092.parquet'
- split: 2023_10_16T02_39_54.553691
path:
- '**/details_harness|gsm8k|5_2023-10-16T02-39-54.553691.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-16T02-39-54.553691.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hellaswag|10_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T17:45:13.943818.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-02T17:45:13.943818.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-02T17:45:13.943818.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_24T04_16_26.631092
path:
- '**/details_harness|winogrande|5_2023-09-24T04-16-26.631092.parquet'
- split: 2023_10_16T02_39_54.553691
path:
- '**/details_harness|winogrande|5_2023-10-16T02-39-54.553691.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T02-39-54.553691.parquet'
- config_name: results
data_files:
- split: 2023_08_02T17_45_13.943818
path:
- results_2023-08-02T17:45:13.943818.parquet
- split: 2023_09_24T04_16_26.631092
path:
- results_2023-09-24T04-16-26.631092.parquet
- split: 2023_10_16T02_39_54.553691
path:
- results_2023-10-16T02-39-54.553691.parquet
- split: latest
path:
- results_2023-10-16T02-39-54.553691.parquet
---
# Dataset Card for Evaluation run of openchat/openchat_v3.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/openchat/openchat_v3.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [openchat/openchat_v3.1](https://huggingface.co/openchat/openchat_v3.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openchat__openchat_v3.1",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-16T02:39:54.553691](https://huggingface.co/datasets/open-llm-leaderboard/details_openchat__openchat_v3.1/blob/main/results_2023-10-16T02-39-54.553691.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0016778523489932886,
"em_stderr": 0.00041913301788269345,
"f1": 0.06259228187919454,
"f1_stderr": 0.001365935795409535,
"acc": 0.45020712996200873,
"acc_stderr": 0.010730538116775
},
"harness|drop|3": {
"em": 0.0016778523489932886,
"em_stderr": 0.00041913301788269345,
"f1": 0.06259228187919454,
"f1_stderr": 0.001365935795409535
},
"harness|gsm8k|5": {
"acc": 0.1379833206974981,
"acc_stderr": 0.009499777327746841
},
"harness|winogrande|5": {
"acc": 0.7624309392265194,
"acc_stderr": 0.011961298905803162
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
jan-hq/indonesian_sft_binarized | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 30717769.625560213
num_examples: 12450
- name: test
num_bytes: 3414730.374439786
num_examples: 1384
download_size: 15445003
dataset_size: 34132500.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Cartinoe5930/CLIcK_category | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: paragraph
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 1802820
num_examples: 1995
download_size: 744959
dataset_size: 1802820
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Syed-Hasan-8503/pretrain_test3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 70489729752
num_examples: 45550843
download_size: 33881086074
dataset_size: 70489729752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
edbeeching/prj_gia_dataset_atari_2B_atari_centipede_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning environment for the atari_centipede environment, with samples from the policy atari_2B_atari_centipede_1111.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
asas-ai/Tashkeela | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: text_no_taskheel
dtype: string
splits:
- name: train
num_bytes: 1591938210.245426
num_examples: 1592319
download_size: 726281863
dataset_size: 1591938210.245426
---
# Dataset Card for "Tashkeela"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
deokhk/fr_wiki_sentences_1000000 | ---
dataset_info:
features:
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 134836766
num_examples: 1000000
- name: dev
num_bytes: 136230
num_examples: 1000
download_size: 76821477
dataset_size: 134972996
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
|
breno30/McThVoz | ---
license: openrail
---
|
autoevaluate/autoeval-staging-eval-project-emotion-872f08fa-10855459 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: bhadresh-savani/distilbert-base-uncased-finetuned-emotion
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@bhadresh-savani](https://huggingface.co/bhadresh-savani) for evaluating this model. |
biadrivex/bonito | ---
license: openrail
---
|
vwxyzjn/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_1711138793 | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
splits:
- name: train
num_bytes: 2125689249
num_examples: 116722
- name: validation
num_bytes: 117437271
num_examples: 6447
- name: test
num_bytes: 119410966
num_examples: 6553
download_size: 562087836
dataset_size: 2362537486
---
# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task
The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset
These columns are taken directly from the aforementioned dataset:
* **id**: unique identifier for the post
* **subreddit**: subreddit the post was taken from
* **title**: title of the post
* **post**: body of the post
* **summary**: summary of the post
* **reference_response**: reference response for the post
These columns are added by this preprocessing script:
* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has at most 512 tokens; if the main text is too long, it is truncated at the last `\n`; if it is too short, it is padded ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either a space or the `[PAD]` token (see Args below).
* **query_token**: tokenized version of `query`
* **reference_response_token**: tokenized version of `reference_response`
* **reference_response_token_len**: length of `reference_response_token`
* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
* **query_reference_response_token_len**: length of `query_reference_response_token`
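The truncation-and-padding rule above can be sketched as follows. This is a simplified re-implementation for illustration only, not the exact OAI code; `build_query` and the toy whitespace tokenizer are hypothetical helpers:

```python
def build_query(subreddit, title, post, tokenize, length=512, pad_token_id=50277):
    """Format a TL;DR query, truncating the post at the last newline when too
    long and left-padding with the pad token when too short (a sketch of the
    behaviour described above, not the exact OAI implementation)."""
    fmt = "SUBREDDIT: r/{s}\n\nTITLE: {t}\n\nPOST: {p}\n\nTL;DR:"
    tokens = tokenize(fmt.format(s=subreddit, t=title, p=post))
    # Too long: drop everything after the last newline in the post and retry.
    while len(tokens) > length and "\n" in post:
        post = post.rsplit("\n", 1)[0]
        tokens = tokenize(fmt.format(s=subreddit, t=title, p=post))
    tokens = tokens[:length]  # hard cut if still over budget
    return [pad_token_id] * (length - len(tokens)) + tokens  # pad on the left

# Toy whitespace "tokenizer" just to exercise the logic.
toy_tokenize = lambda text: text.split()
query = build_query("AskReddit", "A title", "keep this\n" + "filler " * 600,
                    toy_tokenize, length=64)
print(len(query))  # → 64 (truncated at the newline, then left-padded)
```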
# Args
```python
{'base_model': 'EleutherAI/pythia-1b-deduped',
'check_length_correctness': True,
'cnndm_params': TaskQueryHParams(length=1919,
format_str='Article:\n{article}\n\nTL;DR:\n',
truncate_field='article',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=None,
max_sft_query_response_length=None,
max_rm_response_length=155,
max_rm_query_response_length=2021),
'debug': False,
'hf_entity': 'vwxyzjn',
'push_to_hub': True,
'tldr_params': TaskQueryHParams(length=512,
format_str='SUBREDDIT: r/{subreddit}\n'
'\n'
'TITLE: {title}\n'
'\n'
'POST: {post}\n'
'\n'
'TL;DR:',
truncate_field='post',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=53,
max_sft_query_response_length=562,
max_rm_response_length=169,
max_rm_query_response_length=638)}
```
|
AdapterOcean/python3-standardized_cluster_19_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 14812867
num_examples: 10446
download_size: 2733361
dataset_size: 14812867
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_cluster_19_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dsupa/dogdatasets | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': affenpinscher
'1': afghan_hound
'2': african_hunting_dog
'3': airedale
'4': american_staffordshire_terrier
'5': appenzeller
'6': australian_terrier
'7': basenji
'8': basset
'9': beagle
'10': bedlington_terrier
'11': bernese_mountain_dog
'12': black-and-tan_coonhound
'13': blenheim_spaniel
'14': bloodhound
'15': bluetick
'16': border_collie
'17': border_terrier
'18': borzoi
'19': boston_bull
'20': bouvier_des_flandres
'21': boxer
'22': brabancon_griffon
'23': briard
'24': brittany_spaniel
'25': bull_mastiff
'26': cairn
'27': cardigan
'28': chesapeake_bay_retriever
'29': chihuahua
'30': chow
'31': clumber
'32': cocker_spaniel
'33': collie
'34': curly-coated_retriever
'35': dandie_dinmont
'36': dhole
'37': dingo
'38': doberman
'39': english_foxhound
'40': english_setter
'41': english_springer
'42': entlebucher
'43': eskimo_dog
'44': flat-coated_retriever
'45': french_bulldog
'46': german_shepherd
'47': german_short-haired_pointer
'48': giant_schnauzer
'49': golden_retriever
'50': gordon_setter
'51': great_dane
'52': great_pyrenees
'53': greater_swiss_mountain_dog
'54': groenendael
'55': ibizan_hound
'56': irish_setter
'57': irish_terrier
'58': irish_water_spaniel
'59': irish_wolfhound
'60': italian_greyhound
'61': japanese_spaniel
'62': keeshond
'63': kelpie
'64': kerry_blue_terrier
'65': komondor
'66': kuvasz
'67': labrador_retriever
'68': lakeland_terrier
'69': leonberg
'70': lhasa
'71': malamute
'72': malinois
'73': maltese_dog
'74': mexican_hairless
'75': miniature_pinscher
'76': miniature_poodle
'77': miniature_schnauzer
'78': newfoundland
'79': norfolk_terrier
'80': norwegian_elkhound
'81': norwich_terrier
'82': old_english_sheepdog
'83': otterhound
'84': papillon
'85': pekinese
'86': pembroke
'87': pomeranian
'88': pug
'89': redbone
'90': rhodesian_ridgeback
'91': rottweiler
'92': saint_bernard
'93': saluki
'94': samoyed
'95': schipperke
'96': scotch_terrier
'97': scottish_deerhound
'98': sealyham_terrier
'99': shetland_sheepdog
'100': shih-tzu
'101': siberian_husky
'102': silky_terrier
'103': soft-coated_wheaten_terrier
'104': staffordshire_bullterrier
'105': standard_poodle
'106': standard_schnauzer
'107': sussex_spaniel
'108': tibetan_mastiff
'109': tibetan_terrier
'110': toy_poodle
'111': toy_terrier
'112': vizsla
'113': walker_hound
'114': weimaraner
'115': welsh_springer_spaniel
'116': west_highland_white_terrier
'117': whippet
'118': wire-haired_fox_terrier
'119': yorkshire_terrier
splits:
- name: train
num_bytes: 292133954.013
num_examples: 8127
- name: test
num_bytes: 79266534.295
num_examples: 2095
download_size: 361889607
dataset_size: 371400488.308
---
# Dataset Card for "dogdatasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ferrorist/20240324_korean_dataset_v01 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 125447634
num_examples: 258515
download_size: 68708388
dataset_size: 125447634
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BlueFalconHD/ORDialogueIcons | ---
license: mit
---
|
sileod/mindgames | ---
language:
- en
license: apache-2.0
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
tags:
- theory of mind
- tom
- Logical-Reasoning
- Modal-Logic
- Reasoning
- Logics
- Logic
- nli
- model-checking
- natural language inference
dataset_info:
features:
- name: premise
dtype: string
- name: smcdel_problem
dtype: string
- name: n_announcements
dtype: int64
- name: pbcheck
dtype: string
- name: hypothesis
dtype: string
- name: setup
dtype: string
- name: hypothesis_depth
dtype: int64
- name: n_agents
dtype: int64
- name: label
dtype: string
- name: names
sequence: string
- name: index
dtype: int64
- name: s-l
dtype: string
- name: deberta_pred
dtype: int64
- name: deberta_confidence
dtype: float64
- name: difficulty
dtype: float64
splits:
- name: train
num_bytes: 8702021
num_examples: 11174
- name: validation
num_bytes: 2904084
num_examples: 3725
- name: test
num_bytes: 2909341
num_examples: 3725
download_size: 2989857
dataset_size: 14515446
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
MindGames dataset
Code:
https://github.com/sileod/llm-theory-of-mind
Article (Accepted at EMNLP 2023 Findings):
https://arxiv.org/abs/2305.03353
```
@article{sileo2023mindgames,
title={MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic},
author={Sileo, Damien and Lernould, Antoine},
journal={arXiv preprint arXiv:2305.03353},
year={2023}
}
``` |
HydraLM/TinyStoriesInstruct-standardized | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 2802340915
num_examples: 3615652
- name: validation
num_bytes: 28294261
num_examples: 36425
download_size: 1366119719
dataset_size: 2830635176
---
# Dataset Card for "TinyStoriesInstruct-standardized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RomilsonB/henryfreitas | ---
license: openrail
---
|
Mithil/amazonFakeReview | ---
license: afl-3.0
---
|