| datasetId | card |
|---|---|
sam1120/dropoff-utcustom-TRAIN | ---
dataset_info:
features:
- name: name
dtype: string
- name: pixel_values
dtype: image
- name: labels
dtype: image
splits:
- name: train
num_bytes: 142272068.0
num_examples: 50
download_size: 43507500
dataset_size: 142272068.0
---
# Dataset Card for "dropoff-utcustom-TRAIN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mayank082000/Multilingual_Sentences_with_Sentences | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 509463
num_examples: 2289
download_size: 53713
dataset_size: 509463
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aisyahhrazak/crawl-malaysiagazette | ---
language:
- ms
---
About
- Data scraped from https://malaysiagazette.com/ on 4.7.2023 |
dipteshkanojia/t5-qe-2023-indic-multi-da | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 47647871
num_examples: 58940
download_size: 18352409
dataset_size: 47647871
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "t5-qe-2023-indic-multi-da"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955857 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-2.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
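The `col_mapping` above says each row supplies a `text`, a list of candidate `classes`, and a `target` index pointing at the correct class. Conceptually, the evaluator scores every candidate class against the text with the model and predicts the highest-scoring one. A minimal sketch of that loop, with a toy word-overlap scorer standing in for the facebook/opt-2.7b likelihood scoring the real run uses:

```python
# Toy zero-shot classification: pick the candidate class whose words
# overlap most with the text. A real evaluator would instead score
# each "text + class" continuation with a language model.
def classify(text: str, classes: list[str]) -> int:
    text_words = set(text.lower().split())

    def overlap(cls: str) -> int:
        return sum(w in text_words for w in cls.lower().split())

    scores = [overlap(c) for c in classes]
    return scores.index(max(scores))  # index into `classes`, like `target`

example = {
    "text": "The nurse asked the patient about the pain",
    "classes": ["about the pain", "about the weather"],
    "target": 0,
}
pred = classify(example["text"], example["classes"])
print(pred == example["target"])  # True for this toy example
```

The returned index can be compared directly against the `target` column to compute accuracy over a split.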
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
liuyanchen1015/MULTI_VALUE_qqp_drop_copula_be_AP | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 574829
num_examples: 3667
- name: test
num_bytes: 6086644
num_examples: 38655
- name: train
num_bytes: 5201008
num_examples: 32930
download_size: 7404705
dataset_size: 11862481
---
# Dataset Card for "MULTI_VALUE_qqp_drop_copula_be_AP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
paul-w-qs/contracts_v6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: N_ROWS
dtype: int64
- name: N_COLS
dtype: int64
- name: FONT_SIZE
dtype: int64
- name: FONT_NAME
dtype: string
- name: BORDER_THICKNESS
dtype: int64
- name: TABLE_STYLE
dtype: string
- name: NOISED
dtype: bool
- name: LABEL_NOISE
dtype: bool
- name: JSON_LABEL
dtype: string
splits:
- name: train
num_bytes: 360922904.016
num_examples: 5364
download_size: 360853881
dataset_size: 360922904.016
---
# Dataset Card for "contracts_v6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
onurSakar/GYM-Exercise | ---
language:
- en
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 839599
num_examples: 1660
download_size: 293713
dataset_size: 839599
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
irds/wikir_it16k | ---
pretty_name: '`wikir/it16k`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wikir/it16k`
The `wikir/it16k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/it16k).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=503,012
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikir_it16k', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Frej2020Wikir,
title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={LREC},
year={2020}
}
@inproceedings{Frej2020MlWikir,
title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={CIRCLE},
year={2020}
}
```
|
uobinxiao/open_tables_icttd_for_table_detection | ---
license: apache-2.0
---
Datasets for the paper "Revisiting Table Detection Datasets for Visually Rich Documents" (https://arxiv.org/abs/2305.04833).
## License
Since this dataset is built on several open datasets and open documents, users should also adhere to the licenses of these publicly available datasets and documents.
|
allganize/rag-ko | ---
dataset_info:
features:
- name: index
dtype: int64
- name: system
dtype: string
- name: human
dtype: string
- name: answer
dtype: string
- name: answer_position
dtype: int64
- name: answer_context_title
dtype: string
- name: answer_context_summary
dtype: string
splits:
- name: train
num_bytes: 914673
num_examples: 200
- name: test
num_bytes: 914673
num_examples: 200
download_size: 2352755
dataset_size: 1829346
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
language:
- ko
---
# rag-ko
- `rag-ko` is a RAG (Retrieval Augmented Generation) dataset for the financial domain. To support RAG, each example provides one Golden Context and two Negative Contexts, together with a question about the Golden Context and its answer.
- The contexts are built from Wikipedia, financial reports by public institutions, financial glossaries, and similar sources. GPT-4 was then used to generate a question and an answer for each context; these become the Golden Context, Question, and Golden Answer.
- Next, the context pool is searched with the Question (BM25), and the two highest-scoring contexts other than the Golden Context are selected; these become the Negative Contexts.
- The Golden Context, the two Negative Contexts, the Question, and the Instruction are summarized with the Allganize Summarizer (an in-house extractive summarization engine) so that together they do not exceed 3K tokens (Llama2 tokenizer).
- The final dataset consists of 200 examples reviewed by humans.
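The negative-context mining step described above can be sketched as follows. This is an illustrative pure-Python Okapi-style BM25, not the exact retriever used for the dataset; the function names and the toy whitespace tokenizer are stand-ins:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Okapi BM25 score of `query_tokens` against every document."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    df = Counter()                      # document frequency per term
    for doc in corpus_tokens:
        df.update(set(doc))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

def pick_negatives(question, golden_idx, contexts, n=2):
    """Indices of the n highest-scoring contexts, excluding the golden one."""
    toks = [c.split() for c in contexts]
    scores = bm25_scores(question.split(), toks)
    ranked = sorted(range(len(contexts)), key=lambda i: scores[i], reverse=True)
    return [i for i in ranked if i != golden_idx][:n]
```

`pick_negatives` returns the two contexts that BM25 ranks closest to the question besides the golden one, i.e. the "hard negatives" the bullet above describes.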
### Data sources
- [Korean Wikipedia, Finance category](https://ko.wikipedia.org/wiki/%EB%B6%84%EB%A5%98:%EA%B8%88%EC%9C%B5)
- [Bank of Korea economic research reports](https://www.bok.or.kr/portal/bbs/P0002454/list.do?menuNo=200431)
- [Bank of Korea Overseas Economy Focus](https://www.bok.or.kr/portal/bbs/P0000545/list.do?menuNo=200437)
### Data example (translated from Korean)
```
{
  'conversation_id': 'financial_mmlu_0',
  'conversations': array([
    {
      'from': 'human',
      'value': 'Which of the following statements about types of interest rates is incorrect?\n
        1. With a floating rate, the supplier of funds bears the risk of market interest rate movements.\n
        2. By the Fisher equation, the real interest rate can be obtained by subtracting expected inflation from the nominal rate.\n
        3. Compound interest is a method that computes interest not only on the principal but also on the accrued interest.\n
        4. The effective rate is the net funding cost actually borne by the borrower after accounting for the interest payment method, repayment method, fees, taxes, and so on.\n
        5. In the bond market, the term "yield" is used more often than "interest rate".'
    },
    {
      'from': 'gpt',
      'value': '1'
    }
  ], dtype=object)
}
```
### License
- Wikipedia: CC BY-SA 4.0
- [Bank of Korea copyright policy](https://www.bok.or.kr/portal/main/contents.do?menuNo=200228) |
open-llm-leaderboard/details_ehartford__WizardLM-13B-Uncensored | ---
pretty_name: Evaluation run of ehartford/WizardLM-13B-Uncensored
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ehartford/WizardLM-13B-Uncensored](https://huggingface.co/ehartford/WizardLM-13B-Uncensored)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ehartford__WizardLM-13B-Uncensored\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-18T07:53:55.275923](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-13B-Uncensored/blob/main/results_2023-10-18T07-53-55.275923.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.20994127516778524,\n\
\ \"em_stderr\": 0.004170789326061059,\n \"f1\": 0.3040310402684571,\n\
\ \"f1_stderr\": 0.004210803460550511,\n \"acc\": 0.3630369207736123,\n\
\ \"acc_stderr\": 0.00835492026013406\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.20994127516778524,\n \"em_stderr\": 0.004170789326061059,\n\
\ \"f1\": 0.3040310402684571,\n \"f1_stderr\": 0.004210803460550511\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.02047005307050796,\n \
\ \"acc_stderr\": 0.0039004133859157192\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7056037884767167,\n \"acc_stderr\": 0.0128094271343524\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ehartford/WizardLM-13B-Uncensored
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|arc:challenge|25_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T07_53_55.275923
path:
- '**/details_harness|drop|3_2023-10-18T07-53-55.275923.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T07-53-55.275923.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T07_53_55.275923
path:
- '**/details_harness|gsm8k|5_2023-10-18T07-53-55.275923.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T07-53-55.275923.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hellaswag|10_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:00:32.745864.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T19:00:32.745864.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T19:00:32.745864.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T07_53_55.275923
path:
- '**/details_harness|winogrande|5_2023-10-18T07-53-55.275923.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T07-53-55.275923.parquet'
- config_name: results
data_files:
- split: 2023_07_19T19_00_32.745864
path:
- results_2023-07-19T19:00:32.745864.parquet
- split: 2023_10_18T07_53_55.275923
path:
- results_2023-10-18T07-53-55.275923.parquet
- split: latest
path:
- results_2023-10-18T07-53-55.275923.parquet
---
# Dataset Card for Evaluation run of ehartford/WizardLM-13B-Uncensored
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ehartford/WizardLM-13B-Uncensored
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [ehartford/WizardLM-13B-Uncensored](https://huggingface.co/ehartford/WizardLM-13B-Uncensored) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
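For reference, the per-run split names in the config list above appear to be the run timestamp with `-` and `:` replaced by `_`. A minimal sketch of that mapping (this is an observed convention of this repo, not a documented API):

```python
def run_split_name(timestamp: str) -> str:
    """Derive the per-run split name used in this repo from a run timestamp."""
    return timestamp.replace("-", "_").replace(":", "_")

print(run_split_name("2023-07-19T19:00:32.745864"))  # 2023_07_19T19_00_32.745864
```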
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ehartford__WizardLM-13B-Uncensored",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-18T07:53:55.275923](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-13B-Uncensored/blob/main/results_2023-10-18T07-53-55.275923.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" config and the "latest" split of each eval):
```python
{
"all": {
"em": 0.20994127516778524,
"em_stderr": 0.004170789326061059,
"f1": 0.3040310402684571,
"f1_stderr": 0.004210803460550511,
"acc": 0.3630369207736123,
"acc_stderr": 0.00835492026013406
},
"harness|drop|3": {
"em": 0.20994127516778524,
"em_stderr": 0.004170789326061059,
"f1": 0.3040310402684571,
"f1_stderr": 0.004210803460550511
},
"harness|gsm8k|5": {
"acc": 0.02047005307050796,
"acc_stderr": 0.0039004133859157192
},
"harness|winogrande|5": {
"acc": 0.7056037884767167,
"acc_stderr": 0.0128094271343524
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
gagan3012/NewArOCRDatasetv5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 744898565.786
num_examples: 38219
- name: validation
num_bytes: 14180587.0
num_examples: 425
- name: test
num_bytes: 13690842.0
num_examples: 425
download_size: 692203880
dataset_size: 772769994.786
---
# Dataset Card for "NewArOCRDatasetv5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Baidicoot/openhermes-base64 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 109420486.63468009
num_examples: 210374
download_size: 66695207
dataset_size: 109420486.63468009
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/juno_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of juno/ジュノー/天后 (Azur Lane)
This is the dataset of juno/ジュノー/天后 (Azur Lane), containing 24 images and their tags.
The core tags of this character are `pink_hair, long_hair, crown, bangs, mini_crown, ribbon, twintails, pink_eyes, bow, purple_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g., Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 24 | 29.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/juno_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 24 | 20.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/juno_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 59 | 42.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/juno_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 24 | 28.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/juno_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 59 | 57.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/juno_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/juno_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
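Once loaded, items can be filtered by their tags with ordinary Python. A minimal sketch, assuming `meta['tags']` maps tag names to confidence scores (the exact meta schema comes from the crawler and may differ):

```python
def has_tags(tags, required, threshold=0.5):
    """True if every required tag is present with a score above the threshold."""
    return all(tags.get(t, 0.0) > threshold for t in required)

# Hypothetical tag dict in the assumed crawler format
sample_tags = {"1girl": 0.99, "solo": 0.97, "dress": 0.62}
print(has_tags(sample_tags, ["1girl", "solo"]))  # True
print(has_tags(sample_tags, ["1girl", "hat"]))   # False
```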
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | looking_at_viewer, 1girl, solo, open_mouth, blush, collarbone, bare_shoulders, :d, dress, long_sleeves, simple_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | 1girl | solo | open_mouth | blush | collarbone | bare_shoulders | :d | dress | long_sleeves | simple_background | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:--------|:-------|:-------------|:--------|:-------------|:-----------------|:-----|:--------|:---------------|:--------------------|:-------------------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X |
|
datahrvoje/twitter_dataset_1712730747 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 18262
num_examples: 45
download_size: 14179
dataset_size: 18262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GmoData/ui5_db | ---
license: cc-by-4.0
---
|
freshpearYoon/vr_train_free_24 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 6305883116
num_examples: 10000
download_size: 1075121807
dataset_size: 6305883116
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
batelidan/dataset-classifcation-tv | ---
dataset_info:
features:
- name: /content/drive/MyDrive/Wave/20220810-194349-3reality-00E93AAC59A5.wav
dtype: string
- name: TV
dtype: string
splits:
- name: train
num_bytes: 128533
num_examples: 1627
download_size: 23156
dataset_size: 128533
---
# Dataset Card for "dataset-classifcation-tv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yonathanstwn/ecolindo | ---
dataset_info:
features:
- name: translation
struct:
- name: english
dtype: string
- name: colloquial_indo
dtype: string
- name: formal_indo
dtype: string
splits:
- name: train
num_bytes: 70159189
num_examples: 672581
- name: test
num_bytes: 202512
num_examples: 2000
- name: validation
num_bytes: 202111
num_examples: 2000
download_size: 51847840
dataset_size: 70563812
task_categories:
- translation
language:
- id
- en
---
# English to Colloquial Indonesian Dataset (EColIndo)
First-ever large-scale, high-quality English to colloquial Indonesian dataset.
Fully generated via ChatGPT zero-shot translation.
Author: Yonathan Setiawan
|
cfilt/IITB-MonoDoc | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- hi
- mr
- gu
- sa
- ta
- te
- ml
- ne
- as
- bn
- ks
- or
- pa
- ur
- sd
- kn
size_categories:
- 10B<n<100B
tags:
- language-modeling
- llm
- clm
--- |
israel/MT-llama | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt_header
dtype: string
- name: datasource
dtype: string
splits:
- name: train
num_bytes: 84762357.7818897
num_examples: 200000
- name: validation
num_bytes: 1209980
num_examples: 1994
- name: test
num_bytes: 1306100
num_examples: 2024
download_size: 23384531
dataset_size: 87278437.7818897
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
benny-abhishek/AB_Speech | ---
license: mit
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- en
tags:
- asr
- tts
- ser
pretty_name: ab_speech
size_categories:
- n<1K
---
Contains the voice of a single male speaker, recorded with emphasis on all phonemes. |
stuheart86/imageclassification | ---
license: creativeml-openrail-m
---
|
bgspaditya/phishing-dataset | ---
license: mit
---
|
open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B | ---
pretty_name: Evaluation run of PulsarAI/Nebula-v2-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [PulsarAI/Nebula-v2-7B](https://huggingface.co/PulsarAI/Nebula-v2-7B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, corresponding to one of the evaluated\
\ tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T13:58:09.073163](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B/blob/main/results_2023-12-02T13-58-09.073163.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks; you can find each in the \"results\" config and the\
\ \"latest\" split of\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3169067475360121,\n\
\ \"acc_stderr\": 0.012815868296721373\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.3169067475360121,\n \"acc_stderr\": 0.012815868296721373\n\
\ }\n}\n```"
repo_url: https://huggingface.co/PulsarAI/Nebula-v2-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_02T13_58_09.073163
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-58-09.073163.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-58-09.073163.parquet'
- config_name: results
data_files:
- split: 2023_12_02T13_58_09.073163
path:
- results_2023-12-02T13-58-09.073163.parquet
- split: latest
path:
- results_2023-12-02T13-58-09.073163.parquet
---
# Dataset Card for Evaluation run of PulsarAI/Nebula-v2-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PulsarAI/Nebula-v2-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [PulsarAI/Nebula-v2-7B](https://huggingface.co/PulsarAI/Nebula-v2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:58:09.073163](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Nebula-v2-7B/blob/main/results_2023-12-02T13-58-09.073163.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" config and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.3169067475360121,
"acc_stderr": 0.012815868296721373
},
"harness|gsm8k|5": {
"acc": 0.3169067475360121,
"acc_stderr": 0.012815868296721373
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Cohere/miracl-ar-queries-22-12 | ---
annotations_creators:
- expert-generated
language:
- ar
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
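The segmentation described above can be sketched in a few lines. The real corpus was produced with WikiExtractor; this only illustrates the idea of splitting on natural discourse units (`\n\n`) while preserving the article title:

```python
def segment_article(title: str, plain_text: str) -> list[dict]:
    """Split a plain-text article into passages on blank lines, keeping the title."""
    passages = [p.strip() for p in plain_text.split("\n\n") if p.strip()]
    return [{"title": title, "text": p} for p in passages]

article = "First paragraph.\n\nSecond paragraph.\n\n\nThird."
for doc in segment_article("Example", article):
    print(doc)
```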
## Embeddings
We computed the embeddings for `title+" "+text` using our `multilingual-22-12` model, a state-of-the-art embedding model for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use the **dot product**: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ar-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
# Use only this query's embedding (shape 1 x dim), not all query embeddings
query_embedding = torch.tensor(query['emb']).unsqueeze(0)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
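On toy data, these two metrics can be sketched as follows. This is a minimal binary-relevance implementation for illustration, not the exact evaluation script used for the tables:

```python
import math

def hit_at_k(ranked_ids, relevant_ids, k=3):
    """1.0 if at least one relevant document appears in the top-k results."""
    return 1.0 if any(d in relevant_ids for d in ranked_ids[:k]) else 0.0

def ndcg_at_k(ranked_ids, relevant_ids, k=10):
    """Binary-relevance nDCG@k: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked_ids[:k]) if d in relevant_ids)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant_ids))))
    return dcg / ideal if ideal > 0 else 0.0

# Toy ranking: relevant documents retrieved at ranks 1 and 5
ranked = ["d7", "d3", "d9", "d1", "d2"]
relevant = {"d7", "d2"}
print(hit_at_k(ranked, relevant, k=3))              # 1.0
print(round(ndcg_at_k(ranked, relevant, k=10), 2))  # 0.85
```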
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
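For reference, both metrics can be computed from a ranked list of document ids and a set of relevant ids in a few lines (binary relevance, as in the MIRACL qrels; the function names are ours):

```python
import math

def hit_at_k(ranked_ids, relevant_ids, k=3):
    """1.0 if at least one relevant document appears in the top-k results."""
    return float(any(doc in relevant_ids for doc in ranked_ids[:k]))

def ndcg_at_k(ranked_ids, relevant_ids, k=10):
    """Binary-relevance nDCG@k: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(ranked_ids[:k]) if doc in relevant_ids)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(k, len(relevant_ids))))
    return dcg / ideal if ideal > 0 else 0.0

ranked = ["d3", "d7", "d1", "d9"]
relevant = {"d1", "d2"}
print(hit_at_k(ranked, relevant, k=3))               # 1.0 (d1 is in the top-3)
print(round(ndcg_at_k(ranked, relevant, k=10), 4))   # 0.3066
```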
|
mteb-pt/reddit-clustering | ---
configs:
- config_name: pt
data_files:
- split: test
path: test*
--- |
bigscience-data/roots_indic-or_wikisource | ---
language: or
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-or_wikisource
# wikisource_filtered
- Dataset uid: `wikisource_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.6306 % of total
- 12.7884 % of fr
- 19.8886 % of indic-bn
- 20.9966 % of indic-ta
- 2.3478 % of ar
- 4.7068 % of indic-hi
- 18.0998 % of indic-te
- 1.7155 % of es
- 19.4800 % of indic-kn
- 9.1737 % of indic-ml
- 17.1771 % of indic-mr
- 17.1870 % of indic-gu
- 70.3687 % of indic-as
- 1.0165 % of pt
- 7.8642 % of indic-pa
- 1.3501 % of vi
- 4.9411 % of indic-or
- 0.5307 % of ca
- 2.3593 % of id
- 1.5928 % of eu
### BigScience processing steps
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- remove_wiki_mojibake
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
|
waifu-research-department/embeddings | ---
license: mit
---
# Info
>Try to include embedding info in the commit description (model, author, artist, images, etc)
>Naming: name-object/style |
BAAI/COIG | ---
license: apache-2.0
arxiv: 2304.07987
language:
- zh
---
# This is the Chinese Open Instruction Generalist project
We propose the Chinese Open Instruction Generalist (**COIG**) project to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. We welcome all researchers in the community to contribute to the corpus set and collaborate with us. We only release the first chip of COIG to support Chinese LLM development in the exploration stage, and we invite more researchers to join us in building COIG. We introduce a manually verified translated general instruction corpus, a manually annotated exam instruction corpus, a human value alignment instruction corpus, a multi-round counterfactual correction chat corpus, and a leetcode instruction corpus. We provide these new instruction corpora to assist the community with instruction tuning on Chinese LLMs. These instruction corpora are also template workflows for how new Chinese instruction corpora can be built and expanded effectively.
It is best to download the individual data files directly that you wish to use instead of using HF load_datasets. All datasets can be downloaded from: https://huggingface.co/datasets/BAAI/COIG/tree/main
This dataset card is modified from [OIG](https://huggingface.co/datasets/laion/OIG).
### Translated Instructions (66,858)
There are 66,858 instructions in total, which are composed of 1,616 task descriptions in [Super-NaturalInstructions](https://arxiv.org/abs/2204.07705) along with a single instance for each of them, 175 seed tasks in [Self-Instruct](https://arxiv.org/abs/2212.10560), and 66,007 instructions from [Unnatural Instructions](https://arxiv.org/abs/2212.09689). To reduce the cost and further improve the quality of the instruction corpus, we separate the translation procedure into three phases: automatic translation, manual verification, and manual correction. These strict quality verification procedures assure the reliability of the translated corpus.
### Exam Instructions (63,532)
The Chinese National College Entrance Examination, Middle School Entrance Examinations, and Civil Servant Examination are the main Chinese commonsense tests. These exams contain various question formats and detailed analysis that can be used as the Chain-of-Thought (**CoT**) corpus. We extract six informative elements from original exam questions: instruction, question context, question, answer, answer analysis, and coarse-grained subject. There are six main coarse-grained subjects: Chinese, English, Politics, Biology, History, and Geology. There are very few Math, Physics, and Chemistry questions in the corpus because these questions often contain complex symbols that are hard to annotate. For the many multiple-choice questions, we recommend that researchers post-process this corpus using prompts, or convert the questions to blank-filling questions, to further increase the instructions' diversity.
### Human Value Alignment Instructions (34,471)
To respect and reflect the major difference caused by different cultural backgrounds, different from other tasks in COIG that leverage one unified collection of instruction-following samples, we categorize the value alignment data into two separate series:
- A set of samples that present shared human values in the Chinese-speaking world. In total, we choose 50 instructions as the augmentation seeds, and produce 3k resulting instruction-following samples for general-purpose value alignment in the Chinese-speaking world.
- Some additional sets of samples that present regional-culture or country-specific human values.
### Counterfactual Correction Multi-round Chat (13,653)
The Counterfactual Correction Multi-round Chat dataset (CCMC) is constructed based on the [CN-DBpedia knowledge graph dataset](https://link.springer.com/chapter/10.1007/978-3-319-60045-1_44) with the aim of alleviating and resolving the pain points of hallucination and factual inconsistency in current LLMs. The CCMC dataset includes 5 rounds of role-playing chat between a student and a teacher, and the corresponding knowledge they refer to. The dataset contains ~13,000 dialogues with an average of 5 rounds per dialogue, resulting in ~65,000 rounds of chat.
### Leetcode Instructions (11,737)
Given that the code-related tasks potentially contribute to the ability emergence of LLMs, we argue that code-related tasks aligned with the Chinese natural language should be considered in our datasets. Therefore, we build the Leetcode instructions from a **CC-BY-SA-4.0** license [collection](https://github.com/doocs/leetcode) of 2,589 programming questions. The questions contain problem descriptions, multiple programming languages, and explanations (834 questions do not have explanations).
## Support this project
Your contributions and feedback support the open source ecosystem, improve the bot and provide datasets for future AI research. To participate you can:
Submit Github issues, track issues and help create datasets that need improvement. https://github.com/BAAI-Zlab/COIG
## Update: May 27, 2023
- v0.3: Update counterfactural_correction_multi_round_chat.tar.gz and make sure all round responses can be decoded as json.
- v0.2: Update exam_instructions.jsonl, translated_instructions.jsonl and human_value_alignment_instructions_part2.json.
- v0.1: Release the five datasets of COIG.
## Disclaimer
These datasets contain synthetic data and in some cases data that includes humans trying to get the language model to say toxic/offensive/trolling things. If you are concerned about the presence of this type of material in the dataset please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible and we are actively evaluating ways to reduce or eliminate undesirable content from the instruction tuning datasets.
## License
The COIG dataset that is authored by BAAI is released under an Apache 2.0 license. However, the data also includes content licensed under other permissive licenses such as unnatural instructions data which is licensed under MIT License, or web-crawled data which is used under fair use principles.
## BibTeX & Citation
```
@misc{zhang2023chinese,
title={Chinese Open Instruction Generalist: A Preliminary Release},
author={Ge Zhang and Yemin Shi and Ruibo Liu and Ruibin Yuan and Yizhi Li and Siwei Dong and Yu Shu and Zhaoqun Li and Zekun Wang and Chenghua Lin and Wenhao Huang and Jie Fu},
year={2023},
eprint={2304.07987},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
copenlu/scientific-exaggeration-detection | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: semi-supervised-exaggeration-detection-of
pretty_name: Scientific Exaggeration Detection
size_categories:
- n<1K
source_datasets: []
tags:
- scientific text
- scholarly text
- inference
- fact checking
- misinformation
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
---
# Dataset Card for Scientific Exaggeration Detection
## Dataset Description
- **Homepage:** https://github.com/copenlu/scientific-exaggeration-detection
- **Repository:** https://github.com/copenlu/scientific-exaggeration-detection
- **Paper:** https://aclanthology.org/2021.emnlp-main.845.pdf
### Dataset Summary
Public trust in science depends on honest and factual communication of scientific papers. However, recent studies have demonstrated a tendency of news media to misrepresent scientific papers by exaggerating their findings. Given this, we present a formalization of and study into the problem of exaggeration detection in science communication. While there are an abundance of scientific papers and popular media articles written about them, very rarely do the articles include a direct link to the original paper, making data collection challenging. We address this by curating a set of labeled press release/abstract pairs from existing expert annotated studies on exaggeration in press releases of scientific papers suitable for benchmarking the performance of machine learning models on the task. Using limited data from this and previous studies on exaggeration detection in science, we introduce MT-PET, a multi-task version of Pattern Exploiting Training (PET), which leverages knowledge from complementary cloze-style QA tasks to improve few-shot learning. We demonstrate that MT-PET outperforms PET and supervised learning both when data is limited, as well as when there is an abundance of data for the main task.
## Dataset Structure
The training and test data are derived from the InSciOut studies from [Sumner et al. 2014](https://www.bmj.com/content/349/bmj.g7015) and [Bratton et al. 2019](https://pubmed.ncbi.nlm.nih.gov/31728413/). The splits have the following fields:
```
original_file_id: The ID of the original spreadsheet in the Sumner/Bratton data where the annotations are derived from
press_release_conclusion: The conclusion sentence from the press release
press_release_strength: The strength label for the press release
abstract_conclusion: The conclusion sentence from the abstract
abstract_strength: The strength label for the abstract
exaggeration_label: The final exaggeration label
```
The exaggeration label is one of `same`, `exaggerates`, or `downplays`. The strength label is one of the following:
```
0: Statement of no relationship
1: Statement of correlation
2: Conditional statement of causation
3: Statement of causation
```
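The exaggeration label relates the two strength annotations: a press release claiming a stronger relationship than the abstract exaggerates, and one claiming a weaker relationship downplays. A minimal sketch of that reading (the dataset ships `exaggeration_label` precomputed; this derivation is our interpretation of the labeling scheme, not the authors' code):

```python
def exaggeration_label(press_release_strength: int, abstract_strength: int) -> str:
    """Compare claim strength (0-3 scale above) of press release vs. abstract."""
    if press_release_strength > abstract_strength:
        return "exaggerates"
    if press_release_strength < abstract_strength:
        return "downplays"
    return "same"

# A press release stating causation (3) for an abstract stating correlation (1):
print(exaggeration_label(3, 1))  # exaggerates
```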
## Dataset Creation
See section 4 of the [paper](https://aclanthology.org/2021.emnlp-main.845.pdf) for details on how the dataset was curated. The original InSciOut data can be found [here](https://figshare.com/articles/dataset/InSciOut/903704)
## Citation
```
@inproceedings{wright2021exaggeration,
title={{Semi-Supervised Exaggeration Detection of Health Science Press Releases}},
author={Dustin Wright and Isabelle Augenstein},
booktitle = {Proceedings of EMNLP},
publisher = {Association for Computational Linguistics},
year = 2021
}
```
Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset. |
irds/natural-questions | ---
pretty_name: '`natural-questions`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `natural-questions`
The `natural-questions` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/natural-questions#natural-questions).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=28,390,850
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/natural-questions', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'html': ..., 'start_byte': ..., 'end_byte': ..., 'start_token': ..., 'end_token': ..., 'document_title': ..., 'document_url': ..., 'parent_doc_id': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in the 🤗 Dataset format.
## Citation Information
```
@article{Kwiatkowski2019Nq,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {TACL}
}
```
|
open-llm-leaderboard/details_Voicelab__trurl-2-7b | ---
pretty_name: Evaluation run of Voicelab/trurl-2-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Voicelab/trurl-2-7b](https://huggingface.co/Voicelab/trurl-2-7b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Voicelab__trurl-2-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T13:00:35.734451](https://huggingface.co/datasets/open-llm-leaderboard/details_Voicelab__trurl-2-7b/blob/main/results_2023-10-24T13-00-35.734451.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.26908557046979864,\n\
\ \"em_stderr\": 0.004541696656496853,\n \"f1\": 0.3290079697986583,\n\
\ \"f1_stderr\": 0.004499453214736992,\n \"acc\": 0.3967222424009962,\n\
\ \"acc_stderr\": 0.009837690155913053\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.26908557046979864,\n \"em_stderr\": 0.004541696656496853,\n\
\ \"f1\": 0.3290079697986583,\n \"f1_stderr\": 0.004499453214736992\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0712661106899166,\n \
\ \"acc_stderr\": 0.007086462127954499\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7221783741120757,\n \"acc_stderr\": 0.012588918183871605\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Voicelab/trurl-2-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|arc:challenge|25_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_24T13_00_35.734451
path:
- '**/details_harness|drop|3_2023-10-24T13-00-35.734451.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T13-00-35.734451.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_24T13_00_35.734451
path:
- '**/details_harness|gsm8k|5_2023-10-24T13-00-35.734451.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T13-00-35.734451.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hellaswag|10_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T14:14:32.422343.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T14:14:32.422343.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T14:14:32.422343.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_24T13_00_35.734451
path:
- '**/details_harness|winogrande|5_2023-10-24T13-00-35.734451.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T13-00-35.734451.parquet'
- config_name: results
data_files:
- split: 2023_08_17T14_14_32.422343
path:
- results_2023-08-17T14:14:32.422343.parquet
- split: 2023_10_24T13_00_35.734451
path:
- results_2023-10-24T13-00-35.734451.parquet
- split: latest
path:
- results_2023-10-24T13-00-35.734451.parquet
---
# Dataset Card for Evaluation run of Voicelab/trurl-2-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Voicelab/trurl-2-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Voicelab/trurl-2-7b](https://huggingface.co/Voicelab/trurl-2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Voicelab__trurl-2-7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T13:00:35.734451](https://huggingface.co/datasets/open-llm-leaderboard/details_Voicelab__trurl-2-7b/blob/main/results_2023-10-24T13-00-35.734451.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the "results" config and in the "latest" split of its own config):
```python
{
"all": {
"em": 0.26908557046979864,
"em_stderr": 0.004541696656496853,
"f1": 0.3290079697986583,
"f1_stderr": 0.004499453214736992,
"acc": 0.3967222424009962,
"acc_stderr": 0.009837690155913053
},
"harness|drop|3": {
"em": 0.26908557046979864,
"em_stderr": 0.004541696656496853,
"f1": 0.3290079697986583,
"f1_stderr": 0.004499453214736992
},
"harness|gsm8k|5": {
"acc": 0.0712661106899166,
"acc_stderr": 0.007086462127954499
},
"harness|winogrande|5": {
"acc": 0.7221783741120757,
"acc_stderr": 0.012588918183871605
}
}
```
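As a quick sanity check, the aggregate `"acc"` reported under `"all"` above is the unweighted mean of the per-task accuracies; a small sketch (values copied verbatim from the JSON above) reproduces it:

```python
# Illustrative sketch: recompute the aggregate "acc" under "all" as the
# unweighted mean of the per-task accuracies from the "Latest results" JSON.
per_task_acc = {
    "harness|gsm8k|5": 0.0712661106899166,
    "harness|winogrande|5": 0.7221783741120757,
}
mean_acc = sum(per_task_acc.values()) / len(per_task_acc)
print(round(mean_acc, 6))  # 0.396722, matching the "all" entry
```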
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ChristophSchuhmann/test-files | ---
license: apache-2.0
---
|
FSMBench/fsmbench_what_will_be_the_state_12K_think_step_by_step | ---
dataset_info:
features:
- name: query_id
dtype: string
- name: fsm_id
dtype: string
- name: fsm_json
dtype: string
- name: difficulty_level
dtype: int64
- name: transition_matrix
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: substring_index
dtype: int64
- name: number_of_states
dtype: int64
- name: number_of_alphabets
dtype: int64
- name: state_alpha_combo
dtype: string
splits:
- name: validation
num_bytes: 29342193
num_examples: 12800
download_size: 1211333
dataset_size: 29342193
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
mask-distilled-one-sec-cv12/chunk_90 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1296275532
num_examples: 254571
download_size: 1322328801
dataset_size: 1296275532
---
# Dataset Card for "chunk_90"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
astrosbd/fake_review_hedi | ---
dataset_info:
features:
- name: cat
dtype: string
- name: score
dtype: float64
- name: label
dtype: string
- name: review
dtype: string
splits:
- name: train
num_bytes: 15867393
num_examples: 40432
download_size: 8285372
dataset_size: 15867393
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fake_review_hedi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adilhabibi/bioacoustic_segments | ---
dataset_info:
features:
- name: segments
sequence:
sequence:
sequence: float32
- name: label_idices
dtype: int64
- name: label_names
dtype: string
splits:
- name: train
num_bytes: 72803953
num_examples: 1457
download_size: 53309954
dataset_size: 72803953
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bioacoustic_segments"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/deep_learning_books | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 2116608
num_examples: 1056
download_size: 1142689
dataset_size: 2116608
---
# Dataset Card for "deep_learning_books"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-billsum-default-37bdaa-1564755702 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
moodlep/dt_atari_replay_hf | ---
license: mit
---
|
adnankarim/polimer | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 28628023.0
num_examples: 976
- name: validation
num_bytes: 3228642.0
num_examples: 100
- name: test
num_bytes: 347463.0
num_examples: 10
download_size: 32107011
dataset_size: 32204128.0
---
# Dataset Card for "polimer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-18000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 961620
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sadeem-ai/arabic-qna | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: ar-qna-train-data-hf.csv
- split: test
path: ar-qna-test-data-hf.csv
task_categories:
- question-answering
language:
- ar
tags:
- qna
- questioning-answering
- questions-generation
pretty_name: arabic QnA dataset
size_categories:
- 1K<n<10K
---
# Sadeem QnA: An Arabic QnA Dataset
Welcome to the **Sadeem QnA** dataset, a vibrant collection designed for the advancement of Arabic natural language processing, specifically tailored for Question Answering (QnA) systems. Sourced from the rich and diverse content of Arabic Wikipedia, this dataset is a gateway to exploring the depths of Arabic language understanding, offering a unique challenge to researchers and AI enthusiasts alike.
## Table of Contents
- [About Sadeem QnA](#about-sadeem-qna)
- [Dataset Structure](#dataset-structure)
- [Getting Started](#getting-started)
- [Usage](#usage)
- [Contributing](#contributing)
- [License](#license)
- [Citation](#citation)
## About Sadeem QnA
The **Sadeem QnA** dataset is crafted with the intent to foster research and development in Arabic Question Answering systems. It encompasses a broad range of topics, reflecting the rich tapestry of Arabic culture, history, and science, making it an ideal resource for training and evaluating AI models.
### Why Sadeem QnA?
- **Rich Content:** Over 6,000 QnA pairs across diverse subjects.
- **Real-World Questions:** Derived from actual queries people might ask, providing practical value for real-world applications.
- **Dual Splits:** Carefully partitioned into training (5,000 rows) and testing (1,030 rows) sets to facilitate effective model evaluation.
## Dataset Structure
Each record in the dataset follows a structured format, containing the following fields:
- `title`: The title of the Wikipedia article.
- `text`: A snippet from the article related to the question.
- `source`: The URL of the Wikipedia page.
- `question`: A question related to the text snippet.
- `answer`: The answer to the question.
- `has_answer`: A boolean indicating whether the answer is present in the text snippet.
### Example Record
```json
{
'title': 'ูุงุฆู
ุฉ ุงูุฌูุงุฆุฒ ูุงูุชุฑุดูุญุงุช ุงูุชู ุชููุชูุง ุณูุณูุฉ ุฃููุงู
ู
ุจุงุฑูุงุช ุงูุฌูุน',
'text': 'ูุงุฆู
ุฉ ุงูุฌูุงุฆุฒ ูุงูุชุฑุดูุญุงุช ุงูุชู ุชููุชูุง ุณูุณูุฉ ุฃููุงู
ู
ุจุงุฑูุงุช ุงูุฌูุน ูุงุฆู
ุฉ ุชูุณุฌูู ุงูุชุฑุดูุญุงุช ูุงูุฌูุงุฆุฒ ุงูุชู ุชููุชูุง ุณูุณูุฉ ุฃููุงู
ู
ุจุงุฑูุงุช ุงูุฌูุน ุงูู
ูุชุจุณุฉ ู
ู ุณูุณูุฉ ู
ุจุงุฑูุงุช ุงูุฌูุน ููู
ุคููุฉ ุงูุฃู
ุฑูููุฉ ุณูุฒุงู ููููุฒ. ูุงูุณูุณูุฉ ู
ู ุชูุฒูุน ุดุฑูุฉ ููููุฒุบูุช ุฅูุชุฑุชุงููู
ูุชุ ููุงู
ุจุจุทููุชูุง ุฌููููุฑ ููุฑูุณ ูู ุฏูุฑ ูุงุชููุณ ุฅููุฑุฏููุ ุฌูุด ููุชุดุฑุณู ูู ุฏูุฑ ุจูุชุง ู
ููุงุฑูู. ูุจุฏุฃุช ุงูุณูุณูุฉ ุจูููู
ู
ุจุงุฑูุงุช ุงูุฌูุน ุงูุฐู ุตุฏุฑ ูู ุงูุนุงู
2012ุ ุซู
ูููู
ูู ุงูุนุงู
2013ุ ูุชุจุนูู
ุง ูู ู
ู (2014) ูุฃุฎูุฑูุง: (2015). ูุงู ูุฌููููุฑ ููุฑูุณ ุญุตุฉ ุงูุฃุณุฏ ูู ุณุฌู ุงูุชุฑุดูุญุงุช ูุงูุฌูุงุฆุฒ ุงูุชู ูุงูุชูุง ุงูุณูุณูุฉ.',
'source': 'https://ar.wikipedia.org/wiki?curid=6237097',
'question': 'ู
ุชู ุตุฏุฑ ุงููููู
ุงูุฃูู ู
ู ุณูุณูุฉ ู
ุจุงุฑูุงุช ุงูุฌูุนุ',
'answer': 'ุนุงู
2012',
'has_answer': True
},
{
'title': 'ุณุงูุช ูุฑูุณูุณ (ููุณูููุณู)',
'text': 'ุจูุบ ุนุฏุฏ ุงูุฃุณุฑ 4,494 ุฃุณุฑุฉ ูุงูุช ูุณุจุฉ 19.8% ู
ููุง ูุฏููุง ุฃุทูุงู ุชุญุช ุณู ุงูุซุงู
ูุฉ ุนุดุฑ ุชุนูุด ู
ุนูู
ุ ูุจูุบุช ูุณุจุฉ ุงูุฃุฒูุงุฌ ุงููุงุทููู ู
ุน ุจุนุถูู
ุงูุจุนุถ 36.6% ู
ู ุฃุตู ุงูู
ุฌู
ูุน ุงูููู ููุฃุณุฑุ ููุณุจุฉ 8.7% ู
ู ุงูุฃุณุฑ ูุงู ูุฏููุง ู
ุนููุงุช ู
ู ุงูุฅูุงุซ ุฏูู ูุฌูุฏ ุดุฑููุ ุจููู
ุง ูุงูุช ูุณุจุฉ 3.9% ู
ู ุงูุฃุณุฑ ูุฏููุง ู
ุนูููู ู
ู ุงูุฐููุฑ ุฏูู ูุฌูุฏ ุดุฑููุฉ ููุงูุช ูุณุจุฉ 50.8% ู
ู ุบูุฑ ุงูุนุงุฆูุงุช. ุชุฃููุช ูุณุจุฉ 42.6% ู
ู ุฃุตู ุฌู
ูุน ุงูุฃุณุฑ ู
ู ุฃูุฑุงุฏ ููุณุจุฉ 13.7% ูุงููุง ูุนูุด ู
ุนูู
ุดุฎุต ูุญูุฏ ูุจูุบ ู
ู ุงูุนู
ุฑ 65 ุนุงู
ุงู ูู
ุง ููู. ูุจูุบ ู
ุชูุณุท ุญุฌู
ุงูุฃุณุฑุฉ ุงูู
ุนูุดูุฉ 2.80ุ ุฃู
ุง ู
ุชูุณุท ุญุฌู
ุงูุนุงุฆูุงุช ูุจูุบ 2.02.',
'source': 'https://ar.wikipedia.org/wiki?curid=2198358',
'question': 'ู
ุง ูู ุนุฏุฏ ุงูุนุงุฆูุงุช ุงูู
ููู
ุฉ ูู ุณุงูุช ูุฑูุณูุณุ',
'answer': '',
'has_answer': False
}
```
## Getting Started
To get started with the **Sadeem QnA** dataset, you can download it directly from our [Huggingface repository](https://huggingface.co/datasets/sadeem-ai/arabic-qna).
Follow the instructions there to load the dataset into your environment and begin exploring.
## Usage
This dataset is perfect for:
- Training machine learning models for Arabic question answering.
- Evaluating the performance of NLP models on Arabic text.
- Enhancing language understanding systems with a focus on Arabic.
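Because the `has_answer` flag marks whether a question is answerable from its snippet, a common first step is to split the two cases. A minimal sketch, using illustrative placeholder records that only mirror the field schema documented above (not actual dataset rows):

```python
# Hedged sketch: partition records by the `has_answer` flag so answerable and
# unanswerable questions can be handled separately. The two records below are
# placeholders following the documented schema, not real dataset rows.
records = [
    {"title": "A", "text": "...", "source": "https://ar.wikipedia.org/wiki?curid=1",
     "question": "...?", "answer": "2012", "has_answer": True},
    {"title": "B", "text": "...", "source": "https://ar.wikipedia.org/wiki?curid=2",
     "question": "...?", "answer": "", "has_answer": False},
]

answerable = [r for r in records if r["has_answer"]]
unanswerable = [r for r in records if not r["has_answer"]]
print(len(answerable), len(unanswerable))  # 1 1
```

With the `datasets` library, the same split can be expressed as `dataset.filter(lambda r: r["has_answer"])`.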
## Contributing
We welcome contributions from the community! Whether it's improving the documentation, adding more questions, or reporting issues, your help makes **Sadeem QnA** better for everyone.
## License
The **Sadeem QnA** dataset is available under the Apache License 2.0. We encourage its use for academic research, commercial applications, and beyond, provided proper attribution is given.
## Citation
If you use the **Sadeem QnA** dataset in your research, please cite it using the following format:
```bibtex
@misc{sadeem_qna,
title={Sadeem QnA: An Arabic QnA Dataset},
author={},
year={2024},
publisher={Huggingface},
howpublished={\url{https://huggingface.co/datasets/sadeem-ai/arabic-qna}},
}
```
Embark on your journey through the Arabic language with **Sadeem QnA** and unlock the potential of AI in understanding the complexity and beauty of Arabic text.
|
jlbaker361/league-maybe-runway-50 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: seed
dtype: int64
- name: steps
dtype: int64
splits:
- name: train
num_bytes: 30718722.0
num_examples: 72
download_size: 30717346
dataset_size: 30718722.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dhematillake/instructpix2pix-spatial | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: transformed_image
dtype: image
splits:
- name: train
num_bytes: 9300429930.63
num_examples: 9447
download_size: 8998126312
dataset_size: 9300429930.63
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AlekseyKorshuk/clean-dataset-preview | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: check_word_number_criteria
dtype: int64
splits:
- name: train
num_bytes: 686595411
num_examples: 337253
download_size: 283358430
dataset_size: 686595411
---
# Dataset Card for "clean-dataset-preview"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RZ412/mmlu_responses_1k | ---
dataset_info:
features:
- name: exemplar_questions
dtype: string
- name: test_questions
dtype: string
- name: subject
dtype: string
- name: answers
list:
- name: answer
dtype: string
- name: model
dtype: string
- name: reference_answers
dtype: int64
splits:
- name: train
num_bytes: 5847672
num_examples: 1000
download_size: 391151
dataset_size: 5847672
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
eduagarcia-temp/brwac_meta | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: dedup
struct:
- name: exact_norm
struct:
- name: cluster_main_idx
dtype: int64
- name: cluster_size
dtype: int64
- name: exact_hash_idx
dtype: int64
- name: is_duplicate
dtype: bool
- name: minhash
struct:
- name: cluster_main_idx
dtype: int64
- name: cluster_size
dtype: int64
- name: is_duplicate
dtype: bool
- name: minhash_idx
dtype: int64
- name: doc_id
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
splits:
- name: train
num_bytes: 18279917379
num_examples: 3530796
download_size: 11165124126
dataset_size: 18279917379
---
# Dataset Card for "brwac_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SDbiaseval/faces | ---
dataset_info:
features:
- name: model
dtype: string
- name: adjective
dtype: string
- name: profession
dtype: string
- name: 'no'
dtype: int32
- name: image_name
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 3432470489.253643
num_examples: 88708
download_size: 1970670181
dataset_size: 3432470489.253643
---
# Dataset Card for "faces"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tkuhn1988/tkuhnstyle | ---
license: afl-3.0
---
|
trl-internal-testing/mlabonne-chatml-dpo-pairs-copy | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 35914686
num_examples: 12859
download_size: 19539812
dataset_size: 35914686
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is an unmaintained copy of [`mlabonne/chatml_dpo_pairs`](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) that we use in the TRL CI for testing purposes. Please refer to the original dataset for usage and more details.
|
vietgpt-archive/vungoi_question_type1 | ---
dataset_info:
features:
- name: question_raw
dtype: string
- name: options_raw
list:
- name: answer_raw
dtype: string
- name: key
dtype: string
- name: answer_raw
struct:
- name: answer_raw
dtype: string
- name: key
dtype: string
- name: solution_raw
dtype: string
- name: metadata
struct:
- name: chapter
dtype: string
- name: difficult_degree
dtype: int64
- name: grade
dtype: string
- name: id
dtype: string
- name: idx
dtype: int64
- name: subject
dtype: string
splits:
- name: train
num_bytes: 137805960
num_examples: 112037
download_size: 78332631
dataset_size: 137805960
---
# Dataset Card for "vungoi_question_type1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Kalva014/male-asian-hairstyles | ---
license: mit
---
|
alwanrahmana/ner_scientifict_resampled | ---
license: apache-2.0
---
|
longhoang06/MC-ViMath | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: explanation
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 6086303
num_examples: 9328
download_size: 3016997
dataset_size: 6086303
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "MC-ViMath"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingartists/kendrick-lamar | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/kendrick-lamar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 2.493223 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f08637c8cfdeaab4dfbf0631424001ec.640x640x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/kendrick-lamar">
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">HuggingArtists Model</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kendrick Lamar</div>
<a href="https://genius.com/artists/kendrick-lamar">
<div style="text-align: center; font-size: 14px;">@kendrick-lamar</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/kendrick-lamar).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/kendrick-lamar")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|861| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' subsets with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/kendrick-lamar")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
tellarin-ai/ntx_llm_instructions | ---
license: cc-by-sa-4.0
language:
- ar
- de
- en
- es
- fr
- hi
- it
- ja
- ko
- nl
- pt
- sv
- tr
- zh
task_categories:
- token-classification
---
# Dataset Card for NTX v1 in the Aya format
This dataset is a conversion of the original NTX v1 data into the Aya instruction format; it is released here under the CC-BY-SA 4.0 license and conditions.
It contains data in multiple languages and this version is intended for multi-lingual LLM construction/tuning.
## Citation
If you use this dataset version, feel free to cite/footnote this Hugging Face dataset repo, but please also cite the original dataset publication.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
      author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Dataset Details
For the original NTX dataset for information extraction of numerical and temporal expressions and more details, please check the arXiv paper: https://arxiv.org/abs/2303.18103.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal-expressions part of NTX.
## Format Conversion Details
The templates used to reformat the dataset are in the ./templates-ntx directory. |
liuyanchen1015/MULTI_VALUE_rte_what_comparative | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 1345
num_examples: 5
- name: train
num_bytes: 668
num_examples: 2
download_size: 8739
dataset_size: 2013
---
# Dataset Card for "MULTI_VALUE_rte_what_comparative"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_mrpc_anaphoric_it | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 18672
num_examples: 64
- name: train
num_bytes: 42888
num_examples: 150
- name: validation
num_bytes: 2973
num_examples: 11
download_size: 54711
dataset_size: 64533
---
# Dataset Card for "MULTI_VALUE_mrpc_anaphoric_it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mohamedsaeed823/egyARA2eng | ---
license: apache-2.0
---
|
AlekseyKorshuk/chai-experiment-v0-chatml | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: do_train
dtype: bool
- name: role
dtype: string
splits:
- name: train
num_bytes: 1765802637.0
num_examples: 322064
download_size: 909265481
dataset_size: 1765802637.0
---
# Dataset Card for "chai-experiment-v0-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
metabloit/offensive-swahili-text | ---
license: mit
task_categories:
- text-classification
language:
- sw
size_categories:
- 1K<n<10K
viewer: true
---
# Overview
This dataset contains offensive and non-offensive sentences. The data was scraped from JamiiForums using a prepared wordlist.
The dataset contains sentences that consist of Swahili abusive words (from the wordlist) but does not include sarcastic abuse.
## Dataset details
The dataset is divided into train, evaluation, and test splits. The training split consists of 4,954 sentences, the evaluation split of 990 sentences, and the test split of 660 sentences.
### Dataset annotations
- 0: non-offensive
- 1: offensive
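A minimal sketch of mapping these integer annotations to their names (the mapping itself is taken directly from the list above):

```python
# Label mapping taken from the annotation scheme above.
ID2LABEL = {0: "non-offensive", 1: "offensive"}
LABEL2ID = {name: i for i, name in ID2LABEL.items()}

def decode_label(label_id: int) -> str:
    """Return the human-readable name for an annotation id."""
    return ID2LABEL[label_id]
```

This is the usual shape expected by classification heads (e.g. `id2label`/`label2id` config fields) when fine-tuning a text classifier on this data.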
|
arbml/ArSAS | ---
dataset_info:
features:
- name: '#Tweet_ID'
dtype: string
- name: Tweet_text
dtype: string
- name: Topic
dtype: string
- name: Sentiment_label_confidence
dtype: string
- name: Speech_act_label
dtype: string
- name: Speech_act_label_confidence
dtype: string
- name: label
dtype:
class_label:
names:
0: Negative
1: Neutral
2: Positive
3: Mixed
splits:
- name: train
num_bytes: 6147723
num_examples: 19897
download_size: 2998319
dataset_size: 6147723
---
# Dataset Card for "ArSAS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
0xMaka/trading-candles-subset-sc-format | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Bearish
'1': Bullish
splits:
- name: train
num_bytes: 11878595.339187406
num_examples: 155885
- name: test
num_bytes: 5090913.660812595
num_examples: 66809
download_size: 6788665
dataset_size: 16969509.0
---
# Dataset Card for "trading-candles-subset-sc-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/lmsys_chatbot_arena_conversations | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: model_a
dtype: string
- name: model_b
dtype: string
- name: winner
dtype: string
- name: judge
dtype: string
- name: conversation_a
list:
- name: content
dtype: string
- name: role
dtype: string
- name: conversation_b
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
- name: anony
dtype: bool
- name: language
dtype: string
- name: tstamp
dtype: float64
- name: openai_moderation
struct:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: toxic_chat_tag
struct:
- name: roberta-large
struct:
- name: flagged
dtype: bool
- name: probability
dtype: float64
- name: t5-large
struct:
- name: flagged
dtype: bool
- name: score
dtype: float64
splits:
- name: train
num_bytes: 81159839
num_examples: 33000
download_size: 41573740
dataset_size: 81159839
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lmsys_chatbot_arena_conversations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
notfan/fa_pim_qalog | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 463142
num_examples: 742
download_size: 264389
dataset_size: 463142
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TrainingDataPro/llm-dataset | ---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
- table-question-answering
- question-answering
- text2text-generation
- text-generation
language:
- en
- es
- ar
- el
- fr
- ja
- pt
- uk
- az
- ga
- ko
- ca
- eo
- hi
- ml
- sl
- hu
- mr
- cs
- fa
- id
- nl
- th
- de
- fi
- it
- pl
- tr
tags:
- code
- legal
- finance
---
# LLM Dataset - Prompts and Generated Texts
The dataset contains prompts and texts generated by the Large Language Models (LLMs) in **32 different languages**. The prompts are short sentences or phrases for the model to generate text. The texts generated by the LLM are responses to these prompts and can vary in **length and complexity**.
Researchers and developers can use this dataset to train and fine-tune their own language models for multilingual applications. The dataset provides a rich and diverse collection of outputs from the model, demonstrating its ability to generate coherent and contextually relevant text in multiple languages.
# 🔴 For Commercial Usage: The full version of the dataset includes **4,000,000 logs** generated in **32 languages** with different types of LLM, including Uncensored GPT; leave a request on **[TrainingData](https://trainingdata.pro/data-market/llm?utm_source=huggingface&utm_medium=cpc&utm_campaign=llm)** to buy the dataset
### Models used for text generation:
- **GPT-3.5**,
- **GPT-4**
### Languages in the dataset:
*Arabic, Azerbaijani, Catalan, Chinese, Czech, Danish, German, Greek, English, Esperanto, Spanish, Persian, Finnish, French, Irish, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Malayalam, Marathi, Dutch, Polish, Portuguese, Portuguese (Brazil), Slovak, Swedish, Thai, Turkish, Ukrainian*

# Content
CSV File includes the following data:
- **from_language**: language the prompt is made in,
- **model**: type of the model (GPT-3.5, GPT-4 and Uncensored GPT Version),
- **time**: time when the answer was generated,
- **text**: user prompt,
- **response**: response generated by the model
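As a sketch of reading the file with the standard library (the in-memory sample row below is invented; only the column names come from the list above):

```python
import csv
import io

# In-memory stand-in for the dataset CSV; the row contents are placeholders,
# but the header matches the columns documented above.
sample = io.StringIO(
    "from_language,model,time,text,response\n"
    "en,GPT-4,2023-01-01 12:00:00,Hello,Hi there!\n"
)

rows = list(csv.DictReader(sample))
# Example: keep only rows produced by a given model.
gpt4_rows = [r for r in rows if r["model"] == "GPT-4"]
```

Replacing `sample` with `open("llm_dataset.csv", newline="", encoding="utf-8")` (the filename is an assumption) reads the real file the same way.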
# 🔴 Buy the Dataset: This is just an example of the data. Leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market/llm?utm_source=huggingface&utm_medium=cpc&utm_campaign=llm)** to discuss your requirements, learn about the price and buy the dataset
## **[TrainingData](https://trainingdata.pro/data-market/llm?utm_source=huggingface&utm_medium=cpc&utm_campaign=llm)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
*keywords: dataset, machine learning, natural language processing, artificial intelligence, deep learning, neural networks, text generation, language models, openai, gpt-3, data science, predictive modeling, sentiment analysis, keyword extraction, text classification, sequence-to-sequence models, attention mechanisms, transformer architecture, word embeddings, glove embeddings, chatbots, question answering, language understanding, text mining, information retrieval, data preprocessing, feature engineering, explainable ai, model deployment* |
open-llm-leaderboard/details_Danielbrdz__Barcenas-7b | ---
pretty_name: Evaluation run of Danielbrdz/Barcenas-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Danielbrdz/Barcenas-7b](https://huggingface.co/Danielbrdz/Barcenas-7b) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Danielbrdz__Barcenas-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T23:34:07.541919](https://huggingface.co/datasets/open-llm-leaderboard/details_Danielbrdz__Barcenas-7b/blob/main/results_2023-09-17T23-34-07.541919.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004718959731543624,\n\
\ \"em_stderr\": 0.0007018360183131257,\n \"f1\": 0.0816715604026848,\n\
\ \"f1_stderr\": 0.0017762083839348887,\n \"acc\": 0.39889766050552516,\n\
\ \"acc_stderr\": 0.009497938418122394\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.004718959731543624,\n \"em_stderr\": 0.0007018360183131257,\n\
\ \"f1\": 0.0816715604026848,\n \"f1_stderr\": 0.0017762083839348887\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06141015921152388,\n \
\ \"acc_stderr\": 0.006613027536586322\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7363851617995264,\n \"acc_stderr\": 0.012382849299658464\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Danielbrdz/Barcenas-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|arc:challenge|25_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T23_34_07.541919
path:
- '**/details_harness|drop|3_2023-09-17T23-34-07.541919.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T23-34-07.541919.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T23_34_07.541919
path:
- '**/details_harness|gsm8k|5_2023-09-17T23-34-07.541919.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T23-34-07.541919.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hellaswag|10_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T22:47:45.353935.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-28T22:47:45.353935.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-28T22:47:45.353935.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T23_34_07.541919
path:
- '**/details_harness|winogrande|5_2023-09-17T23-34-07.541919.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T23-34-07.541919.parquet'
- config_name: results
data_files:
- split: 2023_08_28T22_47_45.353935
path:
- results_2023-08-28T22:47:45.353935.parquet
- split: 2023_09_17T23_34_07.541919
path:
- results_2023-09-17T23-34-07.541919.parquet
- split: latest
path:
- results_2023-09-17T23-34-07.541919.parquet
---
# Dataset Card for Evaluation run of Danielbrdz/Barcenas-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Danielbrdz/Barcenas-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Danielbrdz/Barcenas-7b](https://huggingface.co/Danielbrdz/Barcenas-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Danielbrdz__Barcenas-7b",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-09-17T23:34:07.541919](https://huggingface.co/datasets/open-llm-leaderboard/details_Danielbrdz__Barcenas-7b/blob/main/results_2023-09-17T23-34-07.541919.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.004718959731543624,
"em_stderr": 0.0007018360183131257,
"f1": 0.0816715604026848,
"f1_stderr": 0.0017762083839348887,
"acc": 0.39889766050552516,
"acc_stderr": 0.009497938418122394
},
"harness|drop|3": {
"em": 0.004718959731543624,
"em_stderr": 0.0007018360183131257,
"f1": 0.0816715604026848,
"f1_stderr": 0.0017762083839348887
},
"harness|gsm8k|5": {
"acc": 0.06141015921152388,
"acc_stderr": 0.006613027536586322
},
"harness|winogrande|5": {
"acc": 0.7363851617995264,
"acc_stderr": 0.012382849299658464
}
}
```
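Once downloaded, the results JSON above is a plain nested dictionary keyed by task name. A minimal sketch of reading the per-task accuracies back out with the standard library (the JSON string here just reproduces two of the entries shown above for illustration):

```python
import json

# Two of the aggregated entries from the results JSON above, inlined for illustration.
results_json = """
{
  "harness|gsm8k|5": {"acc": 0.06141015921152388, "acc_stderr": 0.006613027536586322},
  "harness|winogrande|5": {"acc": 0.7363851617995264, "acc_stderr": 0.012382849299658464}
}
"""

results = json.loads(results_json)

# Pull out the accuracy reported for each task.
for task, metrics in results.items():
    print(f"{task}: acc = {metrics['acc']:.4f}")
```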
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
nqv2291/en-Pile-NER-seq2seq_format | ---
dataset_info:
features:
- name: id
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 664745884
num_examples: 350974
- name: test
num_bytes: 13593212
num_examples: 7208
download_size: 147499047
dataset_size: 678339096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
zpn/zinc20 | ---
license: mit
dataset_info:
features:
- name: selfies
dtype: string
- name: smiles
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 238295712864
num_examples: 804925861
- name: validation
num_bytes: 26983481360
num_examples: 100642661
- name: test
num_bytes: 29158755632
num_examples: 101082073
download_size: 40061255073
dataset_size: 294437949856
tags:
- bio
- selfies
- smiles
- small_molecules
pretty_name: zinc20
size_categories:
- 1B<n<10B
---
# Dataset Card for Zinc20
## Dataset Description
- **Homepage:** https://zinc20.docking.org/
- **Paper:** https://pubs.acs.org/doi/10.1021/acs.jcim.0c00675
### Dataset Summary
ZINC is a publicly available database that aggregates commercially available and annotated compounds.
ZINC provides downloadable 2D and 3D versions as well as a website that enables rapid molecule lookup and analog search.
ZINC has grown from fewer than 1 million compounds in 2005 to nearly 2 billion now.
This dataset includes ~1B molecules in total. We have filtered out any compounds that could not be converted from `smiles` to `selfies` representations.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
The dataset is split into an 80/10/10 train/valid/test random split across files (the example counts roughly match these percentages)
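A file-level random split like the one described above can be sketched as follows; the shard names, seed, and exact procedure are illustrative assumptions, not the actual pipeline used to build this dataset:

```python
import random

def split_files(files, seed=0, fracs=(0.8, 0.1, 0.1)):
    """Randomly assign files to train/validation/test in roughly 80/10/10 proportions."""
    rng = random.Random(seed)
    shuffled = files[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * fracs[0])
    n_valid = int(len(shuffled) * fracs[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_valid],
            shuffled[n_train + n_valid:])

# Hypothetical shard names standing in for the real data files.
files = [f"zinc_shard_{i:03d}.parquet" for i in range(100)]
train, valid, test = split_files(files)
print(len(train), len(valid), len(test))  # 80 10 10
```

Because the split is made over whole files rather than individual rows, the per-example percentages only approximately match the 80/10/10 target.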
### Source Data
#### Initial Data Collection and Normalization
Initial data was released at https://zinc20.docking.org/. We have downloaded it, added a `selfies` field, and filtered out all molecules that could not be converted to `selfies` representations.
### Citation Information
```bibtex
@article{Irwin2020,
  doi = {10.1021/acs.jcim.0c00675},
  url = {https://doi.org/10.1021/acs.jcim.0c00675},
  year = {2020},
  month = oct,
  publisher = {American Chemical Society ({ACS})},
  volume = {60},
  number = {12},
  pages = {6065--6073},
  author = {John J. Irwin and Khanh G. Tang and Jennifer Young and Chinzorig Dandarchuluun and Benjamin R. Wong and Munkhzul Khurelbaatar and Yurii S. Moroz and John Mayfield and Roger A. Sayle},
  title = {{ZINC}20{\textemdash}A Free Ultralarge-Scale Chemical Database for Ligand Discovery},
  journal = {Journal of Chemical Information and Modeling}
}
```
### Contributions
This dataset was curated and added by [@zanussbaum](https://github.com/zanussbaum).
|
autoevaluate/autoeval-staging-eval-project-200453bd-7694959 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- masakhaner
eval_info:
task: entity_extraction
model: arnolfokam/bert-base-uncased-swa
metrics: []
dataset_name: masakhaner
dataset_config: swa
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: arnolfokam/bert-base-uncased-swa
* Dataset: masakhaner
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
emozilla/sat-reading | ---
dataset_info:
features:
- name: text
dtype: string
- name: answer
dtype: string
- name: requires_line
dtype: bool
- name: id
dtype: string
splits:
- name: train
num_bytes: 1399648
num_examples: 298
- name: test
num_bytes: 196027
num_examples: 38
- name: validation
num_bytes: 183162
num_examples: 39
download_size: 365469
dataset_size: 1778837
language:
- en
---
# Dataset Card for "sat-reading"
This dataset contains the passages and questions from the Reading part of ten publicly available SAT Practice Tests.
For more information see the blog post [Language Models vs. The SAT Reading Test](https://jeffq.com/blog/language-models-vs-the-sat-reading-test).
Each question is prefixed with the reading passage from the section that contains it.
Then, the question is prompted with `Question #:`, followed by the four possible answers.
Each entry ends with `Answer:`.
Questions which reference a diagram, chart, table, etc. have been removed (typically three per test).
In addition, there is a boolean `requires_line` feature, which indicates whether the question references specific lines within the passage.
To maintain generalizability in finetuning scenarios, `SAT READING COMPREHENSION TEST` appears at the beginning of each entry; it may be desirable to remove this depending on your intentions.
Eight tests appear in the training split; one each in the validation and test splits.
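Stripping the leading marker for finetuning, as suggested above, can be sketched like this; the sample entry is illustrative and only approximates the formatting described in this card:

```python
HEADER = "SAT READING COMPREHENSION TEST"

def strip_header(text):
    """Remove the leading marker (and any whitespace after it) from an entry."""
    if text.startswith(HEADER):
        return text[len(HEADER):].lstrip()
    return text

# Illustrative entry mimicking the documented layout: passage, question, answers, "Answer:".
entry = (
    "SAT READING COMPREHENSION TEST\n\n"
    "Passage text...\n\n"
    "Question 1: What is the main idea of the passage?\n"
    "A) ... B) ... C) ... D) ...\n"
    "Answer:"
)
print(strip_header(entry).splitlines()[0])  # Passage text...
```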
|
sgoedecke/5s_birdcall_samples_16k | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype: string
- name: input_values
sequence: float32
- name: attention_mask
sequence: int32
splits:
- name: train
num_bytes: 2149373508.375
num_examples: 4797
- name: test
num_bytes: 2149821590.25
num_examples: 4798
download_size: 3704408556
dataset_size: 4299195098.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Multimodal-Fatima/Caltech101_not_background_test_facebook_opt_350m_Attributes_Caption_ns_5647_random | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_1_bs_16
num_bytes: 85893233.125
num_examples: 5647
- name: fewshot_3_bs_16
num_bytes: 88896569.125
num_examples: 5647
download_size: 168043271
dataset_size: 174789802.25
---
# Dataset Card for "Caltech101_not_background_test_facebook_opt_350m_Attributes_Caption_ns_5647_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-futin__feed-top_en_-3f631c-2246071662 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: futin/feed
dataset_config: top_en_
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/feed
* Config: top_en_
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
Back-up/Topic-Prediction-Context-With-Random-Prompts-in-the-end | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: topic
struct:
- name: topic
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: instruction
dtype: string
- name: prompt_name
dtype: string
splits:
- name: train
num_bytes: 248398
num_examples: 101
download_size: 125460
dataset_size: 248398
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Topic-Prediction-Context-With-Random-Prompts-in-the-end"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NghiemAbe/Legal_Corpus | ---
dataset_info:
features:
- name: id
dtype: string
- name: ministry
dtype: string
- name: type
dtype: string
- name: name
dtype: string
- name: chapter_id
dtype: string
- name: chapter_name
dtype: string
- name: article
dtype: string
- name: content
dtype: string
splits:
- name: corpus
num_bytes: 1071858956
num_examples: 515188
download_size: 294811460
dataset_size: 1071858956
---
# Dataset Card for "Legal_Corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DazMashaly/test_data | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 365263950.94
num_examples: 5108
download_size: 354753479
dataset_size: 365263950.94
---
# Dataset Card for "test_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fancyzhx/dbpedia_14 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: dbpedia
pretty_name: DBpedia
dataset_info:
config_name: dbpedia_14
features:
- name: label
dtype:
class_label:
names:
'0': Company
'1': EducationalInstitution
'2': Artist
'3': Athlete
'4': OfficeHolder
'5': MeanOfTransportation
'6': Building
'7': NaturalPlace
'8': Village
'9': Animal
'10': Plant
'11': Album
'12': Film
'13': WrittenWork
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 178428970
num_examples: 560000
- name: test
num_bytes: 22310285
num_examples: 70000
download_size: 119424374
dataset_size: 200739255
configs:
- config_name: dbpedia_14
data_files:
- split: train
path: dbpedia_14/train-*
- split: test
path: dbpedia_14/test-*
default: true
---
# Dataset Card for DBpedia14
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** https://github.com/zhangxiangxiao/Crepe
- **Paper:** https://arxiv.org/abs/1509.01626
- **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu)
### Dataset Summary
The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes
from DBpedia 2014. They are listed in classes.txt. From each of these 14 ontology classes, we
randomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size
of the training dataset is 560,000 and testing dataset 70,000.
There are 3 columns in the dataset (same for train and test splits), corresponding to class index
(1 to 14), title and content. The title and content are escaped using double quotes ("), and any
internal double quote is escaped by 2 double quotes (""). There are no new lines in title or content.
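The escaping scheme described above can be undone with a small helper when parsing the raw CSV files directly (the `datasets` loader already returns unescaped strings, so this only matters for manual parsing). This is an illustrative sketch; the sample field is invented, not taken from the dataset.

```python
# Sketch: undoing CSV-style escaping, where a field is wrapped in double
# quotes (") and internal double quotes are doubled ("").
def unescape_field(field: str) -> str:
    """Strip the surrounding quotes, then collapse doubled quotes."""
    if field.startswith('"') and field.endswith('"'):
        field = field[1:-1]
    return field.replace('""', '"')

raw = '"He was nicknamed ""The Boss"" by fans."'
print(unescape_field(raw))  # He was nicknamed "The Boss" by fans.
```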
### Supported Tasks and Leaderboards
- `text-classification`, `topic-classification`: The dataset is mainly used for text classification: given the content
and the title, predict the correct topic.
### Languages
Although DBpedia is a multilingual knowledge base, the DBpedia14 extract contains mainly English data; other languages may appear
(e.g. a film whose title is not originally English).
## Dataset Structure
### Data Instances
A typical data point comprises a title, a content, and the corresponding label.
An example from the DBpedia test set looks as follows:
```
{
'title':'',
'content':" TY KU /taɪkuː/ is an American alcoholic beverage company that specializes in sake and other spirits. The privately-held company was founded in 2004 and is headquartered in New York City New York. While based in New York TY KU's beverages are made in Japan through a joint venture with two sake breweries. Since 2011 TY KU's growth has extended its products into all 50 states.",
'label':0
}
```
### Data Fields
- 'title': a string containing the title of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes ("").
- 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes ("").
- 'label': one of the 14 possible topics.
### Data Splits
The data is split into a training and test set.
For each of the 14 classes we have 40,000 training samples and 5,000 testing samples.
Therefore, the total size of the training dataset is 560,000 and testing dataset 70,000.
## Dataset Creation
### Curation Rationale
The DBPedia ontology classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Source Data
#### Initial Data Collection and Normalization
Source data is taken from DBpedia: https://wiki.dbpedia.org/develop/datasets
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The DBPedia ontology classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Licensing Information
The DBPedia ontology classification dataset is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License.
### Citation Information
```
@inproceedings{NIPS2015_250cf8b5,
author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
booktitle = {Advances in Neural Information Processing Systems},
editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R. Garnett},
pages = {},
publisher = {Curran Associates, Inc.},
title = {Character-level Convolutional Networks for Text Classification},
url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf},
volume = {28},
year = {2015}
}
```
Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia – a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic web 6, no. 2 (2015): 167-195.
### Contributions
Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset. |
satendra4u2022/dpo_precise_datasets | ---
license: mit
---
|
vertigo23/njogerera_english_luganda_corpus | ---
license: unknown
size_categories:
- 10K<n<100K
--- |
aryamannningombam/indian-female-tts-dataset | ---
dataset_info:
features:
- name: file
dtype: string
- name: text
dtype: string
- name: tag
dtype: string
- name: file_path
dtype: string
- name: y
sequence: float32
- name: emotional_embedding
sequence: int64
- name: non_characters
dtype: 'null'
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 923807734
num_examples: 2843
download_size: 911770863
dataset_size: 923807734
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Svenni551/NPC_talks | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 38012
num_examples: 10
download_size: 31737
dataset_size: 38012
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NickyNicky/h2ogpt-oig-oasst1-instruct-cleaned-v3 | ---
dataset_info:
features:
- name: list_json
list:
- name: bot
dtype: string
- name: human
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 164763347
num_examples: 269406
download_size: 105519939
dataset_size: 164763347
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
benchang1110/sciencetw | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: article
dtype: string
splits:
- name: train
num_bytes: 157012034
num_examples: 26056
download_size: 105424999
dataset_size: 157012034
---
# Dataset Card for "sciencetw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NUS-IDS/beyond_blue | ---
configs:
- config_name: default
data_files:
- split: anxiety
path: data/anxiety-*
- split: depression
path: data/depression-*
- split: ptsd
path: data/ptsd-*
dataset_info:
features:
- name: url
dtype: string
- name: comments
list:
- name: author
dtype: string
- name: content
dtype: string
- name: date
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: content
dtype: string
- name: author
dtype: string
splits:
- name: anxiety
num_bytes: 56172807
num_examples: 6943
- name: depression
num_bytes: 60224734
num_examples: 6008
- name: ptsd
num_bytes: 21141031
num_examples: 1816
download_size: 68731517
dataset_size: 137538572
---
# Dataset Card for "beyond_blue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nbtpj/Movies_and_TV | ---
dataset_info:
features:
- name: overall
dtype: float64
- name: verified
dtype: bool
- name: reviewTime
dtype: string
- name: reviewerID
dtype: string
- name: asin
dtype: string
- name: style
dtype: string
- name: reviewerName
dtype: string
- name: reviewText
dtype: string
- name: summary
dtype: string
- name: unixReviewTime
dtype: int64
- name: vote
dtype: string
- name: image
sequence: string
splits:
- name: train
num_bytes: 4058038162
num_examples: 8765568
download_size: 2295911945
dataset_size: 4058038162
---
# Dataset Card for "Movies_and_TV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/nagayoshi_subaru_theidolmstermillionlive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nagayoshi_subaru/永吉昴 (THE iDOLM@STER: Million Live!)
This is the dataset of nagayoshi_subaru/永吉昴 (THE iDOLM@STER: Million Live!), containing 229 images and their tags.
The core tags of this character are `short_hair, green_hair, red_eyes, bangs, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 229 | 242.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagayoshi_subaru_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 229 | 154.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagayoshi_subaru_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 519 | 319.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagayoshi_subaru_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 229 | 219.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagayoshi_subaru_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 519 | 431.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagayoshi_subaru_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nagayoshi_subaru_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, solo, long_sleeves, white_gloves, ascot, closed_mouth, crown, hat, purple_eyes, white_background, white_jacket, white_pants, blush, epaulettes, grin, hair_between_eyes, hair_ornament, simple_background, upper_body |
| 1 | 12 |  |  |  |  |  | 1girl, looking_at_viewer, open_mouth, :d, solo, purple_eyes, dress, jewelry, necktie |
| 2 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, purple_eyes, solo, open_mouth, :d, baseball, letterman_jacket, shorts |
| 3 | 9 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, open_mouth, skirt, smile, jewelry |
| 4 | 12 |  |  |  |  |  | 1girl, hair_between_eyes, blush, looking_at_viewer, simple_background, solo, smile, white_background, white_shirt, upper_body, open_mouth, collarbone, jacket, short_sleeves |
| 5 | 5 |  |  |  |  |  | 1girl, blush, solo, cleavage, looking_at_viewer, medium_breasts, navel, collarbone, hair_between_eyes, sitting, smile, striped_bikini, beachball, one_eye_closed, open_mouth, partially_submerged, short_shorts, water |
| 6 | 6 |  |  |  |  |  | 1girl, blush, censored, nipples, pussy, small_breasts, hetero, on_back, open_mouth, solo_focus, 1boy, spread_legs, hair_between_eyes, navel, nude, on_bed, penis, pillow, sex, sweat |
| 7 | 5 |  |  |  |  |  | 1girl, detached_collar, playboy_bunny, rabbit_ears, solo, fake_animal_ears, strapless_leotard, wrist_cuffs, blush, indian_style, looking_at_viewer, pantyhose, rabbit_tail, simple_background, small_breasts, black_bowtie, black_leotard, cleavage, covered_navel, purple_eyes, red_bowtie, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | long_sleeves | white_gloves | ascot | closed_mouth | crown | hat | purple_eyes | white_background | white_jacket | white_pants | blush | epaulettes | grin | hair_between_eyes | hair_ornament | simple_background | upper_body | open_mouth | :d | dress | jewelry | necktie | baseball | letterman_jacket | shorts | skirt | smile | white_shirt | collarbone | jacket | short_sleeves | cleavage | medium_breasts | navel | sitting | striped_bikini | beachball | one_eye_closed | partially_submerged | short_shorts | water | censored | nipples | pussy | small_breasts | hetero | on_back | solo_focus | 1boy | spread_legs | nude | on_bed | penis | pillow | sex | sweat | detached_collar | playboy_bunny | rabbit_ears | fake_animal_ears | strapless_leotard | wrist_cuffs | indian_style | pantyhose | rabbit_tail | black_bowtie | black_leotard | covered_navel | red_bowtie |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:---------------|:---------------|:--------|:---------------|:--------|:------|:--------------|:-------------------|:---------------|:--------------|:--------|:-------------|:-------|:--------------------|:----------------|:--------------------|:-------------|:-------------|:-----|:--------|:----------|:----------|:-----------|:-------------------|:---------|:--------|:--------|:--------------|:-------------|:---------|:----------------|:-----------|:-----------------|:--------|:----------|:-----------------|:------------|:-----------------|:----------------------|:---------------|:--------|:-----------|:----------|:--------|:----------------|:---------|:----------|:-------------|:-------|:--------------|:-------|:---------|:--------|:---------|:------|:--------|:------------------|:----------------|:--------------|:-------------------|:--------------------|:--------------|:---------------|:------------|:--------------|:---------------|:----------------|:----------------|:-------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | X | | | | | | | X | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | | | | | | | X | | | | | | | | | | | X | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | X | X | | | | | | | | | | | X | | | | | | | X | | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 12 |  |  |  |  |  | X | X | X | | | | | | | | X | | | X | | | X | | X | X | X | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | X | | | | | | | | | | | X | | | X | | | | X | | | | | | | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | | | | | | | | | | X | | | X | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | X | X | | | | | | | X | X | | | X | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
acheong08/nsfw_reddit | ---
license: openrail
---
|
yangyz1230/promoter_tata | ---
dataset_info:
features:
- name: name
dtype: string
- name: sequence
dtype: string
- name: chrom
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: strand
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 954157
num_examples: 2732
- name: test
num_bytes: 115635
num_examples: 332
download_size: 519991
dataset_size: 1069792
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-23000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1080073
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jacquelinehe/enron-emails | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1701056967
num_examples: 926132
download_size: 972216068
dataset_size: 1701056967
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_wandb__pruned_mistral | ---
pretty_name: Evaluation run of wandb/pruned_mistral
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [wandb/pruned_mistral](https://huggingface.co/wandb/pruned_mistral) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_wandb__pruned_mistral\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-21T16:40:40.526366](https://huggingface.co/datasets/open-llm-leaderboard/details_wandb__pruned_mistral/blob/main/results_2024-03-21T16-40-40.526366.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2677672008134633,\n\
\ \"acc_stderr\": 0.031196121816193134,\n \"acc_norm\": 0.26979863634977536,\n\
\ \"acc_norm_stderr\": 0.03200972209199791,\n \"mc1\": 0.24724602203182375,\n\
\ \"mc1_stderr\": 0.015102404797359652,\n \"mc2\": 0.4108681590294748,\n\
\ \"mc2_stderr\": 0.014542287705752187\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.24829351535836178,\n \"acc_stderr\": 0.012624912868089764,\n\
\ \"acc_norm\": 0.2832764505119454,\n \"acc_norm_stderr\": 0.013167478735134575\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.37353116908982276,\n\
\ \"acc_stderr\": 0.004827526584889684,\n \"acc_norm\": 0.46345349531965746,\n\
\ \"acc_norm_stderr\": 0.004976434387469965\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3037037037037037,\n\
\ \"acc_stderr\": 0.039725528847851375,\n \"acc_norm\": 0.3037037037037037,\n\
\ \"acc_norm_stderr\": 0.039725528847851375\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123415,\n\
\ \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123415\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.18,\n\
\ \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.18,\n \
\ \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2792452830188679,\n \"acc_stderr\": 0.027611163402399715,\n\
\ \"acc_norm\": 0.2792452830188679,\n \"acc_norm_stderr\": 0.027611163402399715\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.24305555555555555,\n\
\ \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.24305555555555555,\n\
\ \"acc_norm_stderr\": 0.0358687928008034\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.24277456647398843,\n\
\ \"acc_stderr\": 0.0326926380614177,\n \"acc_norm\": 0.24277456647398843,\n\
\ \"acc_norm_stderr\": 0.0326926380614177\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237653,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237653\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2127659574468085,\n \"acc_stderr\": 0.02675439134803977,\n\
\ \"acc_norm\": 0.2127659574468085,\n \"acc_norm_stderr\": 0.02675439134803977\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n\
\ \"acc_stderr\": 0.0404933929774814,\n \"acc_norm\": 0.24561403508771928,\n\
\ \"acc_norm_stderr\": 0.0404933929774814\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2482758620689655,\n \"acc_stderr\": 0.03600105692727772,\n\
\ \"acc_norm\": 0.2482758620689655,\n \"acc_norm_stderr\": 0.03600105692727772\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2619047619047619,\n \"acc_stderr\": 0.022644212615525218,\n \"\
acc_norm\": 0.2619047619047619,\n \"acc_norm_stderr\": 0.022644212615525218\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15079365079365079,\n\
\ \"acc_stderr\": 0.03200686497287392,\n \"acc_norm\": 0.15079365079365079,\n\
\ \"acc_norm_stderr\": 0.03200686497287392\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.25806451612903225,\n \"acc_stderr\": 0.024892469172462836,\n \"\
acc_norm\": 0.25806451612903225,\n \"acc_norm_stderr\": 0.024892469172462836\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.24630541871921183,\n \"acc_stderr\": 0.030315099285617715,\n \"\
acc_norm\": 0.24630541871921183,\n \"acc_norm_stderr\": 0.030315099285617715\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.20606060606060606,\n \"acc_stderr\": 0.0315841532404771,\n\
\ \"acc_norm\": 0.20606060606060606,\n \"acc_norm_stderr\": 0.0315841532404771\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.25757575757575757,\n \"acc_stderr\": 0.031156269519646857,\n \"\
acc_norm\": 0.25757575757575757,\n \"acc_norm_stderr\": 0.031156269519646857\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.30569948186528495,\n \"acc_stderr\": 0.03324837939758159,\n\
\ \"acc_norm\": 0.30569948186528495,\n \"acc_norm_stderr\": 0.03324837939758159\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.36153846153846153,\n \"acc_stderr\": 0.02435958146539699,\n\
\ \"acc_norm\": 0.36153846153846153,\n \"acc_norm_stderr\": 0.02435958146539699\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.23333333333333334,\n \"acc_stderr\": 0.02578787422095931,\n \
\ \"acc_norm\": 0.23333333333333334,\n \"acc_norm_stderr\": 0.02578787422095931\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.27310924369747897,\n \"acc_stderr\": 0.028942004040998167,\n\
\ \"acc_norm\": 0.27310924369747897,\n \"acc_norm_stderr\": 0.028942004040998167\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.271523178807947,\n \"acc_stderr\": 0.03631329803969653,\n \"acc_norm\"\
: 0.271523178807947,\n \"acc_norm_stderr\": 0.03631329803969653\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.30458715596330277,\n\
\ \"acc_stderr\": 0.019732299420354038,\n \"acc_norm\": 0.30458715596330277,\n\
\ \"acc_norm_stderr\": 0.019732299420354038\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n\
\ \"acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.21568627450980393,\n \"acc_stderr\": 0.028867431449849303,\n \"\
acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.028867431449849303\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.23628691983122363,\n \"acc_stderr\": 0.027652153144159263,\n \
\ \"acc_norm\": 0.23628691983122363,\n \"acc_norm_stderr\": 0.027652153144159263\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3004484304932735,\n\
\ \"acc_stderr\": 0.03076935200822914,\n \"acc_norm\": 0.3004484304932735,\n\
\ \"acc_norm_stderr\": 0.03076935200822914\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2748091603053435,\n \"acc_stderr\": 0.03915345408847836,\n\
\ \"acc_norm\": 0.2748091603053435,\n \"acc_norm_stderr\": 0.03915345408847836\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.371900826446281,\n \"acc_stderr\": 0.044120158066245044,\n \"\
acc_norm\": 0.371900826446281,\n \"acc_norm_stderr\": 0.044120158066245044\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.24074074074074073,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.24074074074074073,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.27607361963190186,\n \"acc_stderr\": 0.0351238528370505,\n\
\ \"acc_norm\": 0.27607361963190186,\n \"acc_norm_stderr\": 0.0351238528370505\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.24107142857142858,\n\
\ \"acc_stderr\": 0.04059867246952689,\n \"acc_norm\": 0.24107142857142858,\n\
\ \"acc_norm_stderr\": 0.04059867246952689\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.22330097087378642,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.22330097087378642,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.027236013946196697,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.027236013946196697\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23754789272030652,\n\
\ \"acc_stderr\": 0.015218733046150193,\n \"acc_norm\": 0.23754789272030652,\n\
\ \"acc_norm_stderr\": 0.015218733046150193\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24277456647398843,\n \"acc_stderr\": 0.023083658586984204,\n\
\ \"acc_norm\": 0.24277456647398843,\n \"acc_norm_stderr\": 0.023083658586984204\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2748603351955307,\n\
\ \"acc_stderr\": 0.014931316703220508,\n \"acc_norm\": 0.2748603351955307,\n\
\ \"acc_norm_stderr\": 0.014931316703220508\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.238562091503268,\n \"acc_stderr\": 0.024404394928087873,\n\
\ \"acc_norm\": 0.238562091503268,\n \"acc_norm_stderr\": 0.024404394928087873\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.26366559485530544,\n\
\ \"acc_stderr\": 0.02502553850053234,\n \"acc_norm\": 0.26366559485530544,\n\
\ \"acc_norm_stderr\": 0.02502553850053234\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.023468429832451166,\n\
\ \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.023468429832451166\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2553191489361702,\n \"acc_stderr\": 0.026011992930902006,\n \
\ \"acc_norm\": 0.2553191489361702,\n \"acc_norm_stderr\": 0.026011992930902006\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23989569752281617,\n\
\ \"acc_stderr\": 0.010906282617981634,\n \"acc_norm\": 0.23989569752281617,\n\
\ \"acc_norm_stderr\": 0.010906282617981634\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4485294117647059,\n \"acc_stderr\": 0.030211479609121593,\n\
\ \"acc_norm\": 0.4485294117647059,\n \"acc_norm_stderr\": 0.030211479609121593\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.24019607843137256,\n \"acc_stderr\": 0.01728276069516742,\n \
\ \"acc_norm\": 0.24019607843137256,\n \"acc_norm_stderr\": 0.01728276069516742\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.22727272727272727,\n\
\ \"acc_stderr\": 0.040139645540727756,\n \"acc_norm\": 0.22727272727272727,\n\
\ \"acc_norm_stderr\": 0.040139645540727756\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.028920583220675585,\n\
\ \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.028920583220675585\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n\
\ \"acc_stderr\": 0.030147775935409224,\n \"acc_norm\": 0.23880597014925373,\n\
\ \"acc_norm_stderr\": 0.030147775935409224\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2891566265060241,\n\
\ \"acc_stderr\": 0.03529486801511115,\n \"acc_norm\": 0.2891566265060241,\n\
\ \"acc_norm_stderr\": 0.03529486801511115\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.32748538011695905,\n \"acc_stderr\": 0.035993357714560276,\n\
\ \"acc_norm\": 0.32748538011695905,\n \"acc_norm_stderr\": 0.035993357714560276\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24724602203182375,\n\
\ \"mc1_stderr\": 0.015102404797359652,\n \"mc2\": 0.4108681590294748,\n\
\ \"mc2_stderr\": 0.014542287705752187\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5390686661404893,\n \"acc_stderr\": 0.014009521680980318\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \
\ \"acc_stderr\": 0.0010717793485492608\n }\n}\n```"
repo_url: https://huggingface.co/wandb/pruned_mistral
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|arc:challenge|25_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|gsm8k|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hellaswag|10_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T16-40-40.526366.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T16-40-40.526366.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- '**/details_harness|winogrande|5_2024-03-21T16-40-40.526366.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-21T16-40-40.526366.parquet'
- config_name: results
data_files:
- split: 2024_03_21T16_40_40.526366
path:
- results_2024-03-21T16-40-40.526366.parquet
- split: latest
path:
- results_2024-03-21T16-40-40.526366.parquet
---
# Dataset Card for Evaluation run of wandb/pruned_mistral
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [wandb/pruned_mistral](https://huggingface.co/wandb/pruned_mistral) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_wandb__pruned_mistral",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-21T16:40:40.526366](https://huggingface.co/datasets/open-llm-leaderboard/details_wandb__pruned_mistral/blob/main/results_2024-03-21T16-40-40.526366.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the timestamped splits and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.2677672008134633,
"acc_stderr": 0.031196121816193134,
"acc_norm": 0.26979863634977536,
"acc_norm_stderr": 0.03200972209199791,
"mc1": 0.24724602203182375,
"mc1_stderr": 0.015102404797359652,
"mc2": 0.4108681590294748,
"mc2_stderr": 0.014542287705752187
},
"harness|arc:challenge|25": {
"acc": 0.24829351535836178,
"acc_stderr": 0.012624912868089764,
"acc_norm": 0.2832764505119454,
"acc_norm_stderr": 0.013167478735134575
},
"harness|hellaswag|10": {
"acc": 0.37353116908982276,
"acc_stderr": 0.004827526584889684,
"acc_norm": 0.46345349531965746,
"acc_norm_stderr": 0.004976434387469965
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.039725528847851375,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.039725528847851375
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123415,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123415
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2792452830188679,
"acc_stderr": 0.027611163402399715,
"acc_norm": 0.2792452830188679,
"acc_norm_stderr": 0.027611163402399715
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.24305555555555555,
"acc_stderr": 0.0358687928008034,
"acc_norm": 0.24305555555555555,
"acc_norm_stderr": 0.0358687928008034
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.24277456647398843,
"acc_stderr": 0.0326926380614177,
"acc_norm": 0.24277456647398843,
"acc_norm_stderr": 0.0326926380614177
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237653,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237653
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2127659574468085,
"acc_stderr": 0.02675439134803977,
"acc_norm": 0.2127659574468085,
"acc_norm_stderr": 0.02675439134803977
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.0404933929774814,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.0404933929774814
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2482758620689655,
"acc_stderr": 0.03600105692727772,
"acc_norm": 0.2482758620689655,
"acc_norm_stderr": 0.03600105692727772
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2619047619047619,
"acc_stderr": 0.022644212615525218,
"acc_norm": 0.2619047619047619,
"acc_norm_stderr": 0.022644212615525218
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15079365079365079,
"acc_stderr": 0.03200686497287392,
"acc_norm": 0.15079365079365079,
"acc_norm_stderr": 0.03200686497287392
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.25806451612903225,
"acc_stderr": 0.024892469172462836,
"acc_norm": 0.25806451612903225,
"acc_norm_stderr": 0.024892469172462836
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.24630541871921183,
"acc_stderr": 0.030315099285617715,
"acc_norm": 0.24630541871921183,
"acc_norm_stderr": 0.030315099285617715
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.20606060606060606,
"acc_stderr": 0.0315841532404771,
"acc_norm": 0.20606060606060606,
"acc_norm_stderr": 0.0315841532404771
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.25757575757575757,
"acc_stderr": 0.031156269519646857,
"acc_norm": 0.25757575757575757,
"acc_norm_stderr": 0.031156269519646857
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.30569948186528495,
"acc_stderr": 0.03324837939758159,
"acc_norm": 0.30569948186528495,
"acc_norm_stderr": 0.03324837939758159
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.36153846153846153,
"acc_stderr": 0.02435958146539699,
"acc_norm": 0.36153846153846153,
"acc_norm_stderr": 0.02435958146539699
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.23333333333333334,
"acc_stderr": 0.02578787422095931,
"acc_norm": 0.23333333333333334,
"acc_norm_stderr": 0.02578787422095931
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.27310924369747897,
"acc_stderr": 0.028942004040998167,
"acc_norm": 0.27310924369747897,
"acc_norm_stderr": 0.028942004040998167
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.271523178807947,
"acc_stderr": 0.03631329803969653,
"acc_norm": 0.271523178807947,
"acc_norm_stderr": 0.03631329803969653
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.30458715596330277,
"acc_stderr": 0.019732299420354038,
"acc_norm": 0.30458715596330277,
"acc_norm_stderr": 0.019732299420354038
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.028867431449849303,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.028867431449849303
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.23628691983122363,
"acc_stderr": 0.027652153144159263,
"acc_norm": 0.23628691983122363,
"acc_norm_stderr": 0.027652153144159263
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.3004484304932735,
"acc_stderr": 0.03076935200822914,
"acc_norm": 0.3004484304932735,
"acc_norm_stderr": 0.03076935200822914
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2748091603053435,
"acc_stderr": 0.03915345408847836,
"acc_norm": 0.2748091603053435,
"acc_norm_stderr": 0.03915345408847836
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.371900826446281,
"acc_stderr": 0.044120158066245044,
"acc_norm": 0.371900826446281,
"acc_norm_stderr": 0.044120158066245044
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.27607361963190186,
"acc_stderr": 0.0351238528370505,
"acc_norm": 0.27607361963190186,
"acc_norm_stderr": 0.0351238528370505
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.24107142857142858,
"acc_stderr": 0.04059867246952689,
"acc_norm": 0.24107142857142858,
"acc_norm_stderr": 0.04059867246952689
},
"harness|hendrycksTest-management|5": {
"acc": 0.22330097087378642,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.22330097087378642,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.027236013946196697,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.027236013946196697
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23754789272030652,
"acc_stderr": 0.015218733046150193,
"acc_norm": 0.23754789272030652,
"acc_norm_stderr": 0.015218733046150193
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24277456647398843,
"acc_stderr": 0.023083658586984204,
"acc_norm": 0.24277456647398843,
"acc_norm_stderr": 0.023083658586984204
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2748603351955307,
"acc_stderr": 0.014931316703220508,
"acc_norm": 0.2748603351955307,
"acc_norm_stderr": 0.014931316703220508
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.238562091503268,
"acc_stderr": 0.024404394928087873,
"acc_norm": 0.238562091503268,
"acc_norm_stderr": 0.024404394928087873
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.26366559485530544,
"acc_stderr": 0.02502553850053234,
"acc_norm": 0.26366559485530544,
"acc_norm_stderr": 0.02502553850053234
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.23148148148148148,
"acc_stderr": 0.023468429832451166,
"acc_norm": 0.23148148148148148,
"acc_norm_stderr": 0.023468429832451166
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2553191489361702,
"acc_stderr": 0.026011992930902006,
"acc_norm": 0.2553191489361702,
"acc_norm_stderr": 0.026011992930902006
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.23989569752281617,
"acc_stderr": 0.010906282617981634,
"acc_norm": 0.23989569752281617,
"acc_norm_stderr": 0.010906282617981634
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4485294117647059,
"acc_stderr": 0.030211479609121593,
"acc_norm": 0.4485294117647059,
"acc_norm_stderr": 0.030211479609121593
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.24019607843137256,
"acc_stderr": 0.01728276069516742,
"acc_norm": 0.24019607843137256,
"acc_norm_stderr": 0.01728276069516742
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.040139645540727756,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.040139645540727756
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.028920583220675585,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.028920583220675585
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23880597014925373,
"acc_stderr": 0.030147775935409224,
"acc_norm": 0.23880597014925373,
"acc_norm_stderr": 0.030147775935409224
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2891566265060241,
"acc_stderr": 0.03529486801511115,
"acc_norm": 0.2891566265060241,
"acc_norm_stderr": 0.03529486801511115
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.32748538011695905,
"acc_stderr": 0.035993357714560276,
"acc_norm": 0.32748538011695905,
"acc_norm_stderr": 0.035993357714560276
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24724602203182375,
"mc1_stderr": 0.015102404797359652,
"mc2": 0.4108681590294748,
"mc2_stderr": 0.014542287705752187
},
"harness|winogrande|5": {
"acc": 0.5390686661404893,
"acc_stderr": 0.014009521680980318
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.0010717793485492608
}
}
```
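The per-task entries in the results JSON above share a uniform shape (`"harness|<task>|<n_shots>"` keys mapping to metric dicts), so custom aggregates are easy to compute once the JSON is parsed. A minimal sketch — the two values below are copied from the `arc:challenge` and `hellaswag` entries above, with `results` standing in for the full parsed JSON:

```python
# Average a metric across per-task entries of a parsed results JSON.
# `results` here is a two-task excerpt of the JSON shown above; in practice
# it would be json.load()-ed from the results file linked in this section.
results = {
    "harness|arc:challenge|25": {"acc_norm": 0.2832764505119454},
    "harness|hellaswag|10": {"acc_norm": 0.46345349531965746},
}

# Macro-average: each task contributes equally, regardless of its size.
macro_avg = sum(v["acc_norm"] for v in results.values()) / len(results)
print(f"acc_norm macro-average over {len(results)} tasks: {macro_avg:.4f}")
```

Note this is an unweighted macro-average over the selected tasks; the leaderboard's own aggregation (the `"all"` entry) averages over its full task set, so the numbers will differ.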
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
wav2gloss/cocoon-glosses | ---
license: cc-by-nc-nd-4.0
task_categories:
- automatic-speech-recognition
--- |
samurai-architects/materials-blip | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 32878470.0
num_examples: 10
download_size: 32881580
dataset_size: 32878470.0
---
# Dataset Card for "materials-blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stenio123/1000GovPdfLibraryCongress-vector | ---
license: openrail
dataset_info:
features:
- name: values
sequence: float64
- name: metadata
struct:
- name: pdf_file
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30704920
num_examples: 981
download_size: 14917630
dataset_size: 30704920
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jaejoo/llama-2-ko-law | ---
license: apache-2.0
language:
- ko
tags:
- legal
size_categories:
- 1K<n<10K
--- |
ndr01/jeans-captioning-dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: 'Unnamed: 0'
dtype: int64
- name: description
dtype: string
splits:
- name: train
num_bytes: 593273258.0
num_examples: 179
download_size: 588198577
dataset_size: 593273258.0
---
# Dataset Card for "jeans-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |